As a Berlin-based startup that develops computer vision software, we broadly welcome the rules and actions for the development and use of AI proposed by the EU. By focusing on building trust between software companies, business users and citizens, the EU has grasped the essence of what is needed to drive innovation and growth in this field.
The timing is also important. The EU is demonstrating leadership at a time when economies are suffering a severe reaction to the coronavirus pandemic, and a flourishing AI and tech sector could be the adrenalin shot needed to put the economy back on its feet.
But like any proposal – or medicine – we should be wary of the side effects. The EU is most concerned about ‘high-risk’ activities: “High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.” Yet humans still programme and train these systems, and even with all the necessary checks and balances in place, mistakes will be made.
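It is worth pausing on what “communicating accuracy metrics to users” might mean in practice. As a minimal, dependency-free sketch – the evaluation labels below are invented purely for illustration – a vendor could publish a summary like this alongside a model:

```python
def metrics_report(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Compute basic binary-classification metrics for a user-facing report."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Invented ground-truth and predicted labels, purely for illustration.
truth = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0]
print(metrics_report(truth, predicted))
# {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75}
```

Even a simple report like this surfaces the trade-offs – a model can score well on accuracy while still missing cases (recall) or raising false alarms (precision) – which is exactly why the numbers alone cannot replace human judgement.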
The importance of human oversight
In nearly all cases, AI and machine-learning algorithms will return some number of false positives. This makes it necessary to build human oversight, and with it accountability, into high-risk tasks that have implications for our social, legal and civil rights.
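As a minimal sketch of what such oversight could look like in practice, the snippet below routes low-confidence predictions from a hypothetical vision model into a human review queue. The Prediction type, the 0.90 threshold and the labels are all illustrative assumptions, not anything prescribed by the proposal:

```python
from dataclasses import dataclass

# Illustrative threshold below which a prediction must be reviewed by a
# human operator; in practice this would be tuned per task and risk level.
REVIEW_THRESHOLD = 0.90

@dataclass
class Prediction:
    label: str
    confidence: float  # model score in [0, 1]

def triage(predictions: list[Prediction]) -> tuple[list[Prediction], list[Prediction]]:
    """Split predictions into auto-accepted and human-review queues.

    High-confidence results pass through automatically; everything else
    is routed to a person, keeping a human accountable for edge cases.
    """
    auto, review = [], []
    for p in predictions:
        (auto if p.confidence >= REVIEW_THRESHOLD else review).append(p)
    return auto, review

# Example: two confident detections and one borderline case.
batch = [
    Prediction("pedestrian", 0.98),
    Prediction("cyclist", 0.95),
    Prediction("pedestrian", 0.61),
]
accepted, queued = triage(batch)
print(f"auto-accepted: {len(accepted)}, sent to human review: {len(queued)}")
```

The design point is not the threshold itself but the queue: there is always a defined place where a person, not the model, takes the final decision on uncertain cases.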
But how do you define a high-risk task? Among the proposed areas are “those … that manipulate human behaviour, opinions or decisions… causing a person to behave, form an opinion or take a decision to their detriment.”
What does it mean to manipulate human behaviour to one’s detriment, and how can this be measured? It is all very well to cite “toys using voice assistance encourage dangerous behaviour of minors.” But what about a political manifesto? Or even this article? Could a repressive regime clamp down on opposition parties if they used AI in a campaign to influence voters?
Looking at it another way, is AI’s potential influence on human behaviour more in need of regulation because of its efficiency and automation? By this logic, should content printed in books be regulated less strictly than content spread through more pervasive channels, such as the internet and mobile advertisements?
A race to avoid red tape
We strongly recommend not tying these regulations to specific technical requirements, which often prove arbitrary as measures of efficacy. In our experience, metrics such as the number of data points or the quality of specific data often have no causal relationship to the performance of an AI model. Besides, algorithms and methodologies can be modified to meet such requirements without addressing any of the underlying risks. And in a field where algorithms and methodologies change at a rapid rate, this approach places unreasonable responsibility for complex technical judgements on the legislator.
As a startup, we also hope that the regulations do not burden small businesses unfairly. There is some evidence, for example, that GDPR, a well-intentioned piece of legislation, actually rewards the big tech companies it set out to regulate. If we want to avoid a dystopian future where technology and power remain concentrated in the hands of a few global organisations, we must ensure that new market entrants have the freedom to innovate and compete without being held back by superfluous red tape.
As a commercial business, we also have an interest in making sure that new rules don’t hold back demand for AI-based solutions. While we are fortunate enough to have several large customers, we are equally concerned that smaller organisations might delay technology acquisitions while awaiting agreement between the European Parliament and the member states.
Protecting values, powering innovation
We should also look out for the unintended consequences of any new legislation. Larger businesses are usually on the receiving end of the heaviest fines, but they are also better equipped to take risks and test boundaries, since they have the legal and financial resources to contest rulings and absorb penalties. Businesses of any size should be able to experiment and push the limits, within reason, without fear of incurring fines of up to €30m or 6% of turnover.
Finally, we also put faith in the commercial sector to pre-empt many of the recommendations described by the proposal. While big names such as IBM and Microsoft are building AI ethical standards into their operations and products, smaller organisations are already playing their part.
We are all striving towards the same goal. Nobody wants to live in a society where AI jeopardises human rights or gives big business a permanent advantage over the competition. But at the same time, we must ensure that new rules do not stifle Europe’s AI potential. This does not mean compromising our democratic or commercial values. What we need instead are ongoing, nuanced discussions that guarantee individual freedoms while enabling innovation to flourish.
Written by Appu Shaji, CEO and chief scientist at Mobius Labs