Regulating AI: what could be so hard?
The European Union recently published seven guidelines for the ethical development and implementation of AI as part of its 2018 AI strategy, which targets €20 billion of investment in AI annually over the next decade. Now the World Economic Forum has followed suit, confirming its intention to develop global rules for AI and to create an AI Council that will aim to find common ground between nations on policy for AI and other emerging technologies.
The trouble is that regulations designed to breathe life into the AI dream could in fact do the opposite if not approached with care and due diligence. Regulatory compliance is important across the spectrum of business, from retail to banking, but it is also incredibly complicated, especially with new technologies being implemented into business models on an almost daily basis.
Because such rules constantly require updates and revisions, they are becoming harder and harder for regulatory bodies to enforce. The same can be said of AI. With the hype around AI reaching full throttle, measures are being drawn up to make its use fair and ethical. However, the technology is evolving so quickly that these regulations rapidly become obsolete. This means that AI will never truly be regulated unless steps are taken to make sure that every iteration of it is covered.
Given that the UK government alone plans to raise research and development investment to 2.4% of GDP by 2027 under the ‘AI Sector Deal’, the time to talk about AI regulation is now. Regulating AI as a technology, however, would be detrimental to societal progress, and it would in any case prove difficult for government to halt its adoption. Regulation of its applications, on the other hand, could prove vital in the future.
Regulate the applications, not the technology
AI will continue to drive dramatic improvements across many sectors, with a profound impact on society. The use of AI within healthcare will see medical research and trials conclude more quickly. Transportation is set to change with self-driving vehicles and smart roads, which will contribute to the creation of smart cities. AI can help predict natural disasters, reducing the number of people affected, which currently stands at around 160 million worldwide each year. Real-time data can aid farming and improve agricultural productivity to help provide for a growing population. We are also set to benefit economically as the use of AI within businesses evolves, along with the job market.
At the opposite end of the spectrum there are AI bias, accelerated hacking and AI terrorism. This is where big challenges await both government institutions and legal organisations as they tackle the larger issues that can arise from misuse of the technology.
With AI predicted to be beneficial across a wide spectrum of applications, it would make more sense for the applications themselves to be regulated. The requirements for AI in healthcare are different from those in banking, for example – the ethical, legal and economic issues around issuing medication to patients are far removed from those around transferring money.
Regulating AI as a whole, on the other hand, would mean its use was more limited in some industries than in others, leaving businesses with little point in implementing automation or AI into their business models at all.
Industry expert advice will be needed to regulate properly
To regulate properly, governments and policymakers will need to work closely with professional bodies from each industry. These bodies can advise decision makers on best practice: what the technology is needed for, how it will be made to work, how it may impact the workforce, whether workers will need retraining, and what support businesses need from government to ensure a smooth transition into an ‘AI ON’ business.
Given that the UK is already trialling retraining programmes for workers who may be displaced by automation, companies should be optimistic that decision makers at policy level will listen to their concerns and regulate the applications effectively, rather than narrowing AI’s use by imposing blanket regulation on the whole technology.
Protecting people from malicious intent
Another important task for governments and businesses will be devising methods to prevent the rise of AI used with malicious intent, e.g. for hacking or fraudulent sales. Most cyber security experts predict that AI-powered cyber attacks will be one of the biggest challenges of the 2020s. Here too, regulations and preventative measures should be designed specifically for the application: if blanket regulation were applied to AI, the means to prevent these attacks would be insufficient.
Stringent qualification processes will also be needed in certain industries. For example, Broadway has been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700%. The chatbot has also allowed tickets to be sold at 20% above their average weekly price. Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are beginning to rely on chatbots to make sure that consumer rights are not breached. This should go hand in hand with strict qualification processes to ensure that chatbots and AI are thoroughly tested before being implemented into a business model.
2020 and beyond
As we plough ahead into the 2020s, the only way we can realistically see AI and automation take the world of business by storm is if they are smartly regulated. That begins with incentivising further advancement and innovation in the technology, which means regulating applications rather than the technology itself.
Whilst there is a great deal of unwarranted fear around AI and the consequences it may have for businesses’ workforces, we should be optimistic about a future where AI is ethical and useful, and where workers displaced by automation have been retrained for other, more important roles.