This week the House of Lords’ Select Committee on Artificial Intelligence (AI) released a report stressing that ethics should be at the centre of AI – a report which has so far been welcomed by the IT industry.
In July 2017, a 13-strong Committee was tasked with assessing the economic and social impact of artificial intelligence. The Committee collected 23 pieces of written evidence and visited technology companies, enabling the panel to draw up a set of proposed principles for an industry code of practice.
The aim of the cross-sector code of practice is to ensure AI is developed ethically and does not diminish the rights and opportunities of humans.
The report states that AI should be developed for the “common good and benefit of humanity” and operate on principles of “intelligibility and fairness”.
Educating citizens alongside AI
The Committee’s report noted that each citizen should be given the right to be educated to a level where they can “flourish mentally, emotionally and economically” alongside AI technology in the jobs of the future.
AI technology firm Pegasystems welcomed the report. Dave Wells, the head of its European business, said it confirmed the need to educate the human workforce to work alongside AI: “We welcome the fact that the House of Lords has identified the need to educate people to co-work with bots and AI in the workplace.
Our own research has revealed most senior executives in British business are expecting the term ‘workforce’ to cover both intelligent machines and their human colleagues within a short period of time, so the report’s recommendations could become reality sooner than we think. The same business leaders believe pairing up humans and machine intelligence will create a better, happier and more effective workforce.
The obstacles to getting on with your bot work pal may also not be as hard as some pessimists suggest. As consumers we’re getting more comfortable with a company using AI to provide better customer service, so why not at work too? AI already helps staff make better decisions on customer needs, eliminates tech drudgery and opens up time for creativity. But, AI tech companies like ourselves can’t take for granted the need to help people to develop the skills they need to work alongside AI and do jobs that simply aren’t defined today.”
UK on the AI global stage
David Emm, Principal Security Researcher at Kaspersky Lab, highlighted the Committee’s point about the UK’s potential to be a major technology player in AI.
He agreed with the report’s call for restrictions on any AI systems attempting to “diminish the data rights or privacy of individuals, families or communities”, saying that whilst the UK has the opportunity to become a world leader in the development of AI, such new technologies should not come at that price.
He explained: “The use of technology brings great benefits – especially so in the case of artificial intelligence and the opportunities this presents. Consumers are clearly prepared to trade their data for the convenience of access to a free product or service. Moreover, the proliferation of smart devices in the home and elsewhere means that more and more personal data can be casually captured and used. However, this shouldn’t come at the expense of people’s privacy or security.
Consent is a key factor here – ensuring that people offer informed consent before their data is captured, used or passed to third parties. It’s to be hoped that GDPR and its application by the ICO (and similar bodies in other countries) will ensure that this is done. Ethics in AI shouldn’t be an afterthought, regardless of how ‘smart’ a system is. It’s also important to remember that the same data is valuable to cybercriminals. So, it’s also essential that companies that hold data, of whatever kind, take the necessary steps to secure it.”
AI and law obligations
The report called for a ban on AI technology that had the potential to “hurt, destroy, or deceive human beings”.
It stated: “AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these.”
“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
Mark Deem, a Partner at law firm Cooley, provided evidence to the Committee on legal liability. He stressed the need for an appropriate legal and regulatory framework, with the support of the Law Commission and technologists.
Commenting on the report he said: “As we made clear in our evidential submissions – and as referred to by the Committee (p95, para 305) – as AI technology develops it will challenge the underlying basis of a number of legal obligations according to our present concepts of private law (whether contractual or tortious).
The Committee’s recommendation to refer the matter to the Law Commission, to provide clarity on the adequacy of the legal framework, is therefore clearly sensible.
However, it is important that any consideration of this framework is not undertaken in a jurisdiction silo or seen as a purely academic, legal exercise – but involves input from technologists, those looking to invest in this area and the legal profession alike.
To harness the power of this technology requires the establishment of an appropriate legal and regulatory framework, which balances the innovative and entrepreneurial aspirations of key stakeholders with the implementation of a safety net of protections should systems malfunction.”