The term artificial intelligence, coined in 1956, has dominated popular film (think The Matrix trilogy or Stanley Kubrick’s 2001: A Space Odyssey) as well as ethical debate. In the UK, that debate is addressed by the Centre for Data Ethics and Innovation, which aims to position the UK as a world-leading force in the future of AI.
This public body cannot address the challenge of ethical AI alone. To ensure that AI develops as a force for good, industry collaboration is required.
An ethical framework
Digital Catapult has released its first Ethics Framework as a means to integrate ethical practice into the development of artificial intelligence and machine learning technologies. The organisation has invited AI companies to test this framework.
The initiative comes at a time when AI’s rapid evolution and widespread application (for example, attempts to determine from a photograph whether a person is gay or straight) are raising ethical questions.
Digital Catapult hopes that the framework will give startups and experts developing AI solutions a practical way to build ethical considerations into AI development, helping to make the UK a trusted centre for the industry.
“At this very early stage in its development, presenting industry with a set of enforceable rules might deter companies from trying to navigate and understand what responsible AI looks like in practice,” said Dr Jeremy Silver, CEO, Digital Catapult. “The Ethics Framework has been designed to provide guidance for companies to explore what role ethics might play in their projects and to help share best practice across industry.”
A central consideration
Ethics should be a central consideration for companies and individuals developing AI, in order to create positive, meaningful change for society. Those who have considered the ethical implications of their products and services, and who monitor, manage and communicate effectively about them, will have a competitive edge.
Startups are the primary target audience for the Ethics Framework. According to Digital Catapult, they are perhaps the hardest testing ground for ethical tools, since they have little time or resource for unnecessary work or abstract discussion.
Conversely, startups and scaleups can more readily reap the potential rewards: unencumbered by legacy infrastructure, they can embed ethical practices right from the beginning of their business growth.
The Ethics Framework will initially be offered to companies on Digital Catapult’s Machine Intelligence Garage programme, to see how it can be applied to the systems these startups are already building.
Dr Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, and Chair of the Ethics Committee, commented: “It’s been well discussed that the ethical application of artificial intelligence is one of the pressing questions of our time. The first Ethics Framework provides a guiding light for the ethical and responsible development of AI. It is critical that businesses of all sizes start to think and act ethically and responsibly.”
“We know from working directly with early stage AI companies that a highly practical framework, which speaks in the language and caters to the needs of those developing AI enabled products or services, is required at this crucial stage in the formation of the UK’s already remarkable AI community. The new Ethics Framework is a robust enabler for innovation that will decrease risks and opportunity costs.”
Digital Catapult’s Machine Intelligence Garage Ethics Committee is an independent body dedicated to realising responsible artificial intelligence development in the UK. It was established in July 2018 and includes some of the foremost minds in AI and data ethics.
The Committee is made up of a Steering Group, who will oversee the development of tools to facilitate responsible AI in practice, and an Advisory Group, who will work closely with startups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.