The upcoming EU white paper on AI, to be discussed at the European Parliament headquarters in Brussels, will be accompanied by other pending big tech initiatives, including a single European data market and rules for digital services.
The white paper will be discussed by the Industry and Internal Market Committees, and the EU’s Industry Commissioner, Thierry Breton, said that the establishment of new rules within the bloc “will make sure that the individual and fundamental rights that we cherish in Europe are respected”.
Proposals for AI regulation include testing similar to the checks applied to cars and chemicals, along with rules aimed at creating an ‘ecosystem of trust’.
The ethical matters to be considered include facial recognition, on which the EU Commission said it will liaise with the public during the 12 weeks following the talks.
Mark Zuckerberg, CEO of Facebook, visited the EU this week to discuss AI regulation, and called for greater accountability for companies that breach the rules.
How should we go about establishing strong AI regulation?
As well as stressing the importance of ethical AI, AntWorks CEO Asheesh Mehra also said that accountability was something to keep in mind.
“With such huge investment being made to lay the groundwork for AI, the EU now have a responsibility to construct a framework upon which AI can be built,” he said. “Some people, including myself, refer to that responsibility as ethical AI, and this means that companies and governments need to be accountable.
“Legislators and regulators have a key role to play in the area of AI accountability. That involves specifying the applications for which AI can and cannot be used, but AI as a technology should not be regulated.
“Instead, there should be governmental regulations put into place to help standardise how AI technology can be used. For example, regulations should indicate that applying AI is appropriate for particular purposes in specific industries, while other laws or rules should make clear what applications of AI are not allowed.”
Environmental factors
Mehra went on to explain that while AI can provide improved climate monitoring and predictions, it could also harm the environment, and regulators will need to be wary of this.
“AI stands to make a large impact on the environment, and with ethical AI in place, that can be a good thing,” he said. “As National Geographic reports, AI can make better climate predictions, show the effects of extreme weather and identify the source of carbon emissions.
“The problem is that AI engines require giant datasets, which means huge amounts of computing infrastructure that consume enormous amounts of power.
“These can have significant carbon footprints of their own. The enormity of this issue is illustrated by the fact that data centres consume at least 3% of all the electricity generated on the planet.
“This will need to be taken into consideration when companies develop practices that help monitor their AI footprint, and the first step is on governments to implement the rules around AI and its use.”
Human bias
Another issue that needs to be considered is the potential for the machine learning that underpins AI to be swayed by human viewpoints rather than remaining objective.
“AI is moving at such a pace that we need to regulate it before it gets out of control,” said Jake Moore, cyber security specialist at ESET. “However, one of the major issues is that when creating AI there is a huge input by humans, and humans are naturally prone to having all sorts of bias.
“Machine learning, which is at the heart of AI, takes in external factors such as opinions and feelings, all of which can influence its decision-making.
“AI and machine learning data is only as good as the data you feed it, so the regulation of this needs to start at the earliest stages. There will no doubt be teething issues.”
The new EU regulations on AI are expected to be set in motion later in 2020.