Stephen Hawking has warned that artificial intelligence (AI) could be the greatest disaster in human history unless humans learn to mitigate the risks it poses.
Among these looming threats, Hawking suggested the rise of AI could lead to the creation of devastating autonomous weapons and new, oppressive methods of controlling the masses.
Perhaps the most distressing point from Hawking’s speech was his notion that machines could develop a will of their own.
On this view, a Terminator-like scenario is not inconceivable: humans build autonomous weapons for the next stage of combat, a global autonomous arms race begins, the machines learn to think, and humans get wiped out.
This may sound exaggerated, but it does echo, to some extent, the warning Hawking delivered about what could happen if AI's advancement goes unchecked.
“AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”
“It will bring great disruption to our economy.”
“And in the future, AI could develop a will of its own – a will that is in conflict with ours.”
“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”
“In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”
Hawking’s sentiments echo those of a number of high-profile technology leaders.
Elon Musk believes AI’s rise will be the biggest threat to the survival of the human race, while Bill Gates (Microsoft’s founder) and Steve Wozniak (Apple’s co-founder) have also expressed concerns about where the technology is going.
The Leverhulme Centre for the Future of Intelligence
Back in 2014, Hawking and a few others called for more research to be done in the area of AI and its future implications.
That brainchild was realised last night at the centre’s opening event: “I am very glad that someone was listening to me,” said Hawking.
The new centre is a collaboration between the University of Cambridge, the University of Oxford, Imperial College London, and the University of California, Berkeley.
It will bring together a multitude of disciplines: philosophers, lawyers, psychologists, and computer scientists will produce extensive research on the potential threat posed, while working closely with industry and policy makers.
“The research done by this centre is crucial to the future of our civilisation and of our species,” he said.
“It is a welcome change that people are studying instead the future of intelligence.”
While the majority of Hawking’s short, well-worded speech focused on worst-case scenarios, he did note the huge benefits AI could offer.
He made the case that, used correctly in a controlled, safe manner, artificial intelligence could eradicate poverty and disease and, significantly, reverse the effects of industrialisation on the world.
He seems to believe that the future of humankind sits on a tipping point between AI’s unimaginable potential and total destruction.
However…
Some experts aren’t convinced by this either/or scenario.
Frank Lansink, European CEO at IPsoft, believes that “the potential for AI is massive… it has an integral role to play in the workplaces of the future. It will help unleash creativity, create new job activities and new occupations which combine ground-breaking technology with our most human skills.”
“The future workplace is a hybrid human-AI one, and one we believe will bring significant opportunities and benefits to society in the future.”
Lansink has no doubt that AI will bring benefits to the workplace and, indeed, wider society.
Like Hawking, however, he understands that as a society we “must proceed in a careful and well-prepared manner” in the build-up to AI’s true arrival.