This is only a foretaste of what is to come, and only the shadow of what is going to be… it may take years before we settle down to the new possibilities, but I do not see why it should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms.
June 11th 1949, The Times quoting Alan Turing
Sometimes geniuses can sound like boffins. As geniuses go, they don’t get much more impressive than Alan Turing: the man who played such a crucial role in World War II and, while he was at it, invented computer science. But when he found himself on the wrong end of a telephone call from the press, the genius applied something close to the precise opposite of high intellect as he tried to negotiate the world of PR.
As a general rule of thumb, newspapers are not especially interested in news of machines that can solve problems about prime numbers, which is what the machine Turing had been working on had managed to do. The Times was more interested in everyday sorts of usage: say, a computer that could write a sonnet. And since he was the only one in the office when the esteemed newspaper rang, Turing took the call. “If a computer did write a sonnet, it would be the kind of poetry that would be more readily appreciated by another computer,” he said. The Times ran the headline “Calculus to Sonnet”, and Alan Turing, posthumous superstar in waiting, nearly got fired for it. Or so said Sir John Dermot Turing, nephew to the great man, when he was regaling an audience of AI-interested delegates at the recent conference “This AI Life”, organised by the IET, law firm Cooley, and Future Intelligence.
Artificial Intelligence — what CTOs and co need to know
But Sir John Dermot Turing, a member of the European Post-Trade Forum, a Trustee of the Turing Trust, and the holder of a DPhil in Genetics from New College, Oxford, also had a warning; one that CTOs, and indeed any IT expert, would perhaps do well to heed, because as leaders in business who also hold expertise, they may bear some of the responsibility.
Alan Turing’s nephew was talking about that conundrum, the one that goes back to the inception of the computer industry: might computers think, one day? Intriguingly, it seems no closer to an answer today than in Alan Turing’s time. The trouble is, the gap between not knowing and it being too late to do anything about it might be far too short for us to react.
An evolutionary idea
Sir John draws a parallel with another famous man from yesteryear: Charles Darwin himself. As it happens, the grandson of the man who wrote On the Origin of Species, also called Charles Darwin, was Alan Turing’s boss for much of his career.
But the Charles Darwin who revolutionised our understanding of the world and our place in it really produced a unified theory. Before Darwin, the natural history enthusiasts and scientists working in the areas that interested him were legion. There was, for example, the French naturalist Jean-Baptiste Lamarck, who believed that species developed their unique appearances through acquired characteristics: animals that went around stretching their necks to reach high-hanging fruit or leaves might eventually get longer necks, and their descendants longer necks still, leading to giraffes.
Another predecessor of Darwin, the geologist Charles Lyell, proposed that the Earth was much older than had previously been assumed, and that countless, continuing small changes had shaped its geological layout over vast periods of time.
Darwin took these disparate ideas from eminent thinkers such as Lamarck and Lyell, as well as from countless enthusiasts, and created his theory, complete with all its stunning implications.
An analogy with AI
Sir John wonders whether there might be an analogy with AI. “What AI is all about is simple, self-contained things, like facial recognition, self-driving cars, voice recognition, algorithms for helping Amazon sell products — self-contained boxes. For as long as AI exists like this, in disparate groups, there is no risk that these AIs could escape from their boxes and take over the planet.”
But, “I don’t think we are that far away from when someone comes along with a super algorithm that glues together all the bits of thinking and we end up with a super intelligent algorithm.
“It seems to me that we underestimate our own capabilities, someone clever is going to come up with something and we need to be prepared.”
The snag may lie with academia.
He cites as an example the premier university introductory textbook on AI, Artificial Intelligence: A Modern Approach (Third Edition), which is used in 1,300 universities across 110 countries. Section 26.3, entitled ‘The Ethics and Risks of Developing Artificial Intelligence’, is the only reference to the topic, and it simply quotes Isaac Asimov and his famous three laws of robotics.
Academics will respond to this debate by saying things like: “Yes, yes, you watch too much science fiction. It is very easy: you just pull the plug out.”
But, adds Sir John, “This isn’t going to work. Computers are housed on redundant, distributed networks, and are thoroughly socialised by means of the internet. As soon as we have something that is able to communicate with other computers, and is able to distribute itself around the world so that it is not localised, there is no possibility of sticking it in a Faraday cage, pulling the plug out and doing all the other clever things. It has escaped in the instant that it is invented.
“We need to be ready for that, think about how we control this, and seize control over our own destiny, and how we shape the evolution of the thing that has yet to be invented.”
Alan Turing, at a talk in 1951, said: “At some stage, therefore, we should have to expect the machines to take control.”
He also said: “We can see only a short distance ahead, but we can see plenty that needs to be done.” That line comes from Computing Machinery and Intelligence (1950), the very same paper that contained the description of the now famous ‘imitation game’.