While it may sound a bit too much like science fiction, the Technological Singularity is a term used to describe the change that would occur when humans, technology and artificial intelligence intersect so completely that we would be incapable of comprehending or predicting what comes after, and humans on the far side of the change would no longer be able to fully relate to those who came before.
Author Ray Kurzweil, a leading inventor and futurist who has made accurate predictions about technology in the past, describes The Singularity as 'an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today – the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity.'
The future of technology and artificial intelligence is hard for anyone to predict, because technology is advancing at such a rapid rate. Will we be able to create superhuman intelligence to the point that the human era ends? Some mathematicians, technologists, and computer scientists think it is a possibility.
The history and evolution of technological singularity theory
Over time, the concept of the Singularity has evolved as various experts have offered different interpretations. The term 'singularity', in the sense of technology driving a fundamental change in humanity, was first attributed to John von Neumann, a Hungarian mathematician and physicist, in 1958.
Also in 1958, Stanislaw Ulam, recalling a conversation with von Neumann, described technology accelerating at such a rate that human life as we know it would be changed forever.
In 1965, statistician I.J. Good coined the term 'intelligence explosion' rather than 'Singularity.' Good envisioned 'a positive feedback cycle within which minds will make technology to improve on minds which once started will rapidly surge upwards and create super-intelligence.'
Good influenced the mathematician, computer scientist, and science fiction author Vernor Vinge, who popularised 'Singularity' in its technological sense during the 1980s. Vinge cited various potential causes of the Singularity, including artificial intelligence, human biological enhancement, and brain-computer interfaces.
Technological singularity considerations
Depending on your personal views of technology, you may think that computers have already replaced humans. Factories employ robots, smartphones communicate for us, computers control dangerous weapons, cars can drive themselves, and so on.
Yet, for all this complexity, these computers and machines continue to rely on human ingenuity and control. Humans program them, and because they lack intuition they cannot truly think or be self-aware.
As Jonathan Strickland points out in his article, however, Vernor Vinge warns that humans could 'evolve beyond our understanding through the use of technology' and achieve Singularity.
In his essay, 'The Coming Technological Singularity: How to Survive in the Post-Human Era,' Vinge predicts that superhuman intelligence will be developed prior to 2030.
He envisions this happening in one of four ways: scientists may develop advances in artificial intelligence, computer networks may become self-aware, computer-human interfaces may become advanced enough that humans evolve into a new species, or biological advances may allow humans to directly engineer human intelligence.
Of his four scenarios, Vinge discusses the first in greatest detail in the essay. Strickland breaks down Vinge’s theory by relating it to Moore’s Law, 'which states that transistors double in power every 18 months.'
According to Vinge, at that rate, it’s inevitable that humans will build a machine that can think like a human. This takes care of the hardware aspect, but Strickland reminds us that software will need to be developed that allows machines 'to analyse data, make decisions, and act autonomously' if machines are truly going to begin to design and build better versions of themselves.
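To put the arithmetic behind that claim in concrete terms, here is a minimal sketch of how an 18-month doubling period compounds. It assumes only the doubling figure quoted above and an arbitrary baseline of one 'unit' of computing capability, and it says nothing about whether the software side keeps pace.

```python
# Illustrative sketch only: compounding growth under the 18-month doubling
# period quoted above. The baseline of one "unit" of capability is an
# arbitrary assumption for illustration.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def growth_factor(years: float) -> float:
    """Return how many times more capable hardware would be after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 15, 30):
    print(f"After {years:>2} years: about {growth_factor(years):,.0f}x the starting capability")
```

At that rate the multiplier reaches roughly a million after 30 years, which illustrates why writers like Vinge treat such growth as potentially transformative.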
In this scenario, which may sound like the plot of a film, humans would be taken out of the equation as superhuman intelligence takes over, and we would reach Singularity.
It is Kurzweil, though, who is regarded as having the most plausible theory of Technological Singularity, often referred to as 'the accelerating change thesis.'
Kurzweil’s book, The Singularity Is Near: When Humans Transcend Biology, defines Technological Singularity as 'a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.'
Sourced from ClickSoftware