In futurism, a technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict. This prognosis is based on statistical data showing the acceleration of various trends in human civilization. The idea of a technological singularity was first discussed in the 1950s and widely popularized in the 1980s by Vernor Vinge. Predicted dates for the singularity's occurrence range from a few years from now to centuries later, with the third decade of the 21st century being the most common. As with any prediction, however, whether a singularity will actually ever occur is uncertain, and some argue that it may never happen.
Since the term technological singularity refers both to the advance of technology and to its impact on human society, it can also be understood as a sociological singularity. The technological singularity is closely related to other kinds of singularities. Its accelerating mathematical model resembles a mathematical singularity, a point where a mathematical function goes to infinity. Its implications for society are metaphorically similar to the gravitational singularity in astrophysical models such as black holes, where no information can reach an observer located beyond the event horizon.
More specifically, the technological singularity can refer to the advent of smarter-than-human intelligence (human or artificial), and the cascading technological progress (in nanotechnology and other areas) assumed to follow.
Although commonly believed to have originated within the last two decades of the 20th century, the concept of a technological singularity actually dates back to the 1950s:
"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." -- Stanislaw Ulam, May 1958, referring to a conversation with John von Neumann
This quote has several times been taken out of context and attributed to von Neumann himself, likely due to von Neumann's widespread fame and influence.
In 1965, statistician I. J. Good described a concept closer still to today's meaning of the singularity, in that it included the advent of superhuman intelligence:
- "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
The Vingean Singularity
The concept of a technological singularity as it is known today is credited to mathematician and author Vernor Vinge. Vinge began speaking on the Singularity in the 1980s and collected his thoughts into the first article on the topic, the 1993 essay "Technological Singularity". Since then, it has been the subject of many futurist and science fiction writings.
Vinge's essay contains the oft-quoted statement that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."
Vinge's singularity is commonly misunderstood to mean technological progress will rise to infinity, as happens in a mathematical singularity. Actually, the term was chosen as a metaphor from physics rather than mathematics: as one approaches the Singularity, models of the future become less reliable, just as conventional models of physics break down as one approaches a gravitational singularity.
Vinge writes that superhuman intelligences, whether created by cybernetically enhancing human minds or through artificial intelligence, will be even more able to enhance their own minds than the human intelligences that created them. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence is expected to cause large amounts of technological progress within a short period of time.
The Singularity is often seen as the end of human civilization and the birth of a new one. In his essay, Vinge asks why the human era should end, and argues that humans will be transformed during the Singularity to a higher form of intelligence. After the creation of a superhuman intelligence, according to Vinge, people will necessarily be a lower lifeform in comparison.
Kurzweil's Law of Accelerating Returns
In his essay, The Law of Accelerating Returns, Ray Kurzweil proposes a generalization of Moore's law that forms the basis of many people's beliefs regarding the Singularity. Moore's law describes an exponential growth pattern in the complexity of integrated semiconductor circuits. Kurzweil extends this to include technologies from far before the integrated circuit to future forms of computation. He believes that the exponential growth of Moore's law will continue beyond the use of integrated circuits into technologies that will lead to the Singularity.
The law described by Ray Kurzweil has in many ways altered the public's perception of Moore's law. It is a common (but mistaken) belief that Moore's law makes predictions regarding all forms of technology, when really it only concerns semiconductor circuits. Many futurists still use the term "Moore's law" to describe ideas like those put forth by Kurzweil.
Ray Kurzweil in his own words: "An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light." 
Futurists have speculated on a wide range of possible technologies that might play a role in bringing about the Singularity. The order in which these technologies will arrive is often disputed: some will expedite the invention of others, some depend on the invention of others, and so on. The predictions of various futurists differ on many points, but the following are some of the most common themes among them.
An artificial intelligence capable of recursively improving itself beyond human intelligence, known as a seed AI, would, if possible, likely cause a technological singularity. Only one such AI, many believe, would be needed to bring about the Singularity. Most Singularitarians believe the creation of seed AI is the most likely means by which humanity will reach the Singularity. Much of the work of the Singularity Institute is built upon this belief.
The potential dangers of molecular nanotechnology are widely known even outside of futurist and transhumanist communities, and many Singularitarians consider human-controlled nanotechnology to be one of the most significant existential risks facing humanity (refer to the concept of Grey Goo for an example of how this risk could become reality). For this reason, they often believe that nanotechnology should be preceded by seed AI, and that nanotechnology should remain unavailable to pre-Singularity society.
Others, such as the Foresight Institute, advocate efforts to create molecular nanotechnology, believing that nanotechnology can be made safe for pre-Singularity use or can expedite the arrival of a beneficial singularity.
Although seed AI and nanotechnology are widely regarded as the technologies most likely to bring about the Singularity, others have speculated about the possibility of other advanced technologies arriving before the Singularity. Perhaps most famously, Ray Kurzweil speculates extensively in his book The Age of Spiritual Machines on technologies he believes will arrive in the twenty-first century, predicting a slower, more gradual ascent of technological advancement to lead to a singularity.
Some believe direct brain-computer interfaces may improve an individual's memory, computational capacity, communication abilities, and knowledge base. More traditional human-computer interfaces may also be seen as intelligence-augmenting improvements: traditional expert systems, computer systems that recognize and predict human patterns of behavior, speech and handwriting recognition software, etc. Intelligence enhancement through novel chemical drugs and genetic engineering (beyond that provided by modern nootropics) may also soon be possible. Newborn babies may be given genetic intelligence enhancements as well (see genetic engineering).
Mind uploading is a proposed alternative means of creating artificial intelligence: instead of programming an intelligence, it would be bootstrapped from an existing human intelligence. The level of technology needed to scan the human brain at the resolution required for a mind upload makes uploading in a pre-Singularity world seem unlikely, however. The amount of raw computer processing power and the understanding of cognitive science needed are also substantial.
Others, such as George Dyson in Darwin Among the Machines, have speculated that a sufficiently complex computer network (e.g., a globally connected high-bandwidth wireless communication fabric, or a global cesspool of virus- and worm-infected networked computers) may produce "swarm intelligence". AI researchers may use the improved computing resources of the future to create artificial neural networks so large and powerful that they become generally intelligent. Advocates of Friendly AI see this as "brute-forcing" the problem of creating AI, and as likely to produce unacceptably immoral and dangerous forms of artificial intelligence.
Singularity speculations often concern the theoretical limits of computational power. Some researchers claim that even without quantum computing, advanced nanotechnology could engineer matter to have unimaginably vast computational capacities. Such material is referred to among futurists as computronium. Some speculate that entire planets or stars may be converted into computronium, creating "Jupiter Brains" and "Matrioshka Brains" respectively.
There exist two main types of criticisms of Singularity speculation: those questioning whether the Singularity is likely or even possible, and those questioning whether it is safe or desirable.
The likelihood and possibility of the Singularity
Some do not believe a technological singularity is likely to occur. Noting the parallels between the narrative structure of the Singularity and the modern Dispensationalist Christian Rapture belief, Ken MacLeod has notably characterized the Singularity as "the Rapture for nerds". This linkage seems to be supported by the correlation between the atypical popularity of Dispensationalism and of Technological Determinism within the United States, as compared with many other developed nations. It may be that current popular versions of the Technological Singularity are nothing more than culturally specific remixes of the standard, enduring eschaton myth dressed in science fiction tropes.
Most Singularity speculation assumes the possibility of human-equivalent artificial intelligence. Whether creating such AI is possible remains controversial; many believe that practical advances in artificial intelligence research have not yet empirically demonstrated it. See the article artificial intelligence for further debate.
Some dispute that the rate of technological progress is increasing. The exponential growth of technological progress may become linear or inflected or may begin to flatten into a limited growth curve. In this model, instead of an overall acceleration of progress, technological advance jumps forward whenever there is a human "buy in" and stalls whenever there isn't a benefit large enough to profit the technologists, and therefore never gets steep enough to be considered a singularity.
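The difference between these growth models can be made concrete with a small sketch (the parameter values below are illustrative assumptions, not predictions). An exponential curve and a logistic, or "limited growth", curve are nearly indistinguishable early on, but the logistic curve flattens toward a ceiling instead of steepening toward a singularity:

```python
import math

def exponential(t, r=0.1):
    """Unbounded exponential growth: the 'accelerating returns' picture."""
    return math.exp(r * t)

def logistic(t, r=0.1, K=1000.0):
    """Logistic ('limited growth') curve with carrying capacity K.
    The constant (K - 1) is chosen so both curves start at 1 when t = 0."""
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other; later they diverge sharply.
for t in (0, 10, 50, 100, 150):
    print(f"t={t:3d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```

Because early data points alone cannot distinguish the two curves, this criticism is difficult to settle empirically from historical trend data.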
Examples of large human "buy ins" into technology include the computer revolution, as well as massive government projects like the Manhattan Project and the Human Genome Project. The foundation organizing the Methuselah Mouse Prize believes aging research could be the subject of such a massive project if substantial progress is made in slowing or reversing cellular aging in mice.
The desirability and safety of the Singularity
It has often been speculated, in science fiction and elsewhere, that advanced AI is likely to have goals inconsistent with those of humanity and may threaten humanity's existence. It is conceivable, if not likely, that a superintelligent AI would simply eliminate the intellectually inferior human race, and that humans would be powerless to stop it. This is a major issue for both Singularity advocates and critics, and was the subject of an article by Bill Joy appearing in Wired Magazine, ominously titled Why the future doesn't need us.
Some critics argue that advanced technologies are simply too dangerous for us to morally allow the Singularity to occur, and advocate efforts to actually stop its arrival. Perhaps the most famous activist for this viewpoint is Theodore Kaczynski, the Unabomber, who wrote in his "manifesto" that AI might enable the upper classes of society to "simply decide to exterminate the mass of humanity". Alternatively, if AI is not created, Kaczynski argues that humans "will have been reduced to the status of domestic animals" after sufficient technological progress has been made. Portions of Kaczynski's writings have been included in both Bill Joy's article and a recent book by Ray Kurzweil. It should be noted that Kaczynski not only opposes the Singularity but is a Luddite, and many people oppose the Singularity without opposing present-day technology as Luddites do.
Naturally, scenarios such as those described by Kaczynski are regarded as undesirable by advocates of the Singularity as well. Many Singularity advocates, however, do not consider them so likely, and are more optimistic about the future of technology. Others believe that, regardless of the dangers the Singularity poses, it is simply unavoidable: we must progress technologically because there is no other path to take.
Advocates of Friendly artificial intelligence, and specifically SIAI, acknowledge that the Singularity is potentially very dangerous and work to make it safer by creating seed AI that will act benevolently towards humans and eliminate existential risks. This idea is also embodied in Asimov's Three Laws of Robotics, which logically prevent an artificially intelligent robot from acting malevolently towards humans. However, in one of Asimov's novels, despite these laws, robots end up causing harm to individual human beings which brings about the formulation of the Zeroth Law. The theoretical framework of Friendly AI is currently being designed by Singularitarian Eliezer Yudkowsky.
Another, much less common, viewpoint is that AI will eventually dominate or destroy the human race, and that this scenario is desirable. Hugo de Garis is most notable for his support of this opinion.
The Singularity in fiction and modern culture
In addition to the Vernor Vinge stories that pioneered Singularity ideas, several other science fiction authors have written stories that involve the Singularity as a central theme. Notable authors include Charles Stross and Greg Egan. Singularity themes are common in cyberpunk novels, one of the most famous examples being the recursively self-improving AI Neuromancer from William Gibson's novel of the same name.
Some earlier science fiction works such as Isaac Asimov's The Last Question and John W. Campbell's The Last Evolution feature technological singularities.
Orion's Arm, an online science fiction world-building project, also features several technological singularities as part of its premise.
The computer game Sid Meier's Alpha Centauri also features something akin to a singularity, called the 'Ascent to Transcendence', as a major theme.
The Singularity Institute for Artificial Intelligence (SIAI), an educational and research nonprofit, was created to work toward safe cognitive enhancement (i.e. a beneficial singularity). They emphasize Friendly Artificial Intelligence, as they believe general-purpose AI is more likely to enhance cognition substantially before human intelligence can be significantly enhanced by neurotechnologies or somatic gene therapy.
The Acceleration Studies Foundation (ASF), an educational nonprofit, was formed to attract broad business, scientific, technological, and humanist interest in acceleration and evolutionary development studies. They produce Accelerating Change, an annual conference on multidisciplinary insights in accelerating technological change at Stanford University, and maintain Acceleration Watch, an educational site discussing accelerating technological change.
Last updated: 08-19-2005 02:00:44