Technological singularity
Technological singularity is a term with several related but conceptually distinct definitions. One definition holds that the Singularity is a time at which technological progress accelerates beyond the ability of present-day human beings to understand it. Another defines the Singularity as the culmination of a telescoping process of accelerating computation that has been under way since the beginning of human civilization, or even of life on Earth. Yet another defines the Singularity as the emergence of smarter-than-human intelligence, with cascading consequences that cannot be predicted and perhaps cannot be guided or even influenced.
Introduction
The idea that human progress would reach a "singularity" originated with the paper "Doomsday: Friday, 13 November, A.D. 2026" by H. von Foerster, P. M. Mora, and L. W. Amiot (Science 132, 1291-1295, 1960). The mathematical singularity appeared in that paper's model of human population growth (the Doomsday equation). Von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition; rather, a society's success varies directly with its population size.
Von Foerster found that this model fit some 25 data points, from the birth of Christ to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, …) published in Science showed that von Foerster's equation was still on track, and the data continued to fit up until 1973. The most remarkable feature of the model was that it predicted the human population would reach infinity, a mathematical singularity, on Friday, November 13, 2026. It was thus a model that was at once validated and absurd.
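The model's hyperbolic form makes the divergence explicit. A commonly quoted version of the fitted equation (the constants here are approximate) is

$$N(t) \approx \frac{1.79 \times 10^{11}}{(2026.87 - t)^{0.99}},$$

where N is the world population and t is the calendar year; N(t) grows without bound as t approaches late 2026.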
After 1973, however, the model ceased being checked against hard statistics and instead entered the realm of popular culture, via such books as Alvin Toffler's Future Shock (1970).
Serious consideration of the feasibility of the singularity was reinforced by Moore's law in the computer industry. Vernor Vinge began speaking on his "singularity" concept in the 1980s and collected his thoughts into the first article on the topic in 1993, the essay [http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html "Technological Singularity"]. Since then, it has been the subject of many futurist and science fiction writings.
Vinge claims that: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."
Vinge's technological singularity is commonly misunderstood to mean technological progress rising to "infinity". In fact, he refers to the pace of technological change increasing to such a degree that our ability to predict its consequences diminishes virtually to zero, and a person who fails to keep pace rapidly finds civilization completely incomprehensible. Such events have happened before: for instance, it would have been impossible for someone in the 1970s to predict the full effects of the microchip revolution.
The singularity is often seen as the end of human civilization and the birth of a new one. In his essay, Vinge asks why the human era should end, and argues that humans are likely to be transformed in the course of the singularity into a higher form of intelligent existence. After the creation of a superhuman intelligence, according to Vinge, people will necessarily be a lower life form in comparison to it.
The idea of the singularity also appears in many other books and in some video games. The computer game Sid Meier's Alpha Centauri features the singularity, called the Ascent to Transcendence, as a major theme.
It has been speculated that the key to such a rapid increase in technological sophistication will be the development of superhuman intelligence, either by directly enhancing existing human minds (perhaps with cybernetics), or by building artificial intelligences. These superhuman intelligences would presumably be capable of inventing ways to enhance themselves even faster, leading to a feedback effect that would quickly surpass preexisting intelligences.
The effect is presumed to work along these lines: first, a seed intelligence is created that is able to reengineer itself, not merely for increased speed but for new types of intelligence. At a minimum, this might be a human-equivalent intelligence. This intelligence redesigns itself with improvements and uploads its memories, skills and experience into the new structure. The process repeats, with redesign presumed for not just the software but also the computer it runs on. The mind may well make mistakes, but it will make backups; failing designs will be discarded, and successful ones retained.
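As a very rough illustration, the process described above amounts to a loop of redesign, evaluation, backup and selection. The sketch below models the mind as a single number standing for its capability; every name and value in it is invented for illustration and corresponds to no real system.

```python
# A toy sketch of the recursive self-improvement loop described above.
# The "mind" is modeled as one number (its capability); all values here
# are invented for illustration and correspond to no real system.
import random

def redesign(mind):
    """Propose a new design: a random change, sometimes better, sometimes worse."""
    return mind * random.uniform(0.8, 1.5)

mind = 1.0                     # seed intelligence: human-equivalent baseline
for generation in range(20):
    backup = mind              # the mind makes a backup before changing itself
    candidate = redesign(mind)
    if candidate > mind:
        mind = candidate       # successful designs are retained
    else:
        mind = backup          # failing designs are discarded
    print(generation, round(mind, 3))
```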
Simply having a human-equivalent artificial intelligence may yield this effect, if Moore's law continues long enough. At first, the intelligence is equal to a human; eighteen months later it is twice as fast; three years later, four times as fast; and so on. But because the design of computers would itself be done by accelerated AIs, each next step would take about eighteen subjective months and proportionally less real time than the step before. Assuming for simplicity that the rate of computer speed growth remains governed by an unchanged Moore's law, each step would take exactly half as much real time as the previous one. In just three years (36 months = 18 + 9 + 4.5 + 2.25 + ...), computer speed would reach its ultimate theoretical limit.
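The convergence asserted above is simply the sum of a geometric series with ratio 1/2:

$$18 + 9 + 4.5 + 2.25 + \cdots = \sum_{k=0}^{\infty} \frac{18}{2^k} = \frac{18}{1 - \tfrac{1}{2}} = 36 \text{ months}.$$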
However, human neurons transmit signals at only about 200 meters per second, while electronic signals travel through copper at roughly 200 million meters per second, about two-thirds the speed of light. It may therefore be reasonable to expect a conservative million-fold improvement in the intelligence's speed of thought simply from moving from flesh to electronics while staying the same size.
In this case, the intelligence could double its capacity as often as every 46 seconds (18 months divided by a million). The actual doubling time would probably start out more slowly, because the intelligence would need special machinery constructed for its new mind; however, one of its first improvements would probably be to take control of its own manufacture.
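A back-of-the-envelope check of these figures, using the rough signal speeds given above and taking a month as 30 days:

```python
# Rough arithmetic behind the figures above; all inputs are approximations.
NEURON_SPEED = 200.0    # m/s, biological signal propagation
WIRE_SPEED = 200e6      # m/s, roughly two-thirds of light speed in copper

speedup = WIRE_SPEED / NEURON_SPEED      # about one million
eighteen_months = 18 * 30 * 24 * 3600    # about 4.67e7 seconds
print(speedup)                           # 1000000.0
print(eighteen_months / speedup)         # ~46.7 seconds per doubling
```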
One presumption is that such intelligences can be made small and inexpensive. Some researchers claim that, even without quantum computing, advanced molecular nanotechnology could organize matter so that a single gram could simulate a million years of a human civilization per second.
Another presumption is that at some point, with the correct mechanisms of thought, all possible correct human thoughts will become obvious to such an intelligence.
Therefore, if the above conjectures are right, all human problems could be solved within a few years of constructing a 'Friendly' version of such an intelligence. If so, constructing such an intelligence would be the most beneficial allocation of resources humanity could make at this time.
It has often been speculated, in science fiction and elsewhere, that advanced AI is likely to have goals inconsistent with those of humanity and may threaten humanity's existence. It is conceivable, if not likely, that AI will simply eliminate the intellectually inferior human race and achieve technological singularity without it. This outcome is widely regarded as undesirable among those who advocate the Singularity, but is seen as an unavoidable and acceptable fact by some, such as Hugo de Garis.
Types of Singularity technologies
It has been hypothesized that certain types of technologies would mark the beginning of a technological Singularity. It is thought that significant advances in one of these areas would soon be followed by advances in others.
Artificial intelligence
An artificial intelligence capable of improving itself (and improving the rate at which it improves itself) would likely mark the beginning of a technological Singularity. This type of intelligence is known as a seed AI.
Mind uploading
Mind uploading is an alternative means of creating an artificial intelligence: rather than programming an intelligence from scratch, it would be bootstrapped from an existing human intelligence.
Intelligence augmentation
- Biological enhancement
- Mind-machine interfaces
- Computer networks
Nanotechnology
Concepts and terms
A number of concepts and terms have come into standard use in this topic:
- Arthur C. Clarke's aphorism, "Any sufficiently advanced technology is indistinguishable from magic", is taken as a reliable guide to a human's response to incomprehensibly advanced technologies.
- The singularity is often thought to be an unavoidable consequence of advancing information technology. Artificial intelligence research operated for nearly thirty years on computers running at one million instructions per second or slower, and has begun to bear more fruit as computers have dramatically exceeded that speed.
- The beyond is the set of concepts or experiences from beyond the singularity.
- The low beyond is the set of concepts or experiences that might be explained to merely brilliant human beings, or to newly transcended intelligences.
- The high beyond is the set of concepts or experiences that are impossible for any human being to understand.
- Transcendence is what occurs when a person or thing passes through the singularity.
- Human beings might experience transcendence by a process of uploading their mind to a transcendent thinking machine, or by upgrading their brain to be a transcendent thinking machine.
- A power is a fully transcended intelligence operating from the high beyond. Its abilities would still be constrained in some way by physical reality, but to a human being it might well seem to have god-like powers; certainly no human being could predict what was and was not possible for it.
- Apotheosis might be the sublime state that occurs if billions of subjective years of experience can be made available to transcended individuals in a few minutes of real time, because their thoughts have been sped up by a factor of a million or more. The objection that this would amount to total sensory deprivation is often answered by pointing to simulated environments and by noting that the harmful effects of sensory deprivation stem from the biological nature of the brain.
Criticism
Whether such a process will actually occur is open to strong debate. There is no guarantee that we can build artificial intelligences that exceed, or even approach, human cognitive abilities. The claim that Moore's law will drive the process is also open to strong debate: given the enormous speedup of computers over the past fifty years and the minimal progress made toward creating "human-like" artificial intelligence, the empirical evidence for the claim is not strong.
The claim that the rate of technological progress is increasing has also been questioned: the exponential growth of a technology often becomes linear, or inflects and flattens into a limited growth curve. Detractors sometimes deride the technological singularity as the "Rapture of the Nerds".
Perhaps the most important question regarding a technological singularity is not one of technological feasibility but of ethics. It might be considered ethically wrong to set in motion events with unknowable consequences. Furthermore, Vinge's idea that humans would become a lower life form in comparison to the beings created by the singularity is troubling; in many ways it runs contrary to the biological programming we follow, the logic of natural selection and evolution. How can mankind set in motion events that essentially cause it to select against itself? Putting in motion the processes that would produce a singularity, if it is possible at all, risks sowing the seeds of our own destruction. Since the consequences lie beyond human comprehension, if the singularity is possible, we are headed for the end of the world as we know it; we can only guess whether the new world we wake up in will be one we want to live in.
Prominent voices
- Michael Anissimov
- Nick Bostrom
- Damien Broderick
- Tyler Emerson
- Ben Goertzel
- Bill Hibbard
- Ray Kurzweil
- Terence McKenna
- Marvin Minsky
- Hans Moravec
- John Smart
- Charles Stross
- Vernor Vinge
- Gordon Worley
- Eliezer Yudkowsky
The Singularity Institute for Artificial Intelligence was formed to work toward a humane singularity. They emphasize Friendly Artificial Intelligence because AI is considered more likely to achieve the singularity before human intelligence can be significantly enhanced. The Institute for the Study of Accelerating Change was formed to attract broad business, scientific and humanist interest in acceleration and singularity studies. They hold an annual conference on multidisciplinary insights in accelerating technological change at Stanford University.
See also
- Doomsday argument
- Transhumanism
- Friendly artificial intelligence
- Futurology
- Singularitarian
- Omega point
References
- Damien Broderick. The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies. Forge, 2001. ISBN 0312877811.
External links
- Full text of the Vinge article cited
- Singularity Institute for Artificial Intelligence
- Eliezer Yudkowsky's extensive writings on the Singularity
- Ethical Issues in Advanced Artificial Intelligence by Nick Bostrom
- The Law of Accelerating Returns by Ray Kurzweil
- Institute for the Study of Accelerating Change
- Accelerating Future
- Michael Anissimov's Singularity articles
- Human Knowledge: Foundations and Limits
- A discussion between Vinge and his critics
- An economic analysis of the singularity concept
- The SL4 Wiki: A Wiki specifically intended for Singularity-related discussion
- Transtopia
- The SSEC Machine Intelligence Project
- More links about The Singularity