Strong AI
Strong AI is a term used by futurists, science fiction writers and forward-looking researchers to describe artificial intelligence that matches or exceeds human intelligence.[1] Strong AI is also referred to as the ability to perform "general intelligent action",[2] or as "artificial general intelligence",[3] "artificial consciousness", "sentience", "sapience", "self-awareness" or "consciousness"[4] (although there are subtle differences in the use of each of these terms).
Some references classify artificial intelligence research into "strong AI, applied AI and cognitive simulation."[5] Applied AI (also called "narrow AI"[1] or "weak AI"[6]) refers to the use of software to study or accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases, are completely outside of) the full range of human cognitive abilities.
History
Origin of the term
The term "strong AI" was adopted from the name of an argument in the philosophy of artificial intelligence first identified by John Searle in 1980.[7] He wanted to distinguish between two different hypotheses about artificial intelligence:[8]
- An artificial intelligence system can think and have a mind.[9]
- An artificial intelligence system can (only) act like it thinks and has a mind.
The first is called "the strong AI hypothesis" and the second "the weak AI hypothesis", because the first makes the stronger claim: it assumes that something special has happened to the machine, something that goes beyond the abilities we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage, which is fundamentally different from the subject of this article, is common in academic AI research and textbooks.[10]
The term "strong AI" is now used to describe any artificial intelligence system that acts like it has a mind,[1] regardless of whether a philosopher would be able to determine if it actually has a mind or not. As Russell and Norvig write: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[11]
AI researchers are interested in a related statement (that some sources confusingly call "the strong AI hypothesis"):[12]
- An artificial intelligence system can think (or act like it thinks) as well or better than people do.
This assertion, which hinges on the breadth and power of machine intelligence, is the subject of this article.
Strong AI research
Modern AI research began in the mid-1950s.[13] The first generation of AI researchers were convinced that strong AI was possible and that it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[14] Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who accurately embodied what AI researchers believed they could create.
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. The agencies that funded AI became skeptical of strong AI and put researchers under increasing pressure to produce useful technology, or "applied AI". By 1974, funding for AI projects was hard to find.[15]
As the eighties began, Japan's fifth generation computer project revived interest in strong AI, setting out a ten-year timeline that included strong AI goals like "carry on a casual conversation".[16] In response to this and the success of expert systems, both industry and government pumped money back into the field.[17] However, the market for AI spectacularly collapsed in the late 1980s and the goals of the fifth generation computer project were never fulfilled.[18] For the second time in twenty years, AI researchers who had predicted the imminent arrival of strong AI were shown to have been fundamentally mistaken about what they could accomplish.
By the 1990s, AI researchers had gained a reputation for making promises they could not keep. Many AI researchers today are reluctant to make any kind of prediction at all[19] and avoid any mention of "human level" artificial intelligence, for fear of being labeled a "wild-eyed dreamer."[20] For the most part, researchers today choose to focus on specific sub-problems where they can produce verifiable results and commercial applications, such as neural nets, computer vision or data mining,[21] and most believe that these sub-problems must be solved before machines with strong AI can exist.[22] Interest in direct research into strong AI tends to come from outside the field, from internet entrepreneurs (such as Jeff Hawkins) or from futurists such as Ray Kurzweil.
Defining strong AI
A machine falls within the scope of strong AI if it approaches or surpasses human intelligence: if it can perform typically human tasks, apply a wide range of background knowledge, and exhibit some degree of self-consciousness. John McCarthy stated in his work What is AI? that we still do not have a solid definition of intelligence. Human-bound definitions of measurable intelligence, like IQ, cannot easily be applied to machine intelligence.
The most famous definition of AI was the operational one proposed by Alan Turing in his "Turing test" proposal. There have been very few attempts to create such a definition since (some of them are in the AI Project).
A proposal to define a more easily quantifiable measure of artificial intelligence is:
Intelligence is the possession of a model of reality and the ability to use this model to conceive and plan actions and to predict their outcomes. The higher the complexity and precision of the model, the plans, and the predictions, and the less time needed, the higher the intelligence.[1]
Research approaches
Artificial general intelligence
Artificial General Intelligence research aims to create AI that can replicate human-level intelligence completely, often called an Artificial General Intelligence (AGI) to distinguish it from less ambitious AI projects. As yet, researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. Some small groups of computer scientists are doing AGI research, however. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute (AGIRI) and the Singularity Institute for Artificial Intelligence. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the Palm Pilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective.
Simulated human brain model
This is seen by many[who?] as the quickest means of achieving strong AI, as it does not require a complete understanding of how intelligence works. Basically, a very powerful computer would simulate a human brain, often in the form of a network of neurons. For example, given a map of all (or most) of the neurons in a functional human brain, and a good understanding of how a single neuron works, a computer program could simulate the working brain over time. Given some method of communication, this simulated brain might then be shown to be fully intelligent. The exact form of the simulation varies: instead of neurons, a simulation might use groups of neurons, or alternatively, individual molecules might be simulated. It is also unclear which portions of the human brain would need to be modeled: humans can still function while missing portions of their brains, and some areas of the brain are associated with activities (such as breathing) that might not be necessary for thought.[citation needed]
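To make the neuron-level approach concrete, the following is a minimal sketch in Python of the kind of simulation described above, using a simple leaky integrate-and-fire model. It is illustrative only: the neuron model, the connectivity, and every parameter are simplifying assumptions, not a recipe for whole-brain simulation.

```python
import numpy as np

# Minimal leaky integrate-and-fire network: each neuron's membrane potential
# decays toward rest and jumps when presynaptic neurons spike. All parameters
# are illustrative assumptions, not biologically calibrated values.
rng = np.random.default_rng(0)

N = 1000                      # number of neurons (a real brain has ~10^11)
dt = 1e-3                     # time step in seconds
tau = 0.02                    # membrane time constant in seconds
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

# Sparse random connectivity: ~100 synapses per neuron here,
# versus roughly 1,000 per neuron at the figures quoted in this article.
weights = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.1)

v = np.full(N, v_rest)
total_spikes = 0
for step in range(1000):      # simulate one second of activity
    spikes = v >= v_thresh                 # which neurons fire this step
    total_spikes += int(spikes.sum())
    v[spikes] = v_reset                    # reset neurons that fired
    synaptic_input = weights @ spikes      # summed input from spiking neighbors
    external_drive = rng.random(N) * 0.06  # stand-in for sensory input
    # Euler integration of dv/dt = (v_rest - v) / tau, plus inputs
    v += dt * (v_rest - v) / tau + synaptic_input + external_drive

print(f"{total_spikes} spikes across {N} neurons in one simulated second")
```

Even this toy network exposes the scaling problem: the loop above updates about 10^3 neurons and 10^5 synapses, while a whole-brain simulation at the figures quoted later in this section would need roughly 10^11 neurons and 10^14 synapses.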
This approach would require three things:
- Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil estimates 10 million MIPS (roughly 10^13 operations per second, or ten teraflops). At least one special-purpose petaflops computer has already been built (the Riken MDGRAPE-3) and there are nine current computing projects (such as BlueGene/P) to build more general-purpose petaflops computers, all of which should be completed by 2008, if not sooner.[2] Most other estimates of the brain's computational power have been rather higher, ranging from 100 million MIPS to 100 billion MIPS (10^14 to 10^17 operations per second). Furthermore, the overhead introduced by modeling the biological details of neural behaviour might require a simulator to have access to computational power much greater than that of the brain itself.
- Software. Software to simulate the function of a brain would be required. This assumes that the human mind is a product of the central nervous system and is governed by physical laws. Constructing the simulation would require a great deal of knowledge about the physical and functional operation of the human brain, and might require detailed information about a particular human brain's structure. Information would be required both about the function of different types of neurons, and about how they are connected. Note that the particular form of the software dictates the hardware necessary to run it: an extremely detailed simulation including molecules or small groups of molecules would require enormously more processing power than a simulation that models neurons using a simple equation, and a more accurate model of a neuron would be much more expensive computationally than a simple model. The more neurons in the simulation, the more processing power it would require (see the cost sketch after this list).
- Understanding. Finally, building the simulation requires sufficient understanding of the brain to model it mathematically. This could be achieved either by understanding the central nervous system, or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power. However, the simulation would also have to capture the detailed cellular behaviour of neurons and glial cells, presently understood only in the broadest of outlines.
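As a rough illustration of how the choice of software model drives the hardware requirement, the sketch below estimates the compute needed as a function of model detail. The synapse count follows the figures quoted later in this section; the update rate and the operations per synapse update are illustrative assumptions only.

```python
# Back-of-envelope FLOPS estimate for brain simulation at varying model detail.
# The synapse count follows the figures quoted in this article; the update
# rate and ops-per-update values are illustrative assumptions.

SYNAPSES = 1e14           # ~100 trillion synapses
UPDATE_RATE_HZ = 1e3      # assumed updates per second for each synapse

def required_flops(ops_per_synapse_update: float) -> float:
    """FLOPS needed to update every synapse at the assumed rate."""
    return SYNAPSES * UPDATE_RATE_HZ * ops_per_synapse_update

# A neuron modeled by a simple equation vs. a detailed biophysical model:
print(f"simple point-neuron model: {required_flops(10):.0e} FLOPS")      # 1e+18
print(f"detailed molecular model:  {required_flops(10_000):.0e} FLOPS")  # 1e+21
```

Under these assumptions the simple-model figure lines up with the 10^18 FLOPS estimate quoted below for a first unoptimized simulation, and the detailed model shows why molecule-level simulation is expected to be vastly more expensive.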
Once such a model is built, it could be easily altered and thus opened to trial-and-error experimentation. This would be likely to lead to huge advances in understanding, allowing the model's intelligence to be improved or its motivations altered.[dubious]
The Blue Brain project aims to use one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5 km of interconnecting synapses. The eventual goal of the project is to use supercomputers to simulate an entire brain.
The brain gets its power from performing many operations in parallel; a standard computer gets its power from performing operations very quickly.
The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses.[23] By comparison, a modern computer microprocessor uses only 1.7 billion transistors.[3] Although estimates of the brain's processing power put it at around 10^14 neuron updates per second,[24] it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10^18 FLOPS. By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS (10^9 FLOPS), and each FLOP may require as many as 20,000 logic operations.
However, a neuron is estimated to spike at most 200 times per second (giving an upper limit on the number of operations).[citation needed] Signals between neurons are transmitted at a maximum speed of 150 meters per second. A modern 2 GHz processor operates at 2 billion cycles per second, or 10,000,000 times faster than a human neuron, and signals in electronic computers travel at roughly half the speed of light, faster than signals in the human brain by a factor of about 1,000,000.[citation needed] The brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, roughly 100,000 times more (note: the Landauer limit is 3.5×10^20 operations per second per watt at room temperature).
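The ratios in the paragraph above can be checked with simple arithmetic; the sketch below uses only the figures quoted in this section.

```python
# Back-of-envelope check of the brain-vs-computer ratios quoted above.

neuron_spike_rate_hz = 200           # upper-limit neuron firing rate
cpu_clock_hz = 2e9                   # a modern 2 GHz processor
print(cpu_clock_hz / neuron_spike_rate_hz)                 # 1e7: "10,000,000 times faster"

neural_signal_m_per_s = 150          # maximum neural conduction speed
electronic_signal_m_per_s = 3e8 / 2  # roughly half the speed of light
print(electronic_signal_m_per_s / neural_signal_m_per_s)   # 1e6: "factor of about 1,000,000"

brain_power_w = 20                   # brain power consumption
supercomputer_power_w = 1e6          # a 1 MW supercomputer
print(supercomputer_power_w / brain_power_w)               # 5e4: roughly 100,000 times more
```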
Neuro-silicon interfaces have also been proposed.[4][5]
Critics of this approach[who?] believe that it is possible to achieve AI directly, without imitating nature, and often use the analogy that early attempts to construct flying machines modeled them after birds, yet modern aircraft do not look like birds.[citation needed]
Artificial consciousness research
Emergence
Some[who?] have suggested that intelligence can arise as an emergent quality from the convergence of random, man-made technologies. Human sentience, or any other biological and naturally occurring intelligence, arises out of the natural process of species evolution and an individual's experiences. Discussion of this eventuality is currently limited to fiction and theory.[citation needed]
See also
- History of artificial intelligence
- Technological singularity, also known as "the Singularity"
- Singularity Institute for Artificial Intelligence
External links
- Expanding Frontiers of Humanoid Robots
- AI lectures from Tokyo hosted by Rolf Pfeifer
- Artificial General Intelligence Research Institute
- The Genesis Group at MIT's CSAIL — Modern research on the computations that underlay human intelligence
- Essentials of general intelligence, article at Adaptive AI.
- Problems with Thinking Robots
- www.cs.utoronto.ca
- www.otterbein.edu lecture notes
- www.senapps.com
- www.eng.warwick.ac.uk
- www.thetechzone.com: article on new modelling ideas such as fuzzy logic and quantum-inspired parallel distributed networks.
Notes
- ↑ Template:Harv or see Advanced Human Intelligence
- ↑ Newell & Simon 1963. This is the term they use for "human-level" intelligence in the physical symbol system hypothesis.
- ↑ Voss 2006
- ↑ These terms are not used here in their standard definitions, as understood by psychology, neuroscience or cognitive science, but as place-markers for a term that describes the essential property of human intelligence required by strong AI.
- ↑ Encyclopedia Britannica Strong AI, applied AI, and cognitive simulation or Jack Copeland What is artificial intelligence? on AlanTuring.net
- ↑ The Open University on Strong and Weak AI
- ↑ Searle 1980
- ↑ As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." (Russell & Norvig 2003)
- ↑ The word "mind" is has a specific meaning for philosophers, as used in the mind body problem or the philosophy of mind
- ↑ Among the many sources that use the term in this way are: Russell & Norvig 2003, Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia"), MIT Encyclopedia of Cognitive Science (quoted in "AITopics"), Planet Math, Arguments against Strong AI (Raymond J. Mooney, University of Texas), Artificial Intelligence (Rob Kremer, University of Calgary), Minds, Math, and Machines: Penrose's thesis on consciousness (Rob Craigen, University of Manitoba), The Science and Philosophy of Consciousness Alex Green, Philosophy & AI Bernard, Will Biological Computers Enable Artificially Intelligent Machines to Become Persons? Anthony Tongen, and the Usenet FAQ on Strong AI
- ↑ Russell & Norvig 2003, p. 947
- ↑ A few sources where "strong AI hypothesis" is used this way: Strong AI Thesis, Neuroscience and the Soul
- ↑ Crevier 1993, pp. 48–50
- ↑ Simon 1965, p. 96 quoted in Crevier 1993, p. 109
- ↑ The Lighthill report specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England. (Lighthill 1973) (Howe 1994) In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic undirected research". See (NRC 1999) under "Shift to Applied Research Increases Investment". See also (Crevier 1993) and (Russell & Norvig 2003)
- ↑ Crevier 1993, p. 211, Russell & Norvig 2003, p. 24 and see also Feigenbaum & McCorduck 1983
- ↑ Crevier 1993, pp. 161–162, 197–203, 240, Russell & Norvig 2003, p. 25, NRC 1999 under "Shift to Applied Research Increases Investment"
- ↑ Crevier 1993, pp. 209–212
- ↑ As AI founder John McCarthy wrote in his Reply to Lighthill, "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."
- ↑ "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." Markoff, John (2005-10-14). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Retrieved 2007-07-30.
- ↑ Russell & Norvig 2003, pp. 25-26
- ↑ Hans Moravec wrote in 1988 "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts." (Moravec 1988)
- ↑ "nervous system, human." Encyclopædia Britannica. 9 Jan. 2007
- ↑ Russell & Norvig 2003
References
- Crevier, Daniel (1993), AI: The Tumultuous History of the Search for Artificial Intelligence, New York: BasicBooks, ISBN 0-465-02997-3
- Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 0-7181-2401-4
- Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press
- Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
- Moravec, Hans (1976), The Role of Raw Power in Intelligence
- Moravec, Hans (1988), Mind Children, Harvard University Press
- Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E.A.; Feldman, J., Computers and Thought, McGraw-Hill
- NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, retrieved 30 August 2007
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2
- Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences, 3 (3): 417–457
- Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row
- Voss, Peter (2006), "Essentials of general intelligence", in Goertzel, Ben; Pennachin, Cassio, eds., Artificial General Intelligence, Springer, ISBN 3-540-23733-X