Artificial intelligence
Artificial intelligence, commonly abbreviated as AI and also known as machine intelligence, was defined as "making a machine behave in ways that would be called intelligent if a human were so behaving" by John McCarthy in his 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence, which first introduced the term. To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules.
Much of the original focus of artificial intelligence research draws from an experimental approach to psychology and emphasizes what may be called linguistic intelligence (best exemplified in the Turing test).
Approaches to artificial intelligence that do not center on linguistic intelligence include robotics and collective intelligence, which emphasize active manipulation of an environment or consensus decision making, and which draw from biology and political science for models of how "intelligent" behavior is organized.
Artificial intelligence theory also draws from animal studies, in particular of insects, which are easier to emulate as robots (see artificial life), and of apes, which resemble humans in many ways but have less developed capacities for planning and cognition and so, those who pursue this line of research argue, ought to be considerably easier to mimic.
Seminal papers advancing the concept of machine intelligence include A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), by Warren McCulloch and Walter Pitts, and Computing Machinery and Intelligence (1950), by Alan Turing. See cybernetics and Turing Test for further discussion.
There were also early papers that denied the possibility of machine intelligence on logical or philosophical grounds, such as Minds, Machines and Gödel (1961) by John Lucas. Over time, debates have tended to focus less on "possibility" and more on "desirability", as emphasized in the "Cosmist" (versus "Terran") debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to de Garis, actively seeks to build more intelligent successors to the human species. The emergence of this debate suggests that desirability questions may also have influenced some of the early thinkers "against". The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, in science fiction, and today in specialized areas where "expert systems" are used to augment or replace professional judgment in parts of engineering and medicine.
Artificial intelligence began as a field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie Mellon University, and John McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the 1956 Dartmouth College summer AI conference that grew out of McCarthy's proposal; the conference was organized by McCarthy, Minsky, Nathaniel Rochester of IBM, and Claude Shannon.
Historically, there have been two broad styles of AI research: the "neats" and the "scruffies". "Neat" AI research, in general, involves symbolic manipulation of abstract concepts and is the methodology used in most expert systems. Parallel to this are the "scruffy" approaches, of which neural networks are the best-known example; these try to "evolve" intelligence by building systems and then improving them through some automatic process rather than systematically designing something to complete the task. Both approaches appeared very early in AI history. Throughout the 1960s and 1970s scruffy approaches were pushed into the background, but interest was regained in the 1980s when the limitations of the "neat" approaches of the time became clearer. However, it has become clear that contemporary methods using both broad approaches have severe limitations.
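To make the contrast concrete, the following is a minimal illustrative sketch in Python, not drawn from any historical system: a hand-written symbolic rule stands in for the "neat" style, while a perceptron that improves its weights automatically from labelled examples stands in for the "scruffy" style. The task, the line, the learning rate, and the epoch count are all invented for this example.

```python
# Illustrative sketch only: a toy contrast between the "neat" and "scruffy"
# styles on the same task (deciding whether a point lies above a given line).
# The task, line, learning rate, and epoch count are invented for this example.

# "Neat" style: the desired behaviour is written down directly as a symbolic rule.
def neat_classifier(x, y):
    return "above" if y > 2 * x + 1 else "below"

# "Scruffy" style: the behaviour emerges from an automatic improvement process.
# A single perceptron adjusts its weights whenever it misclassifies an example.
def train_perceptron(examples, epochs=25, lr=0.1):
    w_x, w_y, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), label in examples:            # label: +1 for "above", -1 for "below"
            prediction = 1 if w_x * x + w_y * y + b > 0 else -1
            if prediction != label:               # nudge the weights toward the right answer
                w_x += lr * label * x
                w_y += lr * label * y
                b += lr * label
    return w_x, w_y, b

# Training data is generated from the very rule the "neat" version encodes by hand.
examples = [((x, y), 1 if y > 2 * x + 1 else -1)
            for x in range(-5, 6) for y in range(-5, 6)]

w_x, w_y, b = train_perceptron(examples)
print("hand-written rule:", neat_classifier(0, 3))
print("learned weights:  ", round(w_x, 2), round(w_y, 2), round(b, 2))
```

On this toy task both routes arrive at similar behaviour, but one is specified in advance and the other is induced from examples, which is the essential difference the two camps argued over.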
Whilst progress towards the ultimate goal of human-like intelligence has been slow, many spinoffs have emerged along the way. Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks as well. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as McCarthy, Minsky, Seymour Papert (who developed Logo there), and Terry Winograd (who abandoned AI after developing SHRDLU).
Many other useful systems have been built using technologies that at least once were active areas of AI research. Some examples include:
- Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous match in 1997.
- Fuzzy logic, a technique for reasoning with degrees of truth rather than strict true/false values, has been widely used in industrial control systems (a small sketch follows this list).
- Expert systems have been widely deployed industrially.
- Machine translation systems such as SYSTRAN are widely used, although results are not yet comparable with human translators.
- Neural networks have been used for a wide variety of tasks, from intrusion detection systems to computer games.
- Optical character recognition systems can convert arbitrary typewritten text in European scripts into machine-readable text.
- Handwriting recognition is used in millions of personal digital assistants.
- Speech recognition is commercially available and is widely deployed.
- Computer algebra systems, such as Mathematica and Macsyma, are commonplace.
- Machine vision systems are used in many industrial applications.
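As a concrete illustration of the fuzzy logic entry above, the following is a minimal Python sketch of fuzzy control. The set names, temperature ranges, and fan rule are invented for the example and are not taken from any real industrial system.

```python
# Illustrative sketch only: the fuzzy sets, temperature ranges, and fan rule
# below are invented for this example, not taken from any real control system.

def cold(temp_c):
    """Degree of membership in 'cold': 1 at 15 C or below, fading to 0 at 25 C."""
    return max(0.0, min(1.0, (25.0 - temp_c) / 10.0))

def hot(temp_c):
    """Degree of membership in 'hot': 0 at 15 C or below, rising to 1 at 25 C."""
    return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))

def fan_speed(temp_c):
    """Blend two rules -- 'if cold then run slow' and 'if hot then run fast' --
    by taking a weighted average of their outputs (a simple defuzzification)."""
    slow, fast = 0.2, 1.0                     # arbitrary output levels for the two rules
    w_cold, w_hot = cold(temp_c), hot(temp_c)
    total = w_cold + w_hot
    if total == 0:                            # neither rule fires; fall back to a middle speed
        return 0.6
    return (w_cold * slow + w_hot * fast) / total

for t in (5, 18, 20, 22, 35):
    print(t, "C ->", round(fan_speed(t), 2))
```

Between 15 C and 25 C both rules fire partially, so the output shades smoothly from slow to fast instead of jumping at a single threshold, which is the point of fuzzy control.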
Fields in AI
- Knowledge representation
- Machine learning
- Machine planning
- Neural network
- Expert system
- Genetic programming
- Genetic algorithm
- Natural language processing
- Computer vision
- Robotics
Artificial intelligence in fiction:
- HAL 9000 in 2001: A Space Odyssey
- A.I. Artificial Intelligence
See also: Artificial intelligence projects, computer science, cognitive science, consciousness, Searle's Chinese room, semantics, The Singularity, collective intelligence, cybernetics, psychology.
Loebner Prize website at: http://www.loebner.net/Prizef/loebner-prize.html
A.I. Artificial Intelligence is also the title of a 2001 film originally storyboarded by Stanley Kubrick, who intended to direct it himself once he felt special effects were advanced enough to make it look convincing. Kubrick was slated to direct the film after Eyes Wide Shut but died in 1999 before production began; Steven Spielberg directed it instead.