The Chinese Room argument is a thought experiment and associated arguments designed by John Searle (Searle 1980) as a counter-argument to claims made by supporters of what Searle called strong artificial intelligence: the claim that if a machine acts intelligently, then it has a "mind", "understanding" and "conscious experience". The thought experiment is also intended to raise doubts about the philosophical positions of functionalism and the computational theory of mind, as well as the usefulness of the Turing test as a measure of intelligence. It is closely related to the philosophical questions known as the problem of other minds and the hard problem of consciousness.
Searle's argument originally appeared in his paper "Minds, Brains, and Programs", published in the journal Behavioral and Brain Sciences in 1980.[1] It would eventually become the journal's "most influential target article"[2] and considerable literature has grown up around it. Most of the discussion consists of attempts to refute it: as editor Stevan Harnad notes, "the overwhelming majority still think that the Chinese Room Argument is dead wrong."[3] Pat Hayes has suggested that the field of cognitive science should be defined as "the ongoing research program of showing Searle's Chinese Room Argument to be false."[2] The disagreement is not usually about whether it is wrong: the disagreement is about why. Varol Akman wrote that Searle's paper is "an exemplar of philosophical clarity and purity."[4]
Searle's targets: "strong AI" and computationalism
In 1955, when Herbert Simon and Allen Newell created the first true artificial intelligence program (the Logic Theorist), Simon wrote that they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind."[5] AI founder John McCarthy once said "Machines as simple as thermostats can be said to have beliefs" (as in "I believe it's too hot in here").[6] These are the kinds of statements that the Chinese room argument is designed to attack. Searle calls them "strong AI" and characterizes the position as:
The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.[7]
Searle's argument is not concerned with how intelligently a machine can behave. (Searle's "strong AI" should not be confused with "strong AI" as the term is used by futurists to describe artificial intelligence that rivals human intelligence.) The Chinese room argument does not address that question directly; it leaves open the possibility that a machine could be built that acts intelligently but doesn't have a mind or consciousness in the same way brains do.[8]
Stevan Harnad has argued that Searle is not attacking the field of AI research at all, but is attacking the philosophical position known as computationalism,[9] "a position (unlike 'Strong AI') that is actually held by many thinkers, and hence one worth refuting."[10] Harnad notes that Searle has repeatedly attacked these positions:
- The mind is a computer program — mental states are just computational states.
- The brain is irrelevant, because computational states depend only on software, not hardware. (Searle writes that, on the contrary, "brains cause minds"[11])
- The Turing test is decisive. (When applied to the mind, this position is called functionalism).
all of which are, according to Harnad, "recognizable tenets of computationalism".[12]
Chinese Room thought experiment
Searle asks us to imagine that many years from now, we have constructed a computer that behaves as if it understands Chinese. The computer takes Chinese characters as input and, following a program, produces other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the computer program, and processes the Chinese characters according to the instructions in the book. Searle notes that he doesn't, of course, understand a word of Chinese. He simply manipulates what to him are meaningless squiggles, using the book and whatever other equipment is provided in the room, such as paper, pencils, erasers, and filing cabinets. After manipulating the symbols, Searle produces an answer in Chinese. Since the computer passes the Turing test, so does Searle when he runs its program by hand: "Nobody just looking at my answers can tell that I don't speak a word of Chinese," Searle writes.[13]
Searle argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
Replies
There are numerous replies to Searle's argument. Almost everyone agrees that the man in the room does not understand Chinese. The replies can be classified by what they claim to show.[14]
- Those that identify who it is that speaks Chinese.
- Those that demonstrate how meaningless symbols can become meaningful.
- Those that suggest that the Chinese room should be redesigned more along the lines of a brain.
- Those that demonstrate ways that Searle's argument is misleading.
Some of the arguments (robot and brain simulation, for example) fall into multiple categories.
System and virtual mind replies: finding the mind
These two replies attempt to answer the question: since the man in the room doesn't speak Chinese, where is the "mind" that does?
These replies address the key ontological issues of mind vs. body and simulation vs. reality.
Systems reply.[15] The "systems reply" argues that the whole system, consisting of the room, the man and the cards is what understands Chinese. The man is part of the system, just as the hippocampus is a part of the brain. The fact that the man understands nothing is irrelevant, and is no more surprising than the fact that the hippocampus understands nothing by itself.
Searle's response is to consider what happens if the man memorizes the rules, and keeps track of everything in his head. Then he is the whole system, and yet he still doesn't understand Chinese.[16]
Searle also argues that these critics are missing the point.[17] We are trying to find the mind that understands Chinese. The room as a whole (made of wood, nails and bricks) is nothing like a mind, unless we resort to some kind of dualism:[18] that a mind mystically "emerges" from the system and exists metaphysically.[19]
Virtual mind reply.[20] A more precise response is that there is a Chinese-speaking mind in Searle's room, but that it is virtual. A fundamental property of computing machinery is that one machine can "implement" another: any (Turing complete) computer can do a step-by-step simulation of any other machine.[21] In this way, a machine can be two machines at once: for example, it can be a Macintosh and a word processor at the same time. A virtual machine depends on the hardware (if you turn off the Macintosh, you turn off the word processor as well), yet it is distinct from the hardware. (This is how the position resists dualism, the idea that the mind is a separate "substance": there can be two machines in the same place, both made of the same substance, if one of them is virtual.) A virtual machine is also "implementation independent", in that it doesn't matter what sort of hardware it runs on: a PC, a Macintosh, a supercomputer, a brain, or Searle in his Chinese room.[22] Cole extends this argument to show that a program could be written that implements two minds at once: for example, one speaking Chinese and the other Korean. While there is only one system and only one man in the room, there may be an unlimited number of "virtual minds."[23]
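As a rough illustration of the "one machine implementing another" point, the sketch below (a toy example with invented names, not drawn from Cole or Searle) runs two independent rule-driven agents on a single host program: switching off the host switches off both agents, yet neither agent is identical with the host or with the other.

```python
# Toy illustration (invented example): one "host" program implements two
# independent rule-driven agents at once. Each agent is defined purely by its
# rule table, so it is "implementation independent" of the hardware running it.

class VirtualAgent:
    """A 'machine' defined entirely by a rule table: (state, input) -> (reply, next_state)."""
    def __init__(self, rules, start="s0"):
        self.rules = rules
        self.state = start

    def step(self, symbol):
        reply, self.state = self.rules.get((self.state, symbol), ("?", self.state))
        return reply

# Two different rule tables; the same host "implements" both agents.
chinese_rules = {("s0", "你好"): ("你好！", "s0")}       # toy Chinese greeting rule
korean_rules  = {("s0", "안녕"): ("안녕하세요!", "s0")}   # toy Korean greeting rule

host = [VirtualAgent(chinese_rules), VirtualAgent(korean_rules)]
print(host[0].step("你好"))   # the "Chinese" agent replies
print(host[1].step("안녕"))   # the "Korean" agent replies
```

Whether such a virtual agent could ever amount to a "mind" is, of course, exactly what is in dispute; the sketch only illustrates the sense in which one system can host several distinct machines.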
Searle would respond that such a mind is only a simulation. He writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[24] Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter."[25] The question is whether the human mind is like the pocket calculator, essentially composed of information, or like the rainstorm, which can't be duplicated using digital information alone.
What they do and don't prove. These replies provide an explanation of what in the room is supposed to understand Chinese. They show that the mental state of the man is irrelevant. They prevent Searle from arguing that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument does not prove that "strong AI" is false.[26]
However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."[27] Without additional evidence, both Searle and his critics are left with the intuitions they had at the start: Searle can't imagine that a simulated mind can "understand", while his critics can.
Robot and semantics replies: finding the meaning
As far as the man in the room is concerned, the symbols he writes are just meaningless "squiggles." But if the Chinese room really "understands" what it's saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize.
These replies address Searle's concerns about intentionality and syntax vs. semantics.
Robot reply.[28] Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. Surely then it would be said to understand what it is doing? This would allow a "causal connection" between the symbols and the things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[29]
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[30] (See Mary's Room for a similar thought experiment.)
Derived meaning.[31] Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols he manipulates are already meaningful, they're just not meaningful to him.
Searle complains that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.[32]
Commonsense knowledge.[33] Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.[34] Another criticism is that the meanings here are only defined relative to one another; they are like a vast Chinese-Chinese dictionary — useless if you only speak English.
What they do and don't prove. To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."[35]
However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what matters is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.
Brain simulation and connectionist replies: redesigning the room
These arguments are all versions of the systems reply that identify a particular kind of system as being important. Like the "robot" and "commonsense knowledge" replies given above, they try to outline what kind of a system would be able to pass the Turing test. By being more similar to brains, they strengthen the intuition that such a machine could "understand."
Brain simulator reply.[36] Suppose that the program instantiated in the rule book simulated in fine detail the interaction of the neurons in the brain of a Chinese speaker. Then surely the program must be said to understand Chinese?
Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains."[37] His position, that (only) "brains cause minds" is called "biological naturalism" (as opposed to alternatives like behaviorism, functionalism, identity theory or dualism).[38]
Chinese nation.[39] What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.
Brain replacement scenario.[40] In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once).[41]
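A toy sketch of the functional-equivalence assumption behind this scenario (the threshold-unit neuron model below is our own simplification, not anything proposed by the scenario's authors): if each digital replacement computes exactly the same input-output function as the neuron it replaces, then swapping neurons one at a time never changes the network's externally observable behavior.

```python
# Toy sketch (simplified model of our own): a "biological" neuron and its
# "digital" prosthesis compute the same input-output function, so replacing
# them one at a time leaves the network's observable behavior unchanged.

class BiologicalNeuron:
    def __init__(self, weights, threshold):
        self.weights, self.threshold = weights, threshold
    def fire(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) >= self.threshold

class DigitalNeuron(BiologicalNeuron):
    """Functionally identical replacement: same weights, same threshold."""
    pass

network = [BiologicalNeuron([1.0, -0.5], 0.4), BiologicalNeuron([0.3, 0.9], 0.5)]
stimulus = [1.0, 1.0]
before = [n.fire(stimulus) for n in network]

for i, old in enumerate(network):                          # replace one neuron at a time
    network[i] = DigitalNeuron(old.weights, old.threshold)
    assert [n.fire(stimulus) for n in network] == before   # behavior never changes
```

The dispute is not over the behavior, which by construction stays the same, but over whether conscious awareness could survive (or would disappear during) such a replacement.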
Combination reply.[42] But what if a brain simulation were connected to the world in such a way that it possessed the causal power of a real brain and were linked to a robot of the type described above? Then surely it would be able to think.
Connectionist replies.[43] Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.
What they do and don't prove. Arguments such as these (and the robot and commonsense knowledge replies above) recommend that Searle's room be redesigned. They are interpreted in two ways:
- The Chinese room argument works: Searle is right, at least for the room as he describes it. Either the room can't pass the Turing test, or, if it does, it wouldn't have a "mind". However, if some improvements are made to the design of the room or the program, a room could be constructed that would have a "mind", "understanding" and "consciousness".[44]
- The Chinese room argument is misleading: Searle is wrong, but it's difficult to see. Redesigning the room more realistically will make it more obvious that Searle is wrong.
In the first case, Searle's replies all point out that, however the program is written, it is still being simulated by a simple, step-by-step Turing complete machine (or machines). Every one of these machines is still, at the ground level, just like Searle in the room: it understands nothing and doesn't speak Chinese.
Searle also argues that, if features like a robot body or a connectionist architecture are required, then strong AI (as he understands it) has been abandoned.[45] Either (1) Searle's room can't pass the Turing test, because formal symbol manipulation is not enough,[46] or (2) Searle's room could pass the Turing test, but it needs more to have a "mind." Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument. This interpretation also suggests that computation can't provide an explanation of the human mind: the brain arguments assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."[47]
In the second case, these are arguments being used as "appeals to intuition." By making the program more realistic, they help AI researchers to visualize how the program might work. Searle's intuition, however, is never shaken. He writes: "I can have any formal program you like, but I still understand nothing."[48]
In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's "blockhead" argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". Any program can be rewritten (or "refactored") into this form, even a brain simulation.[49] It is hard to imagine that such a program would give rise to a "mind" or have "understanding", unless one is already convinced that it has to be so.
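A minimal sketch of the rule form Block describes (the states and table entries below are invented placeholders): the whole "conversation" is driven by nothing but rules of the shape "if the user writes S, reply with P and goto X".

```python
# Minimal Blockhead-style sketch (table contents are invented placeholders).
# Every rule has the form: if the user writes S, reply with P and goto state X.

lookup_table = {
    # (current_state, user_input): (reply, next_state)
    ("X0", "hello"):        ("hi there",     "X1"),
    ("X1", "how are you?"): ("fine, thanks", "X1"),
    ("X1", "bye"):          ("goodbye",      "X0"),
}

def blockhead(state, user_input):
    """Reply with P and goto X, purely by looking up (state, input)."""
    return lookup_table.get((state, user_input), ("...", state))

state = "X0"
for line in ["hello", "how are you?", "bye"]:
    reply, state = blockhead(state, line)
    print(reply)
```

A table large enough to pass the Turing test would be astronomically larger than this, but its structure would be no different, which is why the example weakens the intuition that passing the test guarantees understanding.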
Speed, complexity and other minds: appeals to intuition
Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[50] Daniel Dennett describes the Chinese room argument as an "intuition pump"[51] and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."[52]
The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies.
Speed and complexity.[53] The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second.[54] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
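A back-of-the-envelope calculation makes the point, under two illustrative assumptions of our own: the brain estimate quoted above, and a rule-follower who manages about one rule application per second.

```python
# Back-of-the-envelope estimate (illustrative assumptions only).
brain_ops_per_second = 100_000_000_000   # the estimate cited above
human_ops_per_second = 1                 # assume one hand-applied rule per second
seconds_per_year = 60 * 60 * 24 * 365

# Time needed to hand-simulate one second of brain activity:
years = brain_ops_per_second / human_ops_per_second / seconds_per_year
print(f"{years:,.0f} years")             # roughly 3,200 years per simulated second
```

On these assumptions, each simulated second of brain activity takes on the order of three thousand years; slower rule-following, or an exchange lasting minutes of brain time, pushes the total into the millions of years.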
Churchland's luminous room.[55] An especially vivid version of the speed and complexity reply comes from Paul and Patricia Churchland. They propose an analogous thought experiment: suppose a philosopher is skeptical of Maxwell's equations and finds it inconceivable that light consists of electromagnetic waves. He could go into a dark room and wave a magnet up and down. He would see no light, of course, and he could claim that he had proved light is not an electromagnetic wave and that he had refuted Maxwell's equations. The problem is that he would have to wave the magnet up and down something like 450 trillion times a second in order to see anything.
Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"[56]
Other Minds reply.[57] Searle's argument is just a version of the problem of other minds, applied to machines. Since it's difficult to decide if people are "actually" thinking, we shouldn't be surprised that it's difficult to answer the same question about machines.
The most radical view is that the Chinese room argument actually proves that humans don't have minds, at least not in the sense that Searle insists that we do. Searle argues that there are "causal properties" in our neurons that give rise to the mind. What if these properties don't exist? How could we tell? Perhaps each neuron in the brain is just like Searle, following his rules, utterly unable to give rise to what Searle calls "understanding." Searle's argument suggests that the human mind is epiphenomenal: that it "casts no shadow."[58]
Dennett's Reply from Natural Selection.[59] Daniel Dennett answers Searle with the following argument: suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human, and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies," who nevertheless insist they are conscious. This suggests it is unlikely that Searle's "causal properties" would ever have evolved in the first place: nature has no incentive to create them.
Formal arguments
In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:
- Brains cause minds.
- Syntax is not sufficient for semantics.
- Computer programs are entirely defined by their formal, or syntactical, structure.
- Minds have mental contents; specifically, they have semantic contents.
The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese. Searle posits that these lead directly to four conclusions:
- No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
- The way that brain functions cause minds cannot be solely in virtue of running a computer program.
- Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
- The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.
Searle describes this version as "excessively crude." There has been considerable debate about whether this argument is indeed valid. These discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, and so premises 2, 3 and 4 validly lead to conclusion 1. This leads to debate as to the origin of the semantic content of a computer program.
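One way to display the structure of that reading (a hedged reconstruction in our own notation, not Searle's; the bridging assumption that whatever suffices for a mind must suffice for its semantic content is left implicit) is:

```latex
% Reconstruction of the reading on which premises 2-4 entail conclusion 1
% (notation is ours, not Searle's).
\begin{align*}
\text{(3)}\quad  & \forall p\;(\mathrm{Program}(p) \rightarrow \mathrm{PurelySyntactic}(p)) \\
\text{(2)}\quad  & \forall x\;(\mathrm{PurelySyntactic}(x) \rightarrow \neg\,\mathrm{SufficientForSemantics}(x)) \\
\text{(4)}\quad  & \forall m\;(\mathrm{Mind}(m) \rightarrow \mathrm{HasSemantics}(m)) \\
\text{(C1)}\quad & \forall p\;(\mathrm{Program}(p) \rightarrow \neg\,\mathrm{SufficientForMind}(p))
\end{align*}
```

On this reading the inference is valid; the remaining debate concerns whether premise 3 correctly characterizes programs and whether premise 2, the one supported by the Chinese Room, is true.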
Notes
- ^ Searle 1980
- ^ (Harnad 2001, p. 1) Harnad edited the journal BBS during the years the Chinese Room argument was introduced.
- ^ Harnad 2001, p. 2
- ^ In Akman's review of Mind Design II
- ^ Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17.
- ^ Quoted in Searle (1980, p. 6)
- ^ Searle 1980, p. 1. Among the numerous references that define Searle's strong AI are: Searle 1988 : "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." (also quoted in Dennett 1991, p. 435 and at AI Topics), Russell & Norvig 2003, p. 947: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis.", and also see Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia")
- ^ Cole (2004, p. 14) attributes to AI researchers Simon and Eisenstadt this view: "whereas Searle refutes "logical strong AI", the thesis that a program that passes the Turing Test will necessarily understand, Searle's argument does not impugn "Empirical Strong AI" — the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding." Turing (1950, p. 12) also felt that intentional states or consciousness may not be necessary for intelligence. He writes: "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell & Norvig (2003, p. 953) identify Turing's comments as applying to Searle's arguments, and agree that Searle's argument is not an issue for mainstream AI research. They write: "Most AI researchers ... don't care about the strong AI hypothesis." (Russell & Norvig, p. 947)
- ^ Computationalism is associated with Jerry Fodor and Hilary Putnam. (Horst 2005, p. 1) Harnad also cites Allen Newell and Zenon Pylyshyn. (Harnad 2001, p. 3)
- ^ Harnad 2001, p. 3
- ^ Searle 1984
- ^ Harnad 2001, pp. 3–5
- ^ Searle 1980, p. 2-3
- ^ Cole (2004, pp. 5–6). He combines the middle two categories.
- ^ Searle 1980, pp. 5–6, Cole 2004, pp. 6–7, Hauser 2005, pp. 2–3, Russell & Norvig 2003, p. 959, Dennett 1991, p. 439, Fearn 2007, p. 44, Crevier 1993, p. 269. Among those who hold to this position (according to Cole (2004, p. 6)) are Ned Block, Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey
- ^ Searle 1980, p. 6
- ^ Harnad writes that Searle "found it hard to believe that he plus the walls together could constitute a mental state." (Harnad 2005, p. 2) Searle writes "I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with." (Searle 1980, p. 5)
- ^ Searle accuses his critics of dualism: he claims that, if the "mind" in the Chinese room is not a brain, it must be metaphysical, or (in Descartes' terminology) made of a different "substance". (Searle 1980, p. 13)
- ^ Daniel Dennett calls this form of dualism "Woo woo West Coast emergence". (Crevier 1993, p. 275)
- ^ Cole (2004, p. 7-9) ascribes this position to Marvin Minsky, Tim Maudlin, David Chalmers, and David Cole.
- ^ This is the point of the universal Turing machine and the Church-Turing thesis: what makes a system Turing complete is its ability to do a step-by-step simulation of any other machine.
- ^ The terminology "implementation independent" is due to Harnad (2001, p. 4).
- ^ Cole 2004, p. 8
- ^ Searle 1980, p. 12
- ^ Fearn 2007, p. 47
- ^ Cole (2004, p. 21) writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."
- ^ Searle 1980, p. 6
- ^ Searle 1980, p. 7, Cole 2004, pp. 9–11, Hauser 2006, p. 3, Fearn 2007, p. 44. Cole (2004, p. 9) ascribes this position to Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey
- ^ Quoted in Crevier 1993, p. 272. Cole (2004, p. 18) calls this the "externalist" account of meaning.
- ^ Searle 1980, p. 7
- ^ Hauser 2006, p. 11, Cole 2004, p. 19. This argument is supported by Daniel Dennett and others.
- ^ Searle distinguishes between "intrinsic" intentionality and "derived" intentionality. "Intrinsic" intentionality is the kind that involves "conscious understanding" like you would have in a human mind. Daniel Dennett doesn't agree that there is a distinction. Cole (2004, p. 19) writes "derived intentionality is all there is, according to Dennett."
- ^ Cole 2004, p. 18 (where he calls this the "internalist" approach to meaning.) Proponents of this position include Roger Schank, Doug Lenat, Marvin Minsky and (with reservations) Daniel Dennett, who writes "The fact is that any program [that passed a Turing test] would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge." (Dennett 1997, p. 438)
- ^ Dreyfus 1979 . See "the epistemological assumption".
- ^ Searle 1984. He also writes "Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them" Searle 1989, p. 45 quoted in Cole 2004, p. 16.
- ^ Searle 1980, pp. 7–8, Cole 2004, p. 12-13, Hauser 2006, pp. 3–4, Churchland & Churchland 1990. Cole (2004, p. 12) ascribes this position to Paul Churchland, Patricia Churchland and Ray Kurzweil.
- ^ Searle 1980, p. 13
- ^ Hauser 2006, p. 8
- ^ Cole 2004, p. 4, Hauser 2006, p. 11. Early versions of this argument were put forward in 1974 by Lawrence Davis and in 1978 by Ned Block. Block's version used walkie-talkies and was called the "Chinese Gym". Churchland & Churchland (1990) described this scenario as well.
- ^ Russell & Norvig, pp. 956–8, Cole 2004, p. 20, Moravec 1988, p. ?, Kurzweil 2005, p. 262, Crevier 1993, pp. 271 and 279. An early version of this argument was put forward by Clark Glymour in the mid-70s and was touched on by Zenon Pylyshyn in 1980. Moravec (1988) presented a vivid version of it, and it is now associated with Ray Kurzweil's version of transhumanism.
- ^ Searle predicts that, while going through the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' ... [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same." Searle 1992 quoted in Russell & Norvig 2003, p. 957.
- ^ Searle 1980, pp. 8–9, Hauser ,
- ^ Cole (2004, pp. 12 & 17) ascribes this position to Andy Clark and Ray Kurzweil. Hauser (2006, p. 7) associates this position with Paul and Patricia Churchland.
- ^ This is how Cole (2004, p. 6) characterizes some of these arguments.
- ^ Searle (1980, p. 7) writes that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation." Harnad (2001, p. 14) makes the same point, writing: "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the 'right' kind of implementation, whereas Searle's is the 'wrong' kind."
- ^ Note that Searle-in-the-room is a Turing complete machine
- ^ Searle 1980, p. 8
- ^ Searle 1980, p. 3
- ^ Since a production system is Turing complete.
- ^ Quoted in Cole 2004, p. 13.
- ^ Dennett 1991, pp. 437 & 440
- ^ Dennett 1991, p. 438
- ^ Cole 2004, p. 14-15, Crevier 1993, pp. 269–270, Pinker, pp. 95 . Cole (2004, p. 14) ascribes this "speed" position to Daniel Dennett, Tim Maudlin, David Chalmers, Steven Pinker, Paul Churchland, Patricia Churchland and others. Dennett (1991, p. 438) points out the complexity of world knowledge.
- ^ Crevier 1993, p. 269
- ^ Churchland & Churchland 1990, Cole 2004, p. 12, Crevier 1993, p. 270, Fearn 2007, pp. 45–46, Pinker 1997, p. 94
- ^ Harnad 2001, p. 7 and Tim Maudlin (Cole 2004, p. 14) both criticize these replies, which are versions of strong emergentism (what Daniel Dennett derides as "Woo woo West Coast emergence" (Crevier 1993, p. 275)). Harnad ascribes this view to Churchland and Patricia Churchland. Kurzweil (2005) also makes this kind of argument.
- ^ Searle 1980, Cole 2004, p. 13, Hauser 2006, p. 4-5. Turing (1950) makes this reply to what he calls "The Argument from Consciousness." Cole (2004, p. 12-13) ascribes this position to Daniel Dennett, Ray Kurzweil and Hans Moravec.
- ^ Russell & Norvig, p. 957
- ^ Cole 2004, p. 22, Crevier 1993, p. 271, Harnad 2004, p. 4
References
- Churchland, Paul; Churchland, Patricia (January 1990), "Could a machine think?", Scientific American, vol. 262, pp. 32–39
- Cole, David (Fall 2004), "The Chinese Room Argument", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy.
- Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3.
- Dennett, Daniel (1991), Consciousness Explained, The Penguin Press, ISBN 0-7139-9037-6.
- Fearn, Nicholas (2007), The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers, New York: Grove Press
- Harnad, Stevan (2001), "What's Wrong and Right About Searle's Chinese Room Argument", in Bishop, M.; Preston, J. (eds.), Essays on Searle's Chinese Room Argument, Oxford University Press.
- Harnad, Stevan (2005), "Searle's Chinese Room Argument", Encyclopedia of Philosophy, Macmillan.
- Hauser, Larry (1997), "Searle's Chinese Box: Debunking the Chinese Room Argument", Minds and Machines, 7: 199–226.
- Hauser, Larry (2006), "Searle's Chinese Room", Internet Encyclopedia of Philosophy.
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press
- Moravec, Hans (1988), Mind Children, Harvard University Press
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
- Pinker, Steven (1997), How the Mind Works, New York, NY: W. W. Norton & Company, Inc., ISBN 0-393-31848-6
- Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences, 3 (3): 417–457. See also Searle's original draft.
- Searle, John (1983), "Can Computers Think?", in Chalmers, David (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford, pp. 669–675, ISBN 0-19-514581-X
- Searle, John (1984), Minds, Brains and Science: The 1984 Reith Lectures, Harvard University Press, ISBN 0-67457631-4; paperback: ISBN 0-67457633-0.
- Searle, John (January 1990), "Is the Brain's Mind a Computer Program?", Scientific American, vol. 262, pp. 26–31
- Searle, John (1992), The Rediscovery of the Mind, Cambridge, Massachusetts: MIT Press.
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423.
Further reading
- Wikibooks: Consciousness Studies
- Dissertation by Larry Stephen Hauser
- Larry Hauser, "Searle's Chinese Box: Debunking the Chinese Room Argument", available at http://members.aol.com/lshauser2/chinabox.html
- Philosophical and analytic considerations in the Chinese Room thought experiment
- Interview in which Searle discusses the Chinese Room
- Understanding the Chinese Room (critical) from Zompist.com
- A Refutation of John Searle's "Chinese Room Argument", by Bob Murphy
- Nils Nilsson, "A Short Rebuttal to Searle", Nov 1984
- Peter Kugel, "The Chinese Room Is A Trick". Critical paper based on the mistaken assumption that the CR has no "memory"
- Wolfram Schmied "Demolishing Searle's Chinese Room", arXiv:cs.AI/0403009
- Margaret Boden, "Escaping from the Chinese Room" (1988), in Heil, pp. 253–266