
The Chinese Room



Once upon a time, a man was locked in a room and made to do some extremely tedious work: running a computer program by hand. He was given the program's instructions written out in ordinary English, together with a lot of data written in Chinese, a language he did not understand. The program was designed to answer simple questions about a story included in the data. By following the English instructions, the man could reproduce (rather slowly) the behaviour of the computer, manipulating the data in the same way and generating the same outputs. But he could read neither the inputs nor the outputs he produced, both of which were in Chinese, so it is absurd to maintain that anything he did showed any understanding of the input, even though, from the point of view of Chinese speakers outside the room, he was giving appropriate answers. And since he was doing exactly what a computer does, it follows that a computer does not, by virtue of running any particular program, understand what the program is about, or its results, either. Doesn't it?
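To make the setup concrete, here is a minimal sketch, in Python, of the kind of purely formal rule-following the story describes. The rule table and the Chinese sentences are invented for illustration (this is not Searle's, or anyone's, actual program); the point is that the lookup matches character strings against character strings, and nothing in it represents what any of the symbols mean.

    # A purely illustrative sketch of the rule-following in the story:
    # Chinese symbols in, Chinese symbols out, by string matching alone.
    # The rule table and all the sentences are invented for this example.

    RULES = {
        "谁在厨房里？": "玛丽在厨房里。",  # "Who is in the kitchen?" -> "Mary is in the kitchen."
        "猫在哪里？": "猫在垫子上。",      # "Where is the cat?" -> "The cat is on the mat."
    }

    def chinese_room(question: str) -> str:
        # Match the incoming string against the table and hand back the
        # scripted reply. At no point is the meaning of either string
        # represented: the operator (or the interpreter) just shuffles symbols.
        return RULES.get(question, "我不明白。")  # stock reply: "I don't understand."

    if __name__ == "__main__":
        for q in RULES:
            print(q, "->", chinese_room(q))

From outside, the answers look appropriate; inside, there is only table lookup, which is exactly the intuition the story is built to pump.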

John Searle, who was of course the author of this tale, mentions a number of objections to his conclusions, none of which he finds convincing. He gives them names and attributes them to institutions, creating the rhetorically useful impression that whole schools of philosophers have laboured in vain for generations to come up with arguments against him (Berkeley seems to have done the most work).

 
1. (The Systems reply - Berkeley) The man and the instructions together form a system to which understanding can be attributed.
2. (The Robot reply - Yale) If the computation controlled a robot rather than merely answering questions, it would show real understanding.
3. (The Brain Simulator reply - Berkeley and MIT) What if the instructions simulate the working of a brain, synapse by synapse?
4. (The Combination reply - Berkeley and Stanford) Maybe the foregoing are not fully convincing alone, but if combined they make a strong case.
5. (The Other Minds reply - Yale) But look, how can you diagnose understanding other than by examining responses?
6. (The Many Mansions reply - Berkeley) Perhaps after all digital computers don't understand, but another kind of machine, as yet undiscovered, might.

Taking the objections in reverse order: Searle has no problem with the idea that some machine other than a digital computer might one day be conscious; he accepts that the brain is a machine, after all. The practicalities of diagnosing consciousness are not the issue; the point is what it is you are trying to diagnose. Naturally, Searle is not impressed by the mere combination of arguments he has rejected individually. Simulating a brain is no good, because a simulation of rain doesn't make you wet: you could simulate the synapses with a system of water pipes controlled by the man in the room, and just as obviously as in the original example, he still wouldn't understand the stories he is asked about. Using the outputs to control a robot rather than to answer questions makes no difference and adds no understanding. And it seems highly implausible to attribute understanding to an arbitrary 'system' made up of the conjunction of the man and some rules; if necessary, the man can memorise the rules, so that the whole 'system' is inside his memory, and he still doesn't understand the Chinese.

http://www.consciousentities.com/stories.htm#turing
