Why keep bringing up the Chinese Room? The guy is just a substrate; it's the necessarily intelligent set of rules he's performing (for an astronomically long time) that is conscious.
The Chinese Room argument is one of the least convincing arguments against AI. Of course the man in the room isn't conscious, and neither are the individual neurons in your brain. It's the whole house that becomes conscious.
I'm sure most people would agree no conscious experience of understanding would emerge from the combination of man + rule set.
There's a subtle dishonesty in the Chinese room experiment, because it's asking you to imagine a system which is many orders of magnitude too small to have an intelligent conversation in Chinese (or in any other language).
Using some back-of-the-envelope numbers from Moravec, based on his research into visual algorithms that duplicate the human retina, the absolute minimum rate of computation needed to do what the brain does appears to be around 10^15 operations per second (and possibly much, much higher).
If we assume that the man in the room can perform one operation per second, and that we need to produce at least 10 seconds of normal conversation, then the man in the room will need to work non-stop for roughly 317 million years. (If you said, "No, more like 30 billion," I'd say, "Sure, that's totally plausible." Or even 30 trillion.)
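For what it's worth, here's that arithmetic spelled out; a minimal sketch, where the 10^15 figure and the one-operation-per-second pace are the assumptions above, not measured values:

    # Back-of-the-envelope timescale for the hand-computed Chinese room.
    OPS_PER_SECOND = 10**15      # Moravec-style brain-equivalent rate (assumed)
    CONVERSATION_SECONDS = 10    # ten seconds of normal conversation
    MAN_OPS_PER_SECOND = 1       # the man executes one operation per second

    total_ops = OPS_PER_SECOND * CONVERSATION_SECONDS        # 10^16 operations
    years = total_ops / MAN_OPS_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"{years:.3g} years")  # ~3.17e+08, i.e. roughly 317 million years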
Now, if we assume that the man spends, say, 5 million years working down one cortical column and up another, modelling complex patterns of visual recognition, language analysis, vocabulary selection, and so on, then my intuition no longer tells me whether or not there's an immense, glacial consciousness playing out inside that room. While we wait for an answer, new phyla will evolve, entire species will arise and go extinct, and so on. If we've even slightly underestimated the computation required, the sun will turn into a red giant and wipe out life on the planet before the Chinese room responds, "Hey, how are you doing?"
There's a second possibility here: Maybe the Chinese room isn't a very slow computer. Maybe it's a giant lookup table, containing every possible Chinese conversation. I'm pretty sure a lookup table isn't conscious. But if we use a lookup table, then it's going to need to be unimaginably larger than our entire universe.
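To put a hedged number on "unimaginably larger": assume a vocabulary of around 10^4 Chinese characters and cap conversations at 100 characters (both numbers are mine, purely for illustration):

    # Rough size of a lookup table of every possible conversation.
    VOCAB = 10**4                # assumed count of distinct Chinese characters
    MAX_LENGTH = 100             # assumed cap on conversation length
    ATOMS_IN_UNIVERSE = 10**80   # common order-of-magnitude estimate

    entries = VOCAB ** MAX_LENGTH            # 10^400 possible conversations
    print(entries > ATOMS_IN_UNIVERSE ** 4)  # True: 10^400 dwarfs (10^80)^4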
If Searle wants to appeal to intuition, he needs to make his appeals realistic. He can't sweep 300+ million years under the rug, for example, or hide a lookup table that makes the entire universe look like a speck of dust in comparison.
The Chinese room argument is a parlour trick that uses scale as a distraction. It posits a person in a room manipulating symbols to produce intelligent-seeming outputs. It says: see, it's absurd to think a person in a room with a stack of symbols could emulate intelligence.
But suppose the room contains many billions of people, is the size of a planet, holds racks of many trillions of symbols, and spends millions or billions of years producing an output. That's more like the scale of a sophisticated computer system, or a brain.
Does that sound much like a man in a room with some symbols? No. Does it sound like it could do complex calculations and produce sophisticated, perhaps even intelligent, outputs? Well, given enough time and scale, yes, why not?
The Chinese room is pure misdirection and it amazes me anyone falls for it. There’s really no actual argument there.
I don't understand the premise of the Chinese Room argument. The human in the room is a red herring - he's acting as a computer executing a provided program. A computer can be made from integrated circuits, discrete transistors, vacuum tubes, mechanical components, or even a man in a room following simple instructions and writing down the results.
The man is not the source of the intelligence, the program is (or rather, the execution of the program in an emergent manner - obviously, if the program does not get executed, it cannot "understand" Chinese). Remove the program and the man is unable to process the Chinese input, just like a modern CPU if you remove a program from memory.
The choice of conducting the experiment in Chinese is just another misdirection. The only thing that the experiment is truly positing is, assume a human is executing a program that passes the Turing test rather than a traditional silicon computer.
In the original Chinese Room thought experiment, it's a human inside performing all the calculations by hand. To be consistent, you would also have to believe that this room with a human in it has its own consciousness as a whole, correct? That just seems too weird to me.
The Chinese Room is useful as an idea or model, even if you don't agree with the original interpretation. I am fond of the view that the room is potentially conscious, with either a human or a machine worker. The worker should no more expect to understand the conversation they are mechanically carrying out than your individual brain cells ought to expect to understand your thoughts.
The argument is that anything resembling a Chinese room is such an advanced and complex system that it becomes indistinguishable from an actual conscious being.
The Chinese room is a silly example, in which a highly simplified physical system that doesn't appear to have conscious thought is extended to the idea that no merely physical system has conscious thought. If the structure of the Chinese room were not a simple catalog of questions and answers, but hundreds of billions of complicated cardboard-and-marble apparatuses, each processing some amount of data in a way that individually seemed meaningless but that together produced cogent speech... would you still be so sure it wasn't conscious?
The Chinese room thought experiment is simply annoying because it doesn't even get the questions right. It basically asks whether the hardware becomes conscious/understanding when running software, instead of asking the real question: whether the software can gain understanding/consciousness. The man in the room is just a hardware component replacing a computer processor. But the only place to look for intelligence in this experiment is the rule system, not the processor executing those rules, since it's the rule system that determines whether the communication makes sense. To hold a real conversation, the rule system needs the flexibility to handle dynamic input, creating a dynamic flow between the inside and the outside of the room; but that flow is about the symbol manipulation, not the symbol manipulator. I don't think anyone in AI has ever argued that the hardware part of a computer would gain understanding.
The only good part of this thought experiment is that it makes the hard question of consciousness - how it can arise from physical phenomena - somewhat more obvious. But that's the same question we face for ordinary human minds, and it is neither an argument for nor against software systems becoming conscious.
The "Chinese Room" argument isn't an argument against consciousness being computation.
Just as the man in the room doesn't need to understand Chinese to produce Chinese, the mind in your brain does not need to understand consciousness to be conscious.
The man in the room can be said to pass the Turing test, and so can you.
The Chinese Room thought experiment can also be used as an argument against you being conscious. To me, this makes it obvious that the reasoning of the thought experiment is incorrect:
Your brain runs on the laws of physics, and the laws of physics are just mechanically applying local rules without understanding anything.
So the laws of physics are just like the person at the center of the Chinese Room, following instructions without understanding.
No, his position, as I understand it, is that it cannot be conscious. It certainly can be intelligent.
Searle does try to explain why there's a difference. Although the person in the Chinese Room might be conscious of other things, he has no consciousness of understanding the Chinese text he's manipulating, and will readily verify this, and nothing else in the room is conscious. Chinese speakers are conscious of understanding Chinese.
We might be on opposite sides of the Chalmers/Dennett divide, but we don't know how consciousness (as opposed to human-level intelligence) arises. Here's my reasoning for why the Chinese Room isn't (phenomenally) conscious: http://fmjlang.co.uk/blog/ChineseRoom.html
The standard argument against the Chinese room is that if the 'symbols' are at a lower level (audio and video stimuli, say) and you have enough scale, then you can imagine intelligence emerging somehow.
Qualian chauvinism. The ‘observer’ field doesn’t have to be locally or real-ly connected. E.g. a current is real, but it doesn’t flow through a real-connected surface.
The Chinese room is conscious and intelligent by definition if the above is true, just not conscious and intelligent in a human-like way.
The juice of this question is that we may want to determine it in a practical binary sense, when there may be a spectrum or an entire space of it.
Sadly it can all boil down to a boring “shut up and interact” principle again.
It correctly points out that the person in the room doesn't know Chinese, but it misses the bigger point.
Of course the person in the room doesn't know Chinese, just as an individual neuron in the brain doesn't know Chinese.
The room or the house (i.e. the feedback loop) is the conscious part.
The correct answer is "we don't know". But we do know that we ourselves are the result of emergent complexity built from much simpler structures, which are in turn built on even simpler structures, all the way down to some primordial soup.
The mistake is isolating any individual element in the feedback loop and claiming that because it lacks some "magic" property, the system as a whole can't have it either.