I would say the Chinese Room is simply a subclass of recursive nonsense that relies solely on ignorance of the emergent properties of matter.
By the same logic, a Chinese Roomer should conclude that a human understands nothing, since a human's composite parts, i.e. atoms, do not think, talk, or count as 'conscious'.
But they can't accept that conclusion, because they must also hold that humans are conscious. Holding both positions at once is a contradiction.
It is mind-boggling that folks think it is a valid argument.
The Chinese Room argument is the sloppiest argument ever made in that area. The real answer, of course, is that we don't know.
Just as the individual neurons in your brain do not understand Chinese, the person in the room doesn't understand it either. The entire house, though, does.
The Chinese Room argument is one of the least convincing arguments against AI. Of course the man in the room isn't conscious; neither are the individual neurons in your brain. It's the whole house that becomes conscious.
It correctly points out that the person in the room doesn't know Chinese, but it misses the bigger point.
Of course the person in the room doesn't know Chinese, just as the individual neurons in the brain don't know Chinese.
The room or the house (i.e. the feedback loop) is the conscious part.
The correct answer is "we don't know". But we do know that we ourselves are the result of complexity emerging from much simpler structures, which are in turn built on even simpler structures, all the way back to some primordial soup.
The mistake is isolating any individual element in the feedback loop and claiming that, because it lacks some "magic" property, the system as a whole must lack it too.
We agree. The Chinese Room argument offers little insight into consciousness. All it really does is take someone with confused thoughts on how 'understanding' works (specifically, the way intellectual competence interacts with consciousness) and tie them in a knot.
It fails to demonstrate anything interesting about consciousness. It certainly doesn't demonstrate that computer systems can never be conscious in the way we can. Nothing in the argument applies any more or less to neurons than to transistors.
I'm baffled... you give a capsule version of the argument that many, many philosophers have given, minus all the details necessary to have a theory, and then conclude that this is the problem with analytic philosophy?
The vast majority of analytic philosophers think the Chinese Room argument is unsound. Some think it is obviously and utterly unsound. Others think it is subtly unsound.
The Chinese Room thought experiment is based on logically unsound reasoning, and I don't understand why it has taken hold as widely as it has. The claim is that a composite object of a book, a room, and a person following directions cannot understand Chinese, but there is no justification for this claim; it is, as far as I can tell, a complete non sequitur. Perhaps I just misunderstand it?
Because the logic underlying the Chinese Room argument is unsound, it naturally leads to some bizarre conclusions. I think it's a bit of a tragedy that it's a staple of philosophy 101 courses, unless it's used as an example for teaching logic by dissecting it and showing where it fails.
Personally, my view is that as much as we need our scientists and ML researchers to study epistemology, we need our epistemologists to study science and mathematics. Any coherent and comprehensive epistemological system in the future will have to somehow synthesize things like substance monism, post-Fisherian / post-Bayesian statistics, post-Popper science, and computational complexity theory. Just as too many ML researchers are reluctant to dip into philosophy and end up reinventing poorer versions of old philosophical arguments, there are too many philosophers who would rather, e.g., rehash some ideas of Hegel than step outside their own field long enough to synthesize new ideas.
Searle's Chinese Room argument is very typical of the circular reasoning that many analytical philosophers, unfortunately, run into.
Of course the person in the room doesn't understand Chinese, just as the individual neurons in my brain don't either.
It's the entire house that's conscious.
One reason the mind is so hard to grasp, and why people like Searle end up with something like the Chinese Room argument (which is, to be honest, a very sloppy argument), is our obsession with turning it into a thing that can be located.
A much more fruitful way to think about the mind is as a pattern recognizing feedback loop and then reason from there.
That also gives you a much better way to think about evolution and how not just the conscious but the self-aware conscious mind came to be.
Gregory Bateson has some really interesting thoughts on that IMO.
He doesn't have an argument, in my opinion. There is nothing to refute. It is as if somebody was saying "look at this green spot, it is obviously red".
I guess the only way to "refute" it would be to create a complete theory of human language and use it to prove that he isn't actually saying anything. But that doesn't seem like a very effective or promising undertaking to me.
Rephrased, maybe the problem with the Chinese Room is that it uses the fuzzy notion of consciousness, appealing to the vague emotions of its audience rather than to logic. Insofar as philosophy aims to clarify language, perhaps the Chinese Room could be useful as a bad example.
The Chinese Room is an argument for solipsism disguised as a criticism of AGI.
It applies with equal force to apparent natural intelligences outside of the direct perceiver, and amounts to “consciousness is an internal subjective state, so we thus cannot conclude it exists based on externally-observed objective behavior”.
The Chinese Room argument has always seemed to me a powerful illustration that "consciousness" is so poorly defined as to be unfit for meaningful discussion, dressed up as a meaningful argument against AI consciousness.
It's always distressed me that some people take it seriously as an argument against AI consciousness; it does a better job of illustrating the incoherence of the fuzzy concept of consciousness on which the argument is based.
The Chinese Room argument itself isn't very compelling. Surely the constituent parts of the brain are fundamentally governed solely by physics, surely thought arises solely from the physical brain, and surely the constituent parts (and thus thought) could be described by a sufficiently complex discrete computation.
It seems to me that the argument is kind of silly. The "Chinese Room" fails to account for the actual processes of the system. The 'spoken response' is only a small part of the 'program' of a mind; the rest is the internal processing that constitutes consciousness, which can infer, compare, reinforce, and associate. Consciousness is the whole of the system, not a simple input-output function.
Maybe there's a small human inside the human's brain, not understanding the rules the human is following, and it's only the innermost human that is really conscious?
I think Searle's Chinese room argument is absolutely nonsensical, a bit like arguing about philosophical zombies. It's an argument that only makes any sense if you already are committed to mysticism or dualism.
I think I've never completely understood the Chinese Room argument; to me, the whole chain of examples and counterexamples seems like the product of people playing word games and arguing over definitions.
If your 'room' 'understands', but no part of the 'room' can be attributed with having 'intelligence', doesn't it stand to reason that the _room_ itself is the intelligent actor? I really don't see the problem with the fact that the human 'cpu' doesn't speak Chinese, the room itself 'understands' Chinese just fine.
> This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"
The answer to that question is “consciousness is a property of the interaction between the parts, not of the individual parts.” Or, alternatively, “consciousness is not a well-defined objective property, just a vague incoherent concept that has lots of emotional attachment, but which you can't analytically say is or is not present in any entity or aggregate.”
The Chinese Room is useless as anything other than an overly elaborate illustration that there isn't a useful, clear understanding of what "consciousness" means.
Of course the man in the room doesn't know Chinese; neither do your individual neurons. It's the entire house that's the conscious part, if anything.
The real answer of course is we don't know.