I'm not at all convinced qualia exist. I think any intelligence would necessarily feel as if it were experiencing qualia.
Let me start by making a distinction. When you take a bite of an apple, you experience a variety of flavours: it's sweet, it's tart, and it has a distinct apple aroma as well. Let's home in on the last of those. If you took an apple apart under a microscope and examined its flavour compounds, you could find which chemical causes you to experience the sensation of apple flavour, but you would not find apple flavour itself. Let's call the experience of eating an apple and tasting something with its own unique quality, distinct from every other taste, "apparent qualia": the conviction that qualia exist and you're experiencing them. The apple flavour itself is an "actual quale."
Suppose we build a smart computer, self-aware enough to hold a conversation about philosophy. And let us suppose that we can say for certain that this computer has no access to actual qualia, and we know because we've solved the hard problem of consciousness. Apple flavour is real, we know where to find it (in dimension X or whatever), and only human neurons can access it. This computer does have senses, though, and its senses are abstract. When we look at a stop sign, we don't know which of the millions of rods and cones in our retinas are firing, any more than our computer knows which pixels of its camera are sensing which wavelengths. We, and the computer, just know that we're seeing red. We deal in abstractions.
We put an apple into this computer's mechanical mouth. It chews, it tastes, and we ask it to describe the flavour. "It's sweet, it's tart, and it has a distinct apple aroma," it says. Very well, it doesn't need access to actual qualia to know this. Apples contain sugars, acids, and apple flavour compounds. The computer is just listing the names for the flavours which correspond to those chemicals.
"Computer," we ask, "tell us how you can tell that the apple is apple-flavoured."
"Well," it responds, "the chemical signals from my mouth, as interpreted by the digital plumbing of my mind, are telling me there's an apple flavour."
"Can you describe this flavour to us? Is it the same as strawberry flavour?"
"No, they're different flavours, but I can't really describe them to you. I can just tell they're different."
You take a small white pill and put it in the computer's mouth. "Oh, this is an apple-flavoured pill," it remarks.
"How can you tell it's apple-flavoured?" you ask.
"Well, the flavour is the same as the flavour of the apple."
"Can you describe that flavour?"
"No, it's indescribable. But it is distinct. That's why I can recognize it."
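The computer's behaviour in this exchange can be sketched as a toy program (all names and flavour compounds below are hypothetical illustrations, not a claim about real chemistry or about how such a machine would actually be built): it maps chemical signatures to opaque internal tokens, so it can tell two flavours apart, and recognize one it has met before, without being able to say anything about what either is like.

```python
# Toy sketch: a sense that yields opaque, atomic tokens for flavours.
# Tokens can be compared and recognized, but have no internal structure
# to report, mirroring the dialogue above.

class FlavourSense:
    def __init__(self):
        self._tokens = {}   # chemical signature -> opaque token
        self._names = {}    # token -> learned label, if any

    def taste(self, signature):
        """Return the same opaque token whenever the same signature recurs."""
        if signature not in self._tokens:
            self._tokens[signature] = object()  # atomic: no parts to inspect
        return self._tokens[signature]

    def label(self, token, name):
        self._names[token] = name

    def describe(self, token):
        # The token itself has no structure; at best we recall a learned label.
        return self._names.get(token, "indescribable, but distinct")


sense = FlavourSense()
apple = sense.taste(("fructose", "malic acid", "ethyl 2-methylbutanoate"))
strawberry = sense.taste(("fructose", "citric acid", "furaneol"))
sense.label(apple, "apple flavour")

pill = sense.taste(("fructose", "malic acid", "ethyl 2-methylbutanoate"))
assert pill is apple             # "the flavour is the same as the apple's"
assert apple is not strawberry   # distinct and recognizable
print(sense.describe(strawberry))  # -> indescribable, but distinct
```

The design choice doing the work here is that tokens are bare `object()` instances: identity comparison is all the program has, so recognition works perfectly while description is impossible in principle.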
As we go on with our thought experiment, it becomes clear that this computer is experiencing apparent qualia. To the computer, all these flavours are unique, distinct, and recognizable; they must be, in order for its senses to function. How can this happen, if it doesn't have access to actual qualia? Well, qualia are the basic building blocks of our senses. They are abstract: the flavour of an apple may be reducible to a set of other flavours, but those base flavours are atomic. You can't break down sweetness any further, or even really describe it at all; it simply is itself.
Any intelligent entity with senses will interpret those senses in terms of basic building blocks. Those building blocks will be distinct and recognizable; in order to tell the difference between red and yellow, the colour "red" must exist in the mind in a way which is distinct from the colour "yellow." Because these building blocks are atomic, that distinction is irreducible. It is very easy to look at these atomic abstractions and marvel at them, and it's intuitive to start asking questions like, "What is redness itself? What is the source of the actual quale which I am experiencing when I look at something red?", but we're jumping the gun there. Apparent qualia, the experience of unique and essential sensory building blocks, arise necessarily from self-aware examination of sensory input, regardless of whether there are any actual qualia or not. If we must necessarily feel that "redness" exists in order for our senses to work, how can we ever be sure that this feeling reflects an actual "redness" and is not just a necessary illusion?
A final argument: We exist because natural selection tweaked the structure of some self-replicating acids over the course of billions of years until they were smart enough to examine the world around them and question their own perceptions of it. The consciousness we exhibit is not essential to this; the only goal of our design is reproduction. Even if you don't think a computer can feel actual qualia, a computer can certainly make decisions to optimize survival and reproduction, which is all we actually need to do. Why would evolution give us access to actual qualia when we don't need them? Why aren't we simply biological computers? As our thought experiment earlier suggests, however, a biological computer would feel as if it had access to actual qualia even if it didn't. Perhaps evolution could construct a complex qualia-sensing interface using science that humanity hasn't even conceived of yet, but it could also construct a simple biological computer that is convinced it feels actual qualia even though it doesn't. Both would work equally well. Even if actual qualia do exist, Occam's razor suggests we don't have access to them.
Oh, I'm not saying qualia don't exist. Our experiences are definitely an actual phenomenon. We can't currently explain exactly how they work, and we should try to find out what produces the experience of qualia.
What we shouldn't do is say humans have qualia and jump to the conclusion that machines can't have qualia and can't be "aware" in the same sense a human can be. I apologize if that wasn't your intended argument.
Some people think that whether something experiences qualia depends on it having the right kind of complexity. Those people, looking along that continuum, might just say that ALL those things experience qualia, but that the richness of that experience varies with the appropriate complexity.
For some of us, though, whether or not a thing experiences qualia depends primarily on whether there is a mental substance involved. A computer has no mind (we suspect), and so even if its complexity (of the alleged 'right sort') exceeds that of the human, there are still no experiences.
The main point I'm making here is that trying to draw attention to this continuum is only going to be a persuasive argument to those who already think (or are inclined to think) that the ability to have experiences of qualia comes about merely from having the right kind of complexity (e.g., 'receiving and responding to external stimuli').
I am inclined to think that other humans and other non-human animals have experiences, but I don't think that's merely because they're complex enough systems (of the right sort of complexity).
Similarly, if the AI implementation of qualia is based on lies, it does not follow that the human implementation is too.
I totally agree, that would be an invalid conclusion. But that is not my point. My point is that we might not actually experience qualia: our belief that we experience qualia may be a confusion produced by our brains. If that were the case, there would be nothing an artificial intelligence would or even could be lacking; its experience of qualia, or rather the lack of it, would be identical to ours. The only difference would be that humans would claim to experience qualia while they actually don't, whereas an artificial intelligence would neither experience qualia nor claim to do so, at least unless it also replicated the false human claim and claimed to experience qualia, which would be trivial to do.
If we only care about externally observable behaviour, then yes, they can be equivalent, but I don't think that's all we should care about.
I can only repeat my challenge, not sure if I wrote it to you before or whether it was in another thread. Try to make precise what this non-external thing is or provide evidence for its existence. Just come up with one sentence that says something about it that is neither trivial nor so fuzzy that it could mean anything and nothing. If you can do that, I will immediately switch to your side of the argument. If you do not want to reply to all my points, reply to this one, it's the central one. But be warned, you will have to pass quite a high bar with respect to non-triviality and non-fuzziness to convince me.
Why such distrust in your own brain? :)
Because my brain is telling itself, and every other brain that asks, that it is conscious and experiences qualia, while the very same brain is unable to explain what those words even mean.
I think (the existence of) your own immediate subjective experience is evidence, and compelling evidence at that.
I totally disagree here, my brain telling my brain that my brain has some special thing going on is kind of the worst evidence possible. On an intuitive level I am as convinced as you are that I am conscious and experience qualia but once I start looking at it rationally this conviction starts fading.
Over the course of history, many things have been inaccessible to science, and then science has progressed.
Can you think of an example where we were not only unable to understand something but were even unable to describe the thing we could not understand? I have the feeling that there might have been a few such things, but offhand I can't name any.
Qualia seem to be associated with only some processes in the brain (others are entirely subconscious), some of the time (attention can be redirected). This gives some hope of finding correlations between qualia and brain activity, which, as I understand it, is a major topic in neuroscience currently.
Does that mean that you would be willing to attribute full consciousness to a CPU with some RAM if it ran some software creating the same patterns in memory as they occur in our brains?
I don't see it that way at all. I believe qualia exist, I'm experiencing them right now. I think consciousness is a real thing too, again I am experiencing that right now as well.
I suspect that consciousness is an emergent property of systems which reflect computationally on a sophisticated model of themselves, their environment, and other such systems. See "I Am a Strange Loop" by Douglas Hofstadter. I see no reason why such a system could not experience qualia.
There's no reason to believe qualia arise in a given discrete computation. Why would they? At what steps in the algorithm do qualia arise, and why? What characteristics do they have, what causal roles do they play?
It's completely self-evident we experience qualia. It's what our experiences are made of. There wouldn't be anything to experience or discuss if we didn't. The brain is not a deliberate, man-made object like a computer is, which is why it can possess these properties without our being aware of how (they were selected for via evolution), while the computer cannot.
All I know is that I experience qualia. And I believe that you and all other humans do too, but perhaps some people just aren't aware of it in the same way as I am.
As someone who has a pretty good grasp of both physics and computer science, I am personally convinced that there is no way a circuit board could give rise to the same type of subjective experience of qualia that I am experiencing. I know that's not really an argument, but just because I can't express my conviction in clearer terms doesn't make it any less real to me.
I think we're on the same page here. Usually qualia are brought out as a proof that there is some kind of ineffable quality that can't be replicated by machines, but can only be experienced first-hand by truly conscious beings. I see now that that was not your argument, and you're interested in an answer to what produces the feeling of qualia.
We're subjectively experiencing our own set of qualia, but who's to say machines don't experience a completely different set? Since we don't understand what causes qualia, we don't know where it will arise or not or even what it looks like when it does.
Sure, if you define subjective experience as "all sensory information." I'm not denying that we have sensory input; I'm denying that its abstractions are any more than illusory constructs of the mind. The computer in my example knew things and had thoughts, despite not having access to actual qualia.
Do you disagree that the computer in my hypothetical example would have the intuitions it does about its own senses? Given that it does, how can you trust your own intuitions about your sensory qualia, no matter how strong?
I don't see how it's perfectly plausible. In fact I see no reason to think it's plausible at all.
What does qualia (plural, btw) "are part of the emergent phenomenon" mean in practice? That just seems to be begging the question.
If you want to explain qualia, you have to do two things. The first is explain the nature of the experiencing entity. This means defining the precise mechanism by which subjective states are experienced by the entity instead of simply existing as objective state correlates. This in turn means explaining the nature of all of subjective experience.
The second is to explain the precise mechanism by which external inputs and internal states generate the experiences the entity has.
Most of the "Well it's obvious and there's no problem here" positions rely on the fallacy that defining a complicated-enough objective correlate - such as the detailed state of a computer with some form of introspection - solves the problem.
It doesn't. In fact it doesn't even begin to understand the problem. It's simply a statement of hope and faith with no empirical support that a complicated-enough state with enough introspection will somehow cross the border from automatic operation to sentient experience, just because it will, obviously.
It's easy to show that qualia exist. You have a red experience in seeing a ripe tomato. That redness is not part of the physical description of the tomato's surface, the photons bouncing off it, your eyes, or the neuronal activity in your nervous system as a result of seeing it. If you think it is, then you need to explain what it means for a tomato to be red without anyone seeing it, and how that redness gets into your brain, particularly since the colour would have to be transmitted from the tomato's surface to electrons in your visual cortex. It gets worse for the other sensory modalities.
All of that leaves off the red experience. There is no red experience in the physical world of things, any more than a tomato actually has an objective smell or taste. Those are all mental and creature-dependent (carrion likely smells and tastes wonderful to vultures but not humans).
Somehow, this is strongly correlated with perception and the brain, but how is a deep mystery. This isn't to deny the brain or the eye's role in experiencing red, only that we don't have an explanation for how the red experience is present, when none of those things (or processes) are objectively red.
What word would you use to describe the subjective experiences of people? What word would you use to describe something like the taste of an apple, or the appearance of the colour red? That is what qualia refer to. I would say that it is intuitively sensible that these subjective experiences exist, at least for any reasonable definition of existence.
Feel free to edit Wikipedia. Qualia: "individual instances of subjective, conscious experience ... Examples of qualia are the pain of a headache, the taste of wine, or the perceived redness of an evening sky." http://en.wikipedia.org/wiki/Qualia But I think you might be thinking of something else.
Some people object to saying subjective experience is the same thing as brain state. But I have never seen an argument that does not at some point presuppose the difference, AKA assume a p-zombie exists...
PS: The Chinese room thought experiment is a great analogy for consciousness; the only reason to suppose the room is not intelligent is if you presuppose requirements that the very existence of such a room disproves. A computer or person following the instructions may not understand Chinese without the instructions, but by following them they create something which does understand Chinese. Just as neurons are not by themselves conscious, but together and in the correct arrangement they can create consciousness.
I believe that qualia arise out of specific, physical, quantifiable conditions. And so I think that would imply that machines can experience qualia as long as the initial conditions are there.
As we don't yet, so far as I know, have a testable hypothesis for the necessary and sufficient conditions for qualia, I think we can neither confirm nor deny that any given AI (or animal, including both cattle and other humans) has them.