I think this is pretty sloppy, at least in this statement of it.
Whether one or more observers can establish with confidence whether a system is running a particular computation is not the same thing as whether the system is running that computation. The whole section about finding a mapping between physical states and 'consciousness.exe' is exactly equally as valid for 'javac' or any other program -- but we do not conclude from this that this means that actually no physical system can run javac.
The author effectively makes a bait-and-switch between consciousness and the ability of external observers to identify consciousness by searching for a mapping from physical states to computation. From this view, I think we end up exactly where we were after Descartes: we can only firmly establish our own consciousness, even if consciousness is firmly computational.
The flaw in the argument is that the author jumps from a conjecture that it may be possible to achieve consciousness via mechanical computation, to the assumption that all mechanical computation is consciousness. I don't think that necessarily follows at all.
"[T]he only reason we call a box with a CPU in it a “computer” is because we happen to have a simple mapping between the voltage levels across different parts of the CPU to a set of bits we have defined, and when these voltages interact they do so according to the rules of a set of logical operations which we have also defined. But there is no meaning to the physical system apart from what we, as external observers, have imposed on it."
This might be an argument against a computer program with no inputs or outputs being conscious. But it is not at all an argument against a robot with a computer brain, hooked up to inputs and outputs in a way somewhat similar to a human brain, being conscious. In fact, one way of stating the position of physicalists about consciousness is simply that we humans are such robots! We have brains that compute things (granted, our brains do it with analog neurons, chemicals, etc. instead of digital circuits, but that doesn't mean what our brains are doing is not computation), but the things our brains compute have semantics because our brains are hooked up to inputs and outputs.
In other words, the whole article is attacking a straw man. The actual substantive point of the article is not that consciousness is not computation, but that for computation to produce consciousness, it has to have semantics--it has to be hooked up to inputs and outputs in a non-trivial way. Which is certainly not easy, but that doesn't mean it's impossible.
This seems to be attempting to "prove" that if you regard consciousness as "containing the bits of a specific program", you could also find that program in random data, by interpreting the data through what is effectively a one-time pad (from which you can indeed produce any possible interpretation of the data) -- and it treats this as a proof by contradiction.
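The one-time pad point is easy to make concrete: for any fixed block of random data there always exists a pad that "decodes" it into any target you like, because the pad can simply be constructed as the XOR of the two. A minimal sketch (the function names are mine, for illustration, not from the article):

```python
import os

def pad_for(data: bytes, target: bytes) -> bytes:
    """Construct the one-time pad that maps `data` onto `target`."""
    return bytes(d ^ t for d, t in zip(data, target))

def apply_pad(data: bytes, pad: bytes) -> bytes:
    """Interpret `data` through the pad by XOR-ing byte by byte."""
    return bytes(d ^ p for d, p in zip(data, pad))

noise = os.urandom(16)          # arbitrary "random data"
target = b"consciousness.ex"    # any 16-byte interpretation we want to find
pad = pad_for(noise, target)
assert apply_pad(noise, pad) == target  # the noise "contains" the target
```

Which is exactly why this proves too much: the pad construction works for every possible target, so "a mapping exists" carries no information about the data itself.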
And leaving that aside, while the assertion is that consciousness is not "computation", the reasoning seems focused on the storage of bits rather than on the execution of an actual program defined by those bits that goes from one state to another in a meaningful fashion. Storing a program and running a program are two different things.
Ultimately, this article seems to have started out with an assertion to support, and then tried (unsuccessfully) to turn that assertion into something more than an assertion.
The author confuses the "computational theory of consciousness" with consciousness arising in computers. They are not the same. Consciousness not being computational does not mean that consciousness can't arise in computers.
> The argument seems to be that beyond a certain level of complexity you'll somehow automatically get consciousness.
I agree with much of what you say but not with this statement. That would be a silly argument since it's very easy to imagine algorithms of arbitrary complexity that are trivially not conscious. A better argument for computationalism states that (1) consciousness could be an emergent property of certain algorithms when they are running on a computational device, and (2) among all known explanations of consciousness, a computational theory seems to be the overall best theory, especially if compatibility with contemporary physics is a goal.
The article lays out one incorrect standard argument against (1) that basically just says that it's hard to imagine (1) and therefore (1) is not possible. The Chinese room and the Chinese brain arguments do the same, and they are equally flawed. Just because something is hard to imagine or comprehend doesn't imply that it isn't the case. In fact, if consciousness is an emergent property of certain algorithms when they run, then it is clear that their workings are hard to understand. That's reasonably clear because otherwise we would already have found them.
Regarding your worry that we might not be able to detect consciousness: I agree with that but there is, interestingly, a loophole. At least in theory it could be possible that if computationalism is true, then we can determine that an algorithm produces consciousness by mere analytic insight. Again, this is hard to imagine, but it is not impossible. It seems more likely that (2) is the only route to go, that for some reason we lack the capacity to determine consciousness reliably by mere analysis, but we don't know.
(2) is the most controversial in the philosophy of mind. On the one hand, it is clearly inference to the best explanation, and there are various methodological concerns with such arguments. One might claim that they have no justificatory value on their own. On the other hand, the alternatives to computationalism really are way more mystical. The brain could be a hypercomputer. But hypercomputers can also compute, so it's just an extension of computationalism, and it is not even fully clear yet whether, and which types of, hypercomputers are physically possible. Then there is Penrose's theory of quantum consciousness, which basically just attempts to explain one mysterious phenomenon by another mysterious phenomenon. At least it was designed as a falsifiable theory and therefore is scientific. Finally, we have all kinds of non-computationalism that are mystical, explain nothing, and lead to strange homunculus problems. The worst offender is classical dualism. Dualists reject physicalism and often incorrectly assume that computationalism presumes physicalism. Ironically, however, computationalism would also be the best theory of how the mind works if dualism were true. The dualist just adds to this various stipulations that are incompatible with contemporary physics.
> Because the experience is subjective, you can't just assume that behaviours that appear conscious are proof that consciousness exists.
That's only true from a very narrow scientific perspective. Psychology allows the use of introspective data, so from that perspective subjective reports about consciousness (or related feelings and states of mind) can be valid data. Using a reasonable definition based on introspection we can even determine different degrees of consciousness and study what's going on in the brain while they appear to be active. Typical examples: falling asleep, dreaming, sleep paralysis, research on anaesthetics and mind-altering drugs, various forms of physical brain defects, the study of coma patients, etc. In a nutshell, I don't really buy the "consciousness cannot be measured" argument. What is correct is that we cannot show conclusively that another person or machine is conscious, just like we cannot disprove solipsism. But this is best treated as an overly skeptical philosophical argument, and at best it would support the theory that consciousness is an illusion and we are nothing but unconscious robots. That theory is not very plausible either, so we should be ready to grant consciousness to others based on introspective data.
The author's "triviality argument" doesn't hold up:
> 1. To say that a physical system is a computer requires an external observer to map the physical states of that system onto the abstract states of a Turing machine.
> 2. Consciousness does not require an external observer to exist.
> 3. Therefore, consciousness cannot be reduced to computation.
A physical system may be a computer regardless of whether it is verified to be so by an external observer or not; it does not require an external observer at all.
The authors make the claim that "You are just an algorithm implemented on biological hardware."
This claim needs to be substantiated before anything that follows can be taken seriously.
Another underlying assumption needs to be proven: that our conscious experience is only due to computation and nothing else.
> it must be possible to feed in any program, after the mapping has been defined.
This is absolutely correct and gets to the heart of it. We define a mapping and we can imagine even setting some of the spins of the atoms in the bar to input a particular "program." Then we sit back and watch the spins randomly flip and see if they correspond to what a universal Turing machine would do.
The reason that a hot iron bar in practice is not a computer is that there is no way we can easily find the correct mapping before we observe the bar. The process of finding this mapping will take more work than the computation itself. (I think this is what you mean in saying "the computation is really happening in the mapping itself.") So for our purposes it's useless for performing any computations. Nevertheless, some mapping from the bar's states to a Turing machine executing the program we've given it exists.
This is why this is different for the case of consciousness. Because consciousness exists independent of whether or not we're aware of it, it doesn't matter whether or not we can find this mapping beforehand. It just matters that such a mapping exists.
It would be different if I made the claim that the iron bar is sorting a list. I might say, "there exists a mapping of the states of the iron bar to a Turing machine running quicksort. Therefore the iron bar is sorting a list." The appropriate response would be "So what? If I consider all random permutations of the list, obviously one of them will be sorted --- but how does that help me find it? It takes me the same amount of work to find this mapping as it does to sort the list."
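The "so what?" response can be made concrete: writing down the mapping that presents a list as sorted is itself a sort, so naming the mapping saves no work. A small sketch of that point (the function name is mine, for illustration):

```python
def sorting_permutation(xs):
    """Find the index mapping that 'interprets' xs as sorted.
    Computing this mapping is itself a sort -- no work is saved."""
    return sorted(range(len(xs)), key=lambda i: xs[i])

xs = [42, 7, 19, 3]
perm = sorting_permutation(xs)
# Reading xs "through" the mapping yields the sorted list...
assert [xs[i] for i in perm] == sorted(xs)
# ...but producing perm cost us a full sort in the first place.
```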
But if we are to say that consciousness is fundamentally a computational phenomenon, it doesn't matter if you find the mapping or not --- it exists independent of you.
"Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no"
That's not clear at all. Where is the author getting this conclusion from? Every single thing a computer does is the result of automation, or of randomness that's outside of anyone's control. Just because a specific output was produced by a Rube Goldberg machine, as opposed to a simple one, doesn't make the machine any more conscious.
I think it’s important to note that the article isn’t saying that if you build a brain-like thing, it can’t be conscious. It’s arguing that if you simulate a brain-like thing purely in software it can’t be conscious. I’m not saying one argument has more merit than the other (not that anyone is going to be able to prove anything is conscious either way).
The majority of the authors are from philosophy. I don't think this paper is valid at any point. Define consciousness in terms of a Turing machine, then we can talk.
This argument is a pretty egregious example of strawmanning. A sketch of the author's reasoning:
1. Consciousness is observer-independent, so if one person observes consciousness, then everyone must acknowledge it.
2. If there are infinite observers of a given computation, each with a distinct interpretation scheme (e.g. observer i flips every i-th bit), at least one will interpret that computation as conscious.
3. (1) and (2) imply all computations are conscious.
4. This is absurd.
5. Therefore, computation cannot equate to consciousness.
No one would claim that pi (the constant) is conscious because some sub-sequence will parse as Shakespeare, nor would they argue that an excerpt of Proust is conscious because it required intelligence to produce. Let's agree on some sensible preconditions for consciousness (entity recognition and some notion of memory, among others) before we start trying to argue by reductio.
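Premise (2)'s interpretation schemes are easy to sketch: under the "observer i flips every i-th bit" example in the sketch above, the same fixed physical bit pattern reads as a different string for each observer. A toy illustration (the names are mine, not the author's):

```python
def observer_view(bits, i):
    """Observer i's interpretation scheme: flip every i-th bit (1-indexed)."""
    return [b ^ (1 if (k + 1) % i == 0 else 0) for k, b in enumerate(bits)]

state = [0, 1, 1, 0, 1, 0]  # one fixed "physical" bit pattern
views = {i: observer_view(state, i) for i in (1, 2, 3)}
# Each observer reads a different computation out of the same state.
assert len({tuple(v) for v in views.values()}) == 3
```

With infinitely many such schemes, the same state decodes to arbitrarily many distinct readings, which is what step (2) leans on.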
I agree with the conclusion, but not the reasoning.
I agree that "consciousness is not computation", if we convert that to "qualia is not computation", as the author seems to do. I agree because I don't think qualia -- like what red looks like to me -- can be communicated, whereas all computation seems communicable by writing out the Turing Machine. Maybe someone will convince me otherwise by describing redness.
However, the "triviality argument" seems pretty poor to me:
> 1. To say that a physical system is a computer requires an external observer to map the physical states of that system onto the abstract states of a Turing machine.
Does it? If nobody maps the states of AlphaZero to abstract states, will it fail at chess?
> 2. Consciousness does not require an external observer to exist.
I agree that my consciousness is not dependent on other people's consciousness, but I have no reason to be sure my consciousness is not external to my brain. If we're in a simulation, then it seems like my consciousness is external to my brain.
> 3. Therefore, consciousness cannot be reduced to computation.
Again, I agree, but not because of 1. and 2.
To me, consciousness/qualia seems like a mysterious, other-worldly phenomenon that observes interesting computations but doesn't affect them... like God running a universe simulation, then looking at one of the creatures in it and basically making up "what it's like" to be that creature, by filling in some gaps which are not dictated by the simulation, like what redness is like.
In conclusion: I think computers can definitely be conscious, but at the same time, I think consciousness/qualia is weird, and is not computation.
>No. But your experiment involves "you with a pencil and a paper" where "you" is conscious.
OK, if it were a mechanical computer, or an electronic computer, what difference would it make?
At what point does the computer start to have an independent experience of reality?
To be clear, I should mention I am not against consciousness emerging from physics; I just don't think it is very likely that it emerges from computation at the level of neural networks.
Primordial consciousness is not an intuitive thing to understand. I see a lot of hubris and arrogance from people who conflate consciousness with computation or intelligence.
There are lots of examples of computation without consciousness. So there is not much reason to think this, other than the fact that our own brains are both conscious and perform computation (and even then, the vast majority of that computation happens at a sub-conscious level).
>It's not necessary to use organic particles. We will probably soon build the electronic computer which will, from the perspective of the humans communicating with him, indeed behave "as a conscious person": soon we won't need you to write the answer you wrote, the computer will be able to make even better one.
Just because humans believe something is conscious doesn't make it conscious. The hard problem of consciousness is understanding why we have any experience at all. Consciousness is the illuminating quality of our minds, computation causes permutations in the nature of this experience, however it doesn't follow that it is what creates this experience to begin with.
"The theory that consciousness is nothing but running the right kind of computer program is wrong. Computation alone is insufficient to produce consciousness."
This is your opinion. Not fact.
"So if we accept that qualia exists (which, after all, seems intuitively sensible), we are burdened with the apparently impossible task of explaining how consciousness can be generated by physical processes. This is the crux of the “hard problem of consciousness.”"
No reason we can't explain qualia with some component or aspect of the brain matter - perhaps just the neocortex doing work on itself. It is not really known at this point, but there's no reason to think it cannot be known. The brain is extremely complex.
"Consciousness is observer independent"
Well yes you have a brain and it doesn't require other people's brains to function.
" If we decide that a machine is not a computer simply because it ever makes any errors, we will have to conclude that there are no computers at all in the real world. And if computers don’t exist, then consciousness cannot be computation."
This is straight up doodoo.
We've made bots with just a few dozen simulated brain-like neurons play some games far better than any human ever will. What makes you think an entire brain of such neurons is not capable of something as uncomplicated, by comparison, as your so-called "qualia"?
At the core of this argument is something like "hey, I don't want to believe that a simulation could produce the same thing I experience personally, so let me add an essay's worth of sophistry to convince you to feel the same way." It's almost religious. It is inherently unscientific.
yes. And yet here, for the first time, is evidence he might have been right. This is where we all look at our shoes and mutter something about burdens of proof.
Note that it is still possible to describe consciousness as a computer -- it just has to be a quantum one. I find a lot of us techies relax around strong materialist positions (Penrose, Churchland, Searle) once you have coerced your interlocutor into admitting that there is no way of proving that there isn't an abstract(able) substrate. We can probably all agree that my Core i7 is not a mind. The disagreement amounts to whether or not a computing substrate of which all the relevant components have tidy classical explanations can be a mind, and after reading this, my fence-sitting on that issue is done. Without a single example of a functioning classical-mechanical mind, and strong evidence that quantum effects are nurtured within the brain, it would appear the magical-thinking quantum-worshipping nonsense-spouting materialist weirdos have it this time. Dammit.
> consciousness is, at root, a physical phenomenon, not a purely computational phenomenon. Computation may be necessary to produce consciousness, but it cannot be sufficient.
What does "computational phenomenon" even mean?
I really don't see how any of the ideas presented in the text prove that "it cannot be sufficient", nor is it clear from the text what "physical phenomenon" the author has in mind.
The brain is a network of 86 billion organic electro-chemical switches making 100+ trillion inter-connections, each either firing electrical impulses or not... so how is that not "computational" in its nature? There are also chemical reactions going on that influence neurons, and possibly some EM field interference plays a role too, but that's all part of the "hardware"'s internal design.
The brain is, of course, not really modeled like the general-purpose programmable computers we're used to today; it's more like the old concept of specialized hardware machines and automata (or perhaps today's FPGAs), with the programming implemented in the hardware itself. The obvious difference is that the brain is incomprehensibly more complex than anything we can build, and, being made of living tissue, capable of re-arranging and re-purposing tricks that no silicon-based computer will ever manage -- but in the end, it's still a "computational machine" processing inputs and internal states and generating results based on them... what else would you call it without going into the spiritual stuff?
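The "neurons as electro-chemical switches" picture is essentially the classic McCulloch-Pitts model: a unit fires iff its weighted inputs reach a threshold, and such units compose into logic gates. A minimal sketch of that reading (illustrative only, not a claim about how real neurons work):

```python
def neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style threshold unit: fires (1) iff the
    weighted sum of its inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# An AND gate built from one such 'neuron': fires only when both inputs fire.
assert neuron([1, 1], [1, 1], threshold=2) == 1
assert neuron([1, 0], [1, 1], threshold=2) == 0
```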
If the human brain is conscious then obviously consciousness is in fact computable. I don't see how the authors can reconcile their findings with this.