Another flaw is that it assumes that the hardware component, e.g. the rocks, somehow represents the consciousness instead of the 'software'. For example, our brain is required for our own consciousness, but it is merely the hardware. And we see the hardware of the brain existing without consciousness when people fall into a coma. The rocks in the example are simply the hardware. Furthermore, the number of rocks needed to produce consciousness in this manner would likely cause them to coalesce into a planet under the influence of gravity, from which life and consciousness itself might arise! After all, Earth is a collection of rocks that gave rise to consciousness.
> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.
The word "alone" here is not correct. Claiming that software on a computer can produce feelings is not the same as claiming that software on a computer can produce feeling without having to interact with anything else. The latter claim is obviously absurd; organic life forms like us don't produce feelings without interacting with anything else, so why should we expect software on a computer to do so? But ruling out the latter claim still leaves open the possibility that software on a computer could produce feelings if it were interacting with the rest of the world in appropriate ways.
> I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures.
Rocks in a certain pattern are physical structures. So this doesn't rule out rocks in a certain pattern producing feelings.
> As of today we have no idea what properties such physical structures must have.
I don't think we're that badly off. We expect embodied brains of sufficient size and complexity to produce feelings, but we don't expect three pounds of jello to produce feelings. So clearly we have some knowledge of the properties that physical structures must have to produce feelings. They must have some level of complexity and heterogeneity; they must be able to store information in some way that is recoverable (three pounds of jello can "store information" in some sense, but there's no way to recover it in any useful sense); and they must be interacting with the rest of the world in appropriate ways. There's no reason why we couldn't eventually build computers with these properties.
What definition of consciousness are you basing this assertion on?
Or are you saying this axiomatically? "Consciousness is something that software can never be."
If we accept that consciousness in humans is some function of the working of the brain (perhaps with sensory organs too), then that entire hardware (wetware) could in theory be simulated, at which point why wouldn't the simulation (assuming it's captured all the nuances of the physical implementation) be conscious?
Of course, if you believe that consciousness in humans is something that requires "something else" e.g. a "spirit" or "soul", then your statement may be true (though even then it requires a theory of spirit/soul that such a substance cannot be correlated with any other hardware, and can only be associated with human brains).
Question for you. Do you believe that any other living beings e.g. dogs, possess consciousness or a level of consciousness?
Yes, they could, why not? From the perspective of the consciousness created by the rocks' computation, it doesn't necessarily need to know what material its hardware is made of. I.e., let's say we were in a simulation and the layer above us were using rocks to compute; we wouldn't even know that that's what they were using.
I think it’s important to note that the article isn’t saying that if you build a brain-like thing, it can’t be conscious. It’s arguing that if you simulate a brain-like thing purely in software it can’t be conscious. I’m not saying one argument has more merit than the other (not that anyone is going to be able to prove anything is conscious either way).
The flaw in the argument is that the author jumps from a conjecture that it may be possible to achieve consciousness via mechanical computation, to the assumption that all mechanical computation is consciousness. I don't think that necessarily follows at all.
Consciousness in naturally formed rock both lacks evidence and is absurd; there isn't even a coherent reason to think it might or could occur.
Your computer processes information, and can be configured to process information about itself. At least the possibility and potential utility exists that conscious computers may one day exist.
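As a toy illustration (assuming the snippet is saved and run as an ordinary .py file), a program can trivially process information about itself:

```python
# Toy illustration (assumes this script is saved and run as a .py file):
# a program processing information about its own source code.
source = open(__file__).read()
print(f"My source is {len(source)} characters across "
      f"{len(source.splitlines())} lines.")
```

Self-reference of this kind is cheap; whether any amount of it adds up to consciousness is the open question.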
Software can be represented in any arbitrary way (e.g. in a book) and computations can be carried out from the software instructions in any arbitrary way (e.g. by arranging rocks in certain patterns). If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.
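To make the substrate point concrete, here is a minimal sketch (all names hypothetical) of one computation, a single step of the Rule 110 cellular automaton, running unchanged on two interchangeable substrates, one of which stores its bits as rocks and gaps:

```python
# Minimal sketch (hypothetical names): the same program runs on two
# interchangeable substrates; it never inspects what its bits are made of.

RULE_110 = {  # (left, centre, right) -> next state of the centre cell
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

class ListSubstrate:
    """Bits stored in an ordinary Python list."""
    def __init__(self, bits): self.bits = list(bits)
    def __len__(self): return len(self.bits)
    def read(self, i): return self.bits[i % len(self.bits)]
    def write(self, i, b): self.bits[i] = b

class RockSubstrate:
    """Bits stored as rocks ('R') and gaps ('.') in a row of sand."""
    def __init__(self, bits): self.row = ["R" if b else "." for b in bits]
    def __len__(self): return len(self.row)
    def read(self, i): return 1 if self.row[i % len(self.row)] == "R" else 0
    def write(self, i, b): self.row[i] = "R" if b else "."

def step(cells):
    """One Rule 110 update on any substrate exposing read()/write()."""
    nxt = [RULE_110[(cells.read(i - 1), cells.read(i), cells.read(i + 1))]
           for i in range(len(cells))]
    for i, b in enumerate(nxt):
        cells.write(i, b)

for cells in (ListSubstrate([0, 0, 0, 1, 0, 0, 0, 0]),
              RockSubstrate([0, 0, 0, 1, 0, 0, 0, 0])):
    for _ in range(3):
        step(cells)
    print([cells.read(i) for i in range(len(cells))])  # identical output
```

The computation itself really is substrate-indifferent; whether the rock version could ever feel anything is exactly what's at issue.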
I think this idea is absurd. I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures. As of today we have no idea what properties such physical structures must have. It follows that computers are no more likely to become conscious than e.g. washing machines.
I love the bunch of rocks example and think about it a lot.
If a simulation cannot produce consciousness, what is special about life that makes it unrepresentable with physics? Where is the boundary between a conscious system and a non-conscious one?
Conscious minds certainly seem bound by their physicality: you can figure out how feelings and experiences are represented molecularly and electrochemically, and affect those feelings and experiences experimentally. So far we've not found anything that cannot be broken down into physics (not to say we won't), but if we assume there's nothing else _but_ physics, then I think we're left with two options:
- The physics consciousness relies upon is fundamentally uncomputable
- Consciousness _is_ computable and all the weird consequences that follow (like sand computers) are true
I lean towards the second option, but I struggle to understand it in any intuitive sense.
What are your thoughts on why a simulation cannot be conscious?
This is trivially true as you state it; that's why I added the qualifier "in appropriate ways". Not all interactions will produce consciousness. One obvious difference between us and your hypothetical "rockputer" is that the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival; rocks simply aren't built that way. Neither are star systems or galaxies. But we are.
> It's the distinction between programming consciousness and physically reproducing it.
Except that there is none: consciousness is "programmed" into our brains (by hard-wiring + social experiences); artificial intelligence is "physically reproduced" as a mesh of transistors and the higher-level structures built from them (logic cells, LUTs, FPGAs, PSMs, CPUs, HPC clusters, etc.).
(Partially relying on software is not an issue: even a brain completely simulated in software could, in principle, exhibit consciousness just as well as a "wetware" one can.)
I don't get this either. I used to think about these experiments and my conclusions are completely opposite. Any physical medium that runs the right program would be conscious. And the right program is probably a very broad category.
>It already seems implausible to me that a vast desert of rocks being manipulated into various patterns is conscious. What exactly is conscious here? What happens if I accidentally kick a rock to the side — have I killed whatever ghostly being inhabits these rocks?
If you kick a rock to the side it's probably analogous to making someone's neuron misfire. If the pebble computer is as sturdy as a human brain then there would be probably no noticeable effect.
> If it's the behavior of neurons that's special, then consciousness is no longer identical with neurons, it's identical with any physical system that functions the same way.
Sure, I don't think brain uploads are impossible. If we were having this conversation in 100 years, I might be saying that "red experience" is the result of neuronal/circuit activity. But "functions the same way" is the relevant part - a rock does not function the same way. I don't think either of us expects a human to act the same way if their brain is replaced by a rock. On some level, we both know that the brain is fundamentally different from a rock.
> And then you have the possibility of very counterintuitive arrangements, like a billion Chinese instantiating a blue experience, or a meteor shower simulating experiences.
I think we can both agree that a computer is merely a physical object. That doesn't mean that surfing the web or playing a video game on "a billion Chinese"/"a meteor shower" is any less counterintuitive. Imagining any incredibly complex system being completely simulated by random physical phenomena is bizarre.
And the human brain is much, much more complex than a laptop. Really, go read up on it - 86 billion neurons, 100 trillion synapses, neurons firing 200 times a second. Consider the work it takes to simulate one second of brain activity (with far fewer neurons and synapses than a human brain)[1]:
> The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion synapses and was run on Japan's K computer, which was ranked the fastest in the world in 2011.
> It took the Fujitsu-built K about 40 minutes to complete a simulation of one second of neuronal network activity in real time, according to Japanese research institute RIKEN, which runs the machine.
> The simulation harnessed the power of 82,944 processors on the K computer, which is now ranked fourth on the biannual international Top500 supercomputer standings (China's Tianhe-2 is the fastest now).
You're far more likely to see the dust in the air randomly play Casablanca for you from start to finish than to see a meteor shower randomly start simulating the human brain. The human brain is complex. Really, really, really complex. So complex that weird emergent stuff like "red experience" happens. People have a hard time conceptualizing things when they become so vast; that's understandable. But there's no need to invent invisible qualia simply because we have a hard time understanding things on this scale.
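To put numbers on that complexity, here's the back-of-envelope arithmetic behind the figures above (a minimal sketch; the linear extrapolation from the K computer run to a full human brain is a naive assumption, since real simulations don't scale that cleanly):

```python
# Back-of-envelope arithmetic; figures come from the comment above and
# the quoted RIKEN report. Linear scaling is a deliberately naive assumption.

human_synapses = 100e12      # ~100 trillion synapses in a human brain
sim_synapses = 10.4e12       # synapses in the 2013 K computer simulation
sim_slowdown = 40 * 60       # K needed ~40 min of wall time per simulated second

# Naive linear extrapolation to a full human brain on the same hardware:
human_slowdown = sim_slowdown * (human_synapses / sim_synapses)
print(f"~{human_slowdown:,.0f}x slower than real time")  # ~23,077x

# Upper-bound event rate if every synapse saw activity at 200 Hz:
events_per_second = human_synapses * 200
print(f"~{events_per_second:.0e} synaptic events per second")  # ~2e+16
```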
> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.
I don't think this is as absurd as it sounds. I think it was Dennett who said our intuition about consciousness is pretty sensitive to timing. The rock construction you describe would "think" a billion times slower than a human brain, and there is something unsettling or unintuitive about a consciousness that operates in slow motion. I would expect extremely fast-paced AI to think that the idea that human beings are conscious is similarly absurd.
Also, consciousness "feels" like it's ineffable, so it makes sense that we would have an inherent bias against understanding it as a process. There is something we see in our consciousness that we simply cannot wrap our minds around in any way (possibly because we're hallucinating it).
So yes, I would bite the bullet on this: consciousness could emerge from placing rocks in a certain pattern following instructions in a book. It would just be an excruciatingly "slow" consciousness.
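Just to put that slowness in perspective, taking the billion-fold slowdown above at face value (the factor itself is only a guess):

```python
# Rough arithmetic on the "slow consciousness" point; the 1e9 slowdown
# factor is the guess from the comment above, the rest follows from it.

slowdown = 1e9                   # rocks "think" a billion times slower
seconds_per_year = 3600 * 24 * 365

years_per_subjective_second = slowdown / seconds_per_year
print(f"~{years_per_subjective_second:.0f} years per subjective second")  # ~32

years_per_subjective_day = slowdown * 86400 / seconds_per_year
print(f"~{years_per_subjective_day:,.0f} years per subjective day")  # ~2,739,726
```

A single subjective day for the rock mind would outlast recorded human history many times over.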
Not saying you can't run any kind of software on Turing-complete hardware, but consciousness is far more than just software. The only proof of consciousness we have requires a brain, where software and hardware are the same thing: new thoughts are real physical connections in the brain.
> Isn't that awesome? But ... "the philosophical assumptions fail, and human immortality through uploading is fundamentally impossible".
To boil it down: they assume, roughly, that simulating the state and functionality of the neurons is sufficient to reproduce consciousness.
If there's something more to consciousness - say for example that consciousness requires the specific organisation of matter of the human brain - then uploading, at least into software, will fail.
I don’t follow you. I’m saying we haven’t discovered any inorganic consciousness, so it isn’t a given that we will be able to create it with digital computers. Not sure how that breaks the laws of physics.
The hardware isn't the point. A modern computer is probably sufficient to support at least some sort of consciousness with the right software, but it cannot be conscious while it's turned off. There's no process occurring that could implement consciousness. A language model is effectively turned off except during inference.