Yes they could, why not? From the perspective of a consciousness created by the rocks' computing, it doesn't need to know what material its hardware is made of. I.e., if we were in a simulation and the layer above us were using rocks to compute, we wouldn't even know that that's what they were using.
Another flaw is that it assumes that the hardware component, e.g. the rocks, somehow constitutes the consciousness instead of the 'software'. For example, our brain is required for our own consciousness, but it is merely the hardware, and we see the hardware of the brain existing without consciousness when people fall into a coma. The rocks in the example are simply the hardware. Furthermore, the number of rocks needed to produce consciousness in this manner would likely cause them to coalesce into a planet under the influence of gravity, from which life and consciousness itself might arise! After all, Earth is a collection of rocks that gave rise to consciousness.
I think computers could be conscious (whatever that means) if they modelled living minds to a sufficiently high degree of precision. I do not see why the physical substrate would matter.
Consciousness arising in naturally formed rock both lacks evidence and is absurd, as there isn't even a coherent reason to think it might or could happen.
Your computer processes information, and can be configured to process information about itself. At the very least, the possibility exists that conscious computers may one day be built, and that they would be useful.
This is trivially true as you state it; that's why I added the qualifier "in appropriate ways". Not all interactions will produce consciousness. One obvious difference between us and your hypothetical "rockputer" is that the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival; rocks simply aren't built that way. Neither are star systems or galaxies. But we are.
Exactly, and even if nobody has ever built it or written a formal proof that it would work, you can still imagine how it would work and prove it to your own mind.
(This would work for a purely mechanical computer made out of rocks, too.)
Personally, with enough knowledge about how modern AI works, and a reasonable understanding of the neuroscience literature, I feel there is no fundamental mystery to consciousness, only awe at the level of complexity harnessed.
I have an inherent problem with these discussions: I'm not convinced a simulation can be conscious.
There's an xkcd comic titled "A Bunch of Rocks" at https://www.xkcd.com/505/. Could dropping rocks in the sand to simulate a Turing machine produce a conscious being? If not, why can a computer?
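The comic's setup is easy to make concrete. Here's a minimal sketch (mine, not from the comic) of a Turing machine stepper in Python; nothing in the logic cares whether a tape cell is a bit in RAM or a rock in the sand:

```python
# Minimal Turing machine stepper. The tape is a sparse dict of
# position -> symbol; the substrate could be anything that can
# hold a symbol and be moved around.

def run(rules, tape, state="A", head=0, steps=100):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    for _ in range(steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return tape

# The classic 2-state "busy beaver": halts after 6 steps, leaving 4 ones.
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}
tape = run(rules, {})  # 4 ones on the tape
```

Replace the dict with piles of rocks and the `rules` lookup with a person consulting a rulebook, and you have the comic's machine.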
I don't get this either. I used to think about these experiments and my conclusions are completely opposite. Any physical medium that runs the right program would be conscious. And the right program is probably a very broad category.
>It already seems implausible to me that a vast desert of rocks being manipulated into various patterns is conscious. What exactly is conscious here? What happens if I accidentally kick a rock to the side — have I killed whatever ghostly being inhabits these rocks?
If you kick a rock to the side it's probably analogous to making someone's neuron misfire. If the pebble computer is as sturdy as a human brain then there would probably be no noticeable effect.
Software can be represented in any arbitrary way (e.g. in a book) and computations can be carried out from the software instructions in any arbitrary way (e.g. by arranging rocks in certain patterns). If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.
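As a toy illustration of that point, the same program can be stored as plain text on a "book page" and still drive exactly the same computation once something reads and follows it. The parity machine below is a made-up example:

```python
# A program written down as a page of a book: a table of
# state-transition instructions in plain text.
BOOK_PAGE = """
even 0 -> even
even 1 -> odd
odd  0 -> odd
odd  1 -> even
"""

# "Reading the book": parse the page into a transition table.
rules = {}
for line in BOOK_PAGE.strip().splitlines():
    state, symbol, _, next_state = line.split()
    rules[(state, int(symbol))] = next_state

def run(bits, state="even"):
    """Follow the book's instructions over a sequence of bits."""
    for bit in bits:
        state = rules[(state, bit)]
    return state

run([1, 0, 1, 1])  # "odd" -- three 1s in the input
```

Whether the table lives in RAM, in ink, or in rows of rocks makes no difference to what gets computed.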
I think this idea is absurd. I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures. As of today we have no idea what properties such physical structures must have. It follows that computers are no more likely to become conscious than e.g. washing machines.
I can accept (and to be honest even like) the idea, that consciousness somehow emerges from the complex structures in an animal brain, that there is no soul, no other planes of reality, no special quantum phenomena needed, etc.
Maybe we could create a synthetic artificial conscious mind. At worst we could simulate a full human brain at whatever level is necessary. I can accept that.
What's crazy to me is the following: It's not the computer that's conscious. Instead, the computation itself is conscious. And the computation is obviously matter-independent. As a thought experiment it would be possible to compute it on paper and those pen and paper calculations would be conscious. Or pebbles in a desert XKCD style.
The rockputer comprises both the rocks and the mechanisms for moving the rocks in response to input. If the rock moving mechanism is structured properly, then the rock movement patterns could adapt to changes in the inputs to the overall rockputer system.
The level of complexity of the "passive" components of the system (i.e. the rocks) is irrelevant to whether or not the system can effect conscious-seeming behaviour when acting dynamically. Analogously, the underlying components of people, i.e. atoms, are clearly quite dumb on their own. When those atoms are allowed to evolve collectively over time, according to dynamics dictated by basic physical laws, conscious-seeming behaviour magically appears.
You can't deny the possibility of a conscious rockputer just by considering properties of the rocks.
I love the bunch of rocks example and think about it a lot.
If a simulation cannot produce consciousness, what is special about life that makes it unrepresentable with physics? Where is the boundary between a conscious system and a non-conscious one?
Conscious minds certainly seem bound by their physicality, you can figure out how feelings and experiences are represented molecularly and electrochemically and affect those feelings and experiences experimentally. So far we've not found anything that cannot be broken down into physics (not to say we won't), but if we assume there's nothing else _but_ physics – then I think we're left with:
- The physics consciousness relies upon is fundamentally uncomputable
- Consciousness _is_ computable and all the weird consequences that follow (like sand computers) are true
I lean towards the second option, but I struggle to understand it in any intuitive sense.
What are your thoughts on why a simulation cannot be conscious?
If electrons are the lowest level of granularity when it comes to determining state in computers then maybe it is the same in the mind?
I imagine a computer made of silicon, with the trillions of grooves that the mind has in biological form, would find itself conscious just the same. Remove the silicon from the computer, or the biomass from the brain, leaving just the electrons behind, and you would have a network that I would argue gives rise to the seat of consciousness.
It probably would matter what type of computations it would be running, no? I think if a silicon computer ran the exact same computations as a conscious brain it would be conscious. If brain tissue ran some simple algorithm for playing Doom or predicting the next word, it wouldn't be conscious.
I didn't mean that a brain made of silicon cannot be conscious. We are trying to simulate whatever we have learned about the brain, and that understanding is not complete. So an implementation in silicon (a circuit) will not be conscious until we figure out how to implement it.
> the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival
Yes it can. Some natural events, for example a flood or an earthquake, can destroy parts of the rockputer. It is therefore important for it to store the various parts of itself strategically. It shouldn't put its vital parts near the coast, or a tsunami may kill it. It should store its own consciousness in a robust way, so that it can recover from an earthquake. It's probably too slow to actually see either of them coming, but it can certainly prepare itself.
Or imagine you build two rockputers, one with black stones, another with white stones, and you have rules to remove stones when both rockputers try to expand into the same territory, a bit like in the game of Go. Then one can kill the other.
Star systems interact with each other through gravity, so you could conceive of them as some kind of gargantuan atoms, capable of making complex structures, including conscious ones. Granted, there doesn't seem to be an equivalent of the other forces at that scale, so probably it wouldn't work, but you see what I mean.
I think the author is misunderstanding the XKCD comic about rocks [1] (or maybe I am). Just because it's possible to run a simulation of the universe on rocks doesn't mean that rocks are conscious or Turing complete. You can't forget about the person who is manipulating the rocks. The system as a whole is Turing complete. Likewise with the bar of iron example that the author gave, you can't forget about the person interpreting the atoms in the bar of iron. The system as a whole is Turing complete (and also naturally conscious, because the person doing the interpretation is conscious).
And there is nothing physical necessary to represent such systems. You can simulate Turing complete systems inside Turing complete systems [2]. So I don't see why consciousness has to be a "physical phenomenon" as the author claims.
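For a concrete (if toy) example of that nesting: below is a sketch of Python, one Turing-complete system, hosting an interpreter for Brainfuck, another Turing-complete system. Run the interpreter inside a simulation and you get arbitrary layers:

```python
def bf(code, input_bytes=b""):
    """Minimal Brainfuck interpreter: one Turing-complete system
    (Python) hosting another (Brainfuck)."""
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(input_bytes)
    # Precompute matching brackets so loops can jump both ways.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

# 6 * 10 + 5 = 65 = ASCII 'A', computed entirely by the hosted machine:
bf("++++++[>++++++++++<-]>+++++.")  # b"A"
```

The hosted machine never "knows" it is running on Python rather than on rocks, which is the whole point.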
I don’t follow you. I’m saying we haven’t discovered any inorganic consciousness, so it isn’t a given that we will be able to create it with digital computers. Not sure how that breaks the laws of physics.
> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.
The word "alone" here is not correct. Claiming that software on a computer can produce feelings is not the same as claiming that software on a computer can produce feeling without having to interact with anything else. The latter claim is obviously absurd; organic life forms like us don't produce feelings without interacting with anything else, so why should we expect software on a computer to do so? But ruling out the latter claim still leaves open the possibility that software on a computer could produce feelings if it were interacting with the rest of the world in appropriate ways.
> I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures.
Rocks in a certain pattern are physical structures. So this doesn't rule out rocks in a certain pattern producing feelings.
> As of today we have no idea what properties such physical structures must have.
I don't think we're that badly off. We expect embodied brains of sufficient size and complexity to produce feelings, but we don't expect three pounds of jello to produce feelings. So clearly we have some knowledge of the properties that physical structures must have to produce feelings. They must have some level of complexity and heterogeneity; they must be able to store information in some way that is recoverable (three pounds of jello can "store information" in some sense, but there's no way to recover it in any useful sense); and they must be interacting with the rest of the world in appropriate ways. There's no reason why we couldn't eventually build computers with these properties.