This seems to be attempting to "prove" that if you regard consciousness as "containing the bits of a specific program", you could also see that program in random data by effectively applying a one-time pad to it (which can indeed produce any possible interpretation of the data), and it treats this as a proof by contradiction.
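The one-time-pad trick the article relies on is easy to make concrete: for any random data R and any target bit-string T, the pad P = R XOR T "decodes" R into T, so an interpreter free to pick its own pad can find any program in any noise. A minimal sketch (the names here are purely illustrative):

```python
import os

def pad_for(random_data: bytes, target: bytes) -> bytes:
    """Construct the one-time pad that decodes random_data into target."""
    return bytes(r ^ t for r, t in zip(random_data, target))

target = b"consciousness.exe"       # the "program" we want to find
noise = os.urandom(len(target))     # arbitrary random data

pad = pad_for(noise, target)
decoded = bytes(r ^ p for r, p in zip(noise, pad))
assert decoded == target            # any noise "contains" any program
```

The catch, of course, is that all the information about the target lives in the pad, not in the noise, which is exactly the objection about where the work is really being done.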
And leaving that aside, while the assertion is that consciousness is not "computation", the reasoning seems focused on the storage of bits rather than on the execution of an actual program defined by those bits that goes from one state to another in a meaningful fashion. Storing a program and running a program are two different things.
Ultimately, this article seems to have started out with an assertion to support, and then tried (unsuccessfully) to turn that assertion into something more than an assertion.
I don’t really get the argument either. The author only seems to demonstrate that different observers can draw different conclusions from different observations of the same phenomena. The author requires consciousness to be observer-independent, but surely that doesn’t also require that all observers be able to correctly conclude whether they’re observing a conscious entity at any given time.
Yeah, the author claims that consciousness is observer-independent, but then constructs systems that depend on an "observer" (or rather, an interpreter) to make the system Turing-complete. The bar of iron isn't conscious or Turing-complete just because one person can interpret it so. The bar of iron plus the interpreter form a complete system. And in fact the bar of iron is really not doing anything in this case; it's the interpreter doing all the work, so it's more like saying "this human interpreter is conscious". Not a very insightful conclusion.
The experiment is okay; it's actually a special case of a concept explored in Egan's Permutation City, and your observation about what's really doing the work applies there too. Except the story goes in unsettling directions by taking the noise-generator aspect seriously. Things get interesting when sections of the patterns become self-interpreting.
A similar thing could be done for brains: record, with the necessary accuracy, all voltages, membrane potentials, and any key biochemical concentrations. This will take a finite number of bits. Then find a decoding of data recorded from a heated iron bar, convert those readings, and play them back into a state-clamped brain instead of the original recording. Does being able to read a conscious state into brains from hot iron invalidate them too?
Another relevant story is Wang's Carpets. We might look at some alien moss or fungal mat and think it primitive. But later our technologies and knowledge advance to the point where we can see it's running a complex computation with intelligent agents inhabiting it. Did the creatures not exist until we could decode them?
One of its pivotal flaws is:
> Since there is no definition of computation without reference to an external observer, a system in isolation just cannot compute, which suggests that a conscious being cannot compute.
This is an assumption they do not try to prove, and cannot. It's also what much of their argument rests on.
Related ideas are subjectivity of emergence or what counts as an observer for Wigner's Friend.
You make a good point about discarding the source of random noise that the one-time pad is being applied to, and just focusing on the thing that's generating that one-time pad.
But I still don't know where to draw the line and how to justify it.
If that source of random noise mapped to a Turing machine running consciousness.exe for a short period of time by sheer chance without a one-time pad being applied to it by an external observer, would that classify? If we observed that this mapping held true by sheer chance as we observed additional bits in this random noise source, what about then? Does it make a difference that it's a random noise source that happens to be corresponding to a Turing machine for a period of time, and not an "actual" computer? And if that matters, what about the point that actual computers aren't perfectly deterministic, either?