The iron bar argument is terrible for exactly the reason you say. The atoms in the iron bar would have to evolve following the rules of a Turing machine and they won't. So you'd have to define a mapping which varies over time. There's nothing stopping us from doing that, but then I can do the same with a pumpkin and my brain. Therefore I'm not conscious. Sad!
I think you are conscious, and so is the combination of iron bar and crazy complex mapping. In the latter case, the crazy complex mapping does all the work and in some sense _is_ conscious.
Right, if you were to compute the mapping in real time on a computer, that would be conscious but it's really got nothing to do with the actual iron bar.
Scott Aaronson (if memory serves) had an interesting take on this. He framed his argument in terms of the Turing test, but it would apply equally well to the mapped iron bar:
In theory, a giant lookup table could pass any Turing test of finite duration (e.g. an hour or less) run in a chat room with finite bandwidth. Just look up the entire conversation so far to see which answer should be given. The lookup can be implemented trivially on any Turing machine (and doesn't even need the machine's full power).
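As a toy sketch of the idea (the names and table contents are made up, of course; a real table covering every possible conversation wouldn't fit in the observable universe), the whole "chatbot" reduces to a dictionary keyed on the conversation so far:

```python
# Toy illustration of the lookup-table "chatbot". The entries here are
# invented; the real table would need one entry per possible conversation
# prefix, which is astronomically large.
giant_table = {
    (): "Hello! Ask me anything.",
    ("Hello! Ask me anything.", "Are you conscious?"): "I certainly feel like I am.",
    # ... one entry for every possible finite conversation prefix ...
}

def reply(history: tuple[str, ...]) -> str:
    # The entire "intelligence" is a single lookup on the past conversation.
    return giant_table.get(history, "I'm not sure what to say.")

print(reply(()))  # -> "Hello! Ask me anything."
```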
Now there's multiple directions you could take this. Here's Scott with one of them:
> Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” (Indeed, Searle argues that no physical system can understand anything “purely by virtue of” the computations that it implements.) In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.
> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.
We only ever have finite conversations in real life, and strictly speaking big-O notation is about behaviour as the input size grows without bound.
So any application of big-O notation to this would require some generalisation and abuse of notation. It's a bit hard to formally argue which abuse is The Right One.
Today you don't even need a lookup table; most short GPT-3-generated dialogues seem perfectly fine from a Turing-test perspective. That form of the Turing test stopped being useful long ago.
Still, the Turing test was never meant as a measure of "consciousness", right?
Turing's original test was meant as an adversarial, interactive test.
GPT-3 can generate natural-looking text and even dialogues, if you don't press it too hard. But a motivated adversary can still tell GPT-3 from a real human pretty quickly.
I've never been able to see why one would give the lookup table any credence. It's just kicking the can down the road one step in terms of abstraction.
The second you assert that the lookup table can pass a Turing test with, e.g., a gigabyte of exchange, then the table indexed by every possible one-gigabyte string becomes your state space and program, the page number becomes the state, and you've got just as much complexity as any other simulation with a gigabyte of state. You haven't changed the parameters of the problem at all.
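Back-of-the-envelope sketch of that point (the one-gigabyte figure is just the hypothetical above, nothing more):

```python
import math

# Number of possible one-gigabyte transcripts = number of distinct
# configurations of one gigabyte of state: both are 2^(8 * 10^9).
state_bits = 8 * 10**9
log10_count = state_bits * math.log10(2)   # ~2.4 billion decimal digits
print(f"lookup-table entries ~ 10^{log10_count:.3g}")
print(f"1 GB machine states  ~ 10^{log10_count:.3g}")
```

Either way you count it, the complexity is the same; it has just been moved around.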
Of course, the reduction only works one way. The universe-dwarfing lookup table could do things that are much more impressive than what your brain does.
The iron bar is obviously not a Turing machine and doesn't run programs; that argument is totally bonkers. The author assumes it's correct, but it's not. With the same success you could look at a turned-off computer, assign interpretations to its atoms, and imagine it runs Linux, but it's just a fantasy.
I find the iron bar argument singularly unconvincing; but whether or not it simulates some Turing Machine is neither here nor there, because the Turing Machine model is orthogonal to consciousness.
An encoding which varies over time, dependent on previous states, does not seem impossible. I don't think that objection is enough to conclude that a sufficiently complex system cannot simulate an abstract machine.
It does not make the article's argument valid, however; it just means the iron bar argument is still worth thinking about.
The article says that consciousness cannot be computation because computation requires an observer to derive meaning. But nothing precludes sufficiently complex encodings from themselves being accurate simulations of abstract machines. I think such a class of simulations being able to support themselves might be the nature of consciousness. Some physical systems might be better suited to allow the emergence of such recursive simulations.