
Is it always possible to find a mapping under which the iron bar seems to run the program consciousness.exe? That is not obvious.

Let's say the Turing machine has states T, and the iron bar has states B. The program consciousness.exe is a dynamical system on T, called t: T->T, and the iron bar is a dynamical system on B, called b: B->B. Both of those functions/dynamical systems are given, because we observe them.

What the triviality argument says is that for a large class of "obviously non-conscious b" there exists a (bijective) mapping F: T -> B such that t = F^-1 o b o F, no matter what t is.

Is this true?

Consider the cardinality of the set of functions/programs on T. It is |T|^|T|. What is the cardinality of the set of bijective mappings between T and B? At most |T|! (and zero unless |T| = |B|). Much smaller! Therefore it is not at all obvious that Alice, Bob, Claire or any of their friends will find such a mapping, even if they methodically try all of them.
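The counting argument above can be checked numerically for small state sets. This is just an illustrative sketch (the variable names are mine, not from the post): for n states there are n**n possible dynamical systems but only n! bijections, and the gap widens fast.

```python
from math import factorial

# Toy check of the cardinality argument: for a state set of size n,
# there are n**n dynamical systems t: T -> T, but only n! bijections
# F: T -> B (assuming |T| == |B| == n; otherwise there are none).
for n in range(1, 8):
    num_programs = n ** n
    num_bijections = factorial(n)
    print(n, num_programs, num_bijections, num_bijections < num_programs)
```

For every n > 1 the last column is True, which is the point: there are far fewer candidate mappings than programs.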

This seems to me an issue with the presented argument.

Although personally I still think consciousness is a physical phenomenon we just have not mapped out yet. Like magnetism or radioactivity before they were properly explained.




Scott Aaronson (if memory serves right) had an interesting take on this. He framed his thought in terms of the Turing test, but the argument would apply equally well to the mapped iron bar:

In theory, we could pass any Turing test of finite duration (e.g. an hour or less), run in a chat room with finite bandwidth, using a giant lookup table: just look up the entire past of the conversation to see what answer should be given. The lookup can be implemented trivially on any Turing machine (and doesn't even need the machine's full power).
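A toy sketch of that lookup-table idea (the table entries here are invented for illustration; a real table would be astronomically large):

```python
# The entire (finite) conversation history is the key; the canned reply
# is the value. No reasoning happens at lookup time -- whatever
# "intelligence" there is was frozen into the table beforehand.
lookup_table = {
    (): "Hello! How can I help?",
    ("Hello! How can I help?", "What is 2+2?"): "4, of course.",
}

def reply(history):
    return lookup_table.get(tuple(history), "I have no entry for that.")

print(reply([]))
print(reply(["Hello! How can I help?", "What is 2+2?"]))
```

The table only has to cover conversations up to the test's finite duration, which is why the construction works in principle and is absurd in practice.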

Now there's multiple directions you could take this. Here's Scott with one of them:

> Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” (Indeed, Searle argues that no physical system can understand anything “purely by virtue of” the computations that it implements.) In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.

> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.

See 'The Lookup-Table Argument' in https://www.scottaaronson.com/papers/philos.pdf for the rest. It's all very fascinating.


> The physical process is simply that the atoms in the bar of iron start in one Turing machine state, obey the laws of physics, and end up in another Turing machine state.

I don't see how this addresses the key point -- those aren't random distinct states; there's an explicit description of how one state transitions to the next. That's part of the definition of the Turing machine!

If you could describe a mapping from the state of your iron bar to a Turing machine's state, such that the bar correctly transitions from one machine state to the next, then yes, it is a computer! Although as others have noticed, you'd need a ludicrously large and complex mapping; the infinitesimal tail would be wagging the infinite dog. At that point you could certainly argue that the computation is really happening in the mapping itself.


> there's an explicit description of how one state transitions to the next. That's part of the definition of the Turing machine!

A Turing machine does indeed describe which states the machine transitions between; that is part of its definition. So it might say: if the machine reads a "1" while in state "A", transition to state "B".

However, crucially, a Turing machine does not specify the physical mechanism by which this state transition occurs. You just have to enumerate the rules for state transitions. The definition of a Turing machine says nothing about whether or not these states are voltages, the locations of rocks, or the magnetic moments of iron atoms. Nor does it say whether the transition is being implemented by a CPU or a human being manipulating rocks.

So the only rigorous way you can make a connection to a physical computer is to just identify a mapping from physical states to Turing machine states, and observe whether or not it performs all the state transitions correctly. If it reads a "1" and is in state "A", did it go to state "B"? If it did, it is behaving as a Turing machine. If it did not, it is not behaving as a Turing machine.
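That check can be stated compactly: a candidate mapping F implements the machine iff it commutes with the dynamics, i.e. F(b(x)) = t(F(x)) for every physical state x. A toy sketch (state names and dynamics are illustrative, and F here maps physical states to machine states, the direction described in the comment):

```python
# A mapping F "implements" machine dynamics t on physical dynamics b
# iff every physical transition is tracked by the corresponding
# machine transition.
def implements(F, b, t, physical_states):
    return all(F[b[x]] == t[F[x]] for x in physical_states)

b = {"p0": "p1", "p1": "p2", "p2": "p0"}    # toy "iron bar" dynamics
t = {"A": "B", "B": "C", "C": "A"}          # toy machine dynamics
F_good = {"p0": "A", "p1": "B", "p2": "C"}  # tracks every transition
F_bad  = {"p0": "A", "p1": "C", "p2": "B"}  # gets transitions wrong

print(implements(F_good, b, t, b))   # True
print(implements(F_bad, b, t, b))    # False
```

Passing this check for every state and every step is exactly what "behaving as a Turing machine" means here; nothing about the physical substrate enters into it.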

> At that point you could certainly argue that the computation is really happening in the mapping itself.

The mapping is fixed, so I don't see how it can do any computation. Others have objected to the fact that the mapping in the iron bar is obviously extremely complicated, but this strikes me as ultimately being an objection that this doesn't "feel" like computing.

But as a counterexample, you can do computations under homomorphic encryption. Suppose you have encrypted a computer program that sorts a list using homomorphic encryption. You hand it to me to run on my CPU. When I observe the voltages of the CPU, they will appear completely random to me. Only if I have the key can I provide the correct (complicated) mapping from physical states to Turing machine states and interpret what is going on on the CPU. But if we reject "complicated" mappings as not being "real" computations, then you're forced to say that computation under homomorphic encryption isn't "really" computation --- even though you've handed me a program to sort a list, it ran on my machine, and it handed you back a sorted list.
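The homomorphic-encryption point can be illustrated with a much weaker stand-in (this is not real FHE, just a secret relabeling of states): without the key the transition table looks arbitrary, with the key it is exactly the original computation.

```python
import random

# Toy analogue: a secret permutation of state labels plays the role of
# the encryption key. The underlying computation is a mod-8 counter.
random.seed(0)
states = list(range(8))
key = states[:]
random.shuffle(key)                       # the secret permutation
enc = {s: key[s] for s in states}
dec = {v: s for s, v in enc.items()}

def plain_step(s):                        # the "real" computation
    return (s + 1) % 8

# What an observer without the key sees: a scrambled transition table.
scrambled = {enc[s]: enc[plain_step(s)] for s in states}

# With the key, the scrambled system is the counter again.
s = scrambled[enc[3]]
print(dec[s])   # 4: decrypting recovers the counter step
```

The scrambled table is a perfectly good computation of the counter; it only looks like noise to an observer lacking the (here trivially small, in real FHE very complicated) mapping.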


I think the problem stems from the author's definition of computation. He has defined it around the state of a Turing machine, which is vastly insufficient.

The important part of a Turing machine is that of the behavior: given a certain state, the next state is a result of the current instruction on the tape.

It is easy to imagine identifying some mapping between a Turing machine state and an iron bar. However, mapping each subsequent state to a valid state transition of the same Turing machine seems quite unlikely, graduating to impossible, as the Turing machine executes.

Said another way, a program is not its memory dump.
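A small sketch of that distinction (the dynamics here are made up for illustration): any single snapshot of the bar can be labelled to match any single machine state, but a valid encoding must track every transition, and for most bar dynamics no encoding does.

```python
import itertools

# t is a 5-cycle (the "program"); b collapses every bar state to 0.
# A memory-dump match at one instant is trivial, but no bijection p
# can satisfy p(b(x)) == t(p(x)) for all x, because conjugation by a
# bijection cannot turn a non-bijective map into a cycle.
n = 5
t = {i: (i + 1) % n for i in range(n)}
b = {i: 0 for i in range(n)}

found = [p for p in itertools.permutations(range(n))
         if all(p[b[x]] == t[p[x]] for x in range(n))]
print(len(found))   # 0: no encoding makes this bar run the program
```

That is the sense in which a program is its behavior, not its memory dump.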


I think you’re skipping over one or two crucial steps there; let me try to put my finger on it.

The key feature of a universal Turing machine is that it can perform any computation, if fed the right program.

In your thought experiment where you’re mapping the internal state of an iron bar, to claim it’s a valid mapping to a Turing machine, it must be possible to feed in any program, after the mapping has been defined. Otherwise it’s not a universal machine.

You are mapping a well-defined structure into essentially a block of random numbers -- a one-time pad. You can only perform “any” computation that way if you map the full execution trace of the machine onto the random block. But that way, the computation occurs before the mapping is defined. That’s what I meant by “the computation is really happening in the mapping itself”.

The homomorphic encryption scheme is different, because while it looks random, it’s generated by a well-defined and reversible process. So I can use a mapping of bounded complexity to inject any program, and in principle to step through its execution.


>Continuing this, we can imagine an enormous crowd of observers staring at this iron bar, each using a unique encoding of atomic states to bits. With a sufficiently large number of observers, the encodings of all possible Turing machines of the given size are represented.

Yes, but only momentarily. It does not follow that there’s an encoding where the iron bar evolves as if running a given Turing machine.


> The physical process is simply that the atoms in the bar of iron start in one Turing machine state, obey the laws of physics, and end up in another Turing machine state.

But this is also true of any other physical computer.

It sounds like your objection is that this is not a computer because it's not reproducible --- we just got lucky that the physical states mapped onto the appropriate Turing machine states one time. But then how deterministic does the physical system need to be to be considered a computer? It can't be 100% because no real-world physical computer is 100% deterministic. Is it a computer if the mapping is correct 50% of the time? 90%? 1%? I can't see any principled reason to choose any particular value.


> "I am not assuming anything other that what you wrote"

Yes you are:

> "it works just as well to consider humans to be chat programs which map strings to strings."

Before proceeding, I will assume you're familiar with the distinction between countable and uncountable infinities, and the Cantor Diagonalization proof [0]. This argument is simply a special case of that.

The set of mappings from strings to strings is uncountably infinite (it contains the power set of a countably infinite set). All possible machine programs form a countable infinity (since any given source program is finite). The argument "give me an output other than what any of these programs would give" mimics the diagonalization step -- it demonstrates that, whatever is in the infinite list of possible AI programs, you can construct string mappings that none of the AI programs would have constructed.

Yet a strong AI should be able to construct any string mapping whatsoever. Thus, "strong AI" and "running finite source code" are necessarily incompatible. (We do not know whether human programming is enumerable; thus, we cannot draw the analogy you attempted to draw.)

[0] http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument
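The diagonalization step can be sketched concretely. This toy uses a finite stand-in for the enumeration of programs (the lambdas and inputs are illustrative, not part of the argument above): the constructed mapping differs from program i on the i-th input, so it appears nowhere in the list.

```python
# A finite stand-in for "the enumeration of all string-to-string
# programs": three total functions.
programs = [
    lambda s: s,          # identity
    lambda s: s.upper(),
    lambda s: s[::-1],
]
inputs = ["a", "b", "c"]

def diagonal(s):
    i = inputs.index(s)
    # Differ from program i on input i by appending a character --
    # the diagonal construction in miniature.
    return programs[i](s) + "!"

print([diagonal(s) != programs[i](s) for i, s in enumerate(inputs)])
```

With a genuinely infinite enumeration the same construction yields a mapping no listed program computes, which is the contradiction the comment relies on.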


In the case of the hot iron bar, obviously there is no deterministic process that takes us from one state to the next. We just happen to get lucky and see that it randomly fluctuates from one state to the next state correctly.

The trouble is that formal computation theory does not take into consideration the physical process by which the computer moves from one state to the next. That's what makes it so general. We can make computers out of transistors or vacuum tubes or Legos equally well, and be assured that, whatever we make our computer out of, the computational complexity of bubble sort is O(n^2).

Requiring specific physical processes for computation would invalidate computation theory in a very fundamental kind of way.
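The substrate-independence point in code: bubble sort's quadratic comparison count is a property of the algorithm, not of transistors, tubes, or Legos. (A minimal instrumented version, for illustration.)

```python
# Bubble sort with a comparison counter. The worst-case count,
# n*(n-1)/2, holds on any substrate that carries out these steps.
def bubble_sort(a):
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

sorted_a, c = bubble_sort([5, 1, 4, 2, 3])
print(sorted_a, c)   # [1, 2, 3, 4, 5] with 5*4/2 = 10 comparisons
```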


Yes, I'm restating (1). That's why my remark was introduced with the phrase "The argument was [...]". In particular, I was trying to rationalise the dumb "iron bar" example as a justification for (1).

I don't think you addressed anything; you just flatly denied (1), without offering any reasoning. In particular, you don't seem to have tried to understand the author's attempt to justify (1) using iron bars.

For my part, I'm not sure about (1). As far as I can see, there's a steady flow of people discovering that some X is a Turing-complete language, when X wasn't actually even meant to be a language at all. So it seems that some configuration of stuff is a computer IFF (a) it is capable of doing computation, and (b) someone is willing to consider it as a computer. Four odd socks could be considered a computer, by someone who was using them as a 4-bit adder.


First of all, allow me to apologize for earlier summarily dismissing your argument (too quick a mind). I did that because I thought you did not have an airtight proof. I still think you do not, but I now see that it is a very good argument, which I dismissed out of hand while misconstruing your meaning of enumeration. Sorry.

Now let me see if I get you right. You are saying there exists a [mapping] program that a human could construct and whose output a human could observe, but that this AI program could not produce, because it would be giving something other than an output from any possible program. Which is a contradiction. Makes sense.

Furthermore you are saying that given this same program, a human could know the output and give another response not in the list.

The guarantee that it is not an output from possible programs part is an assumption.

It is also an assumption to take for granted that, even though the set of possible programs is countable, it is a practically enumerable infinity.

It is also an assumption that there is no 1:1 mapping from the list of programs to something that could perfectly emulate a human.

It is also an assumption to believe that a human could not just construct such a program but also construct one that gave an output.*

It also does not follow that, just because the AI could theoretically be stymied by this conundrum, the set of its computations would not include ones that utterly outperformed humanity in all cognitively useful tasks.

*There is also the problem that not all those programs will halt. So if a human can always give an output then it is an exotic entity at least equal to a hypercomputer. I rank that with time travel in terms of possibility.


Agreed. A Turing machine operates in a discrete countable state space, whereas the human brain requires real numbers for a complete state description. Cantor showed (with the diagonal argument) that real numbers are uncountable - so there are (infinitely many!) real numbers that are unreachable using a Turing machine. My suspicion is that AGI, consciousness, perhaps even the supernatural soul you allude to, can be found only in this unreachable state space. There exists a Cardinality Barrier!

Something like a Turing machine, with its finite states, deterministic transitions, and simple tape seems so mechanical, so physical. So predictable.

Sure, there's the halting problem, but that relies on a paradox.

Surely something as artificial as self-referential statements would seem a bit pointless.

And yet, with the help of that principle, it turns out we can write simple, mechanical programs where, if the input is below some N, we can calculate the answer, and if it is at or above N, we can't figure out what the program will do (using standard mathematical reasoning).

For some N we can get creative, but for a yet bigger N, we may well find ourselves unable to be creative enough to work out whether we could ever work out the answer.

https://en.wikipedia.org/wiki/Busy_beaver#Non-computability

"In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which S(n) is unprovable in ZFC. To do so they constructed a 7910-state[2] Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property).[3][4] This was later reduced to 1919 states, with the dependency on the stationary Ramsey property eliminated.[5][6]"


Well, the proof is perfectly valid, as on some abstract level a computer == Turing machine == something you can reason about. If you can prove that your program works in the required way, then your proof holds.

However, brute force solutions are ugly and shed little (if any) new light on the problem, and in the words of G.H.Hardy - "There is no permanent place in the world for ugly mathematics". With things like this I just can't help but think that an elegant and enlightening proof will unexpectedly pop up some time in the distant future when we're all off researching some sort of new-age hypermaths or something...

... well, I suppose a man's got to dream, right?


That doesn't sound entirely trivial? Like a physical quine.

My favorite new tidbit is that if you believe Turing machines can result in consciousness, you believe that certain integers are conscious when executed (lambda calculus).

Sure, but their states aren't quite what you'd consider to be states for an abstract Turing machine. There's a CPU state that also happens to set some pixels on a display to a certain value, for example.

1) They aren't. They don't need to be, and they cannot be.

Simulate the first step of the first machine, then the first step of the second machine, then the second step of the first machine, then the first step of the third machine, then the second of the second, then the third of the first, then the first of the fourth, and so on. Every step of every program/machine is eventually reached.

For the same reason that the pairs of integers can be put into bijection with the integers.
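That interleaving schedule can be written down directly (a minimal sketch; the generator name is mine): enumerate (machine, step) pairs along diagonals, exactly as in the bijection between N x N and N.

```python
import itertools

# Dovetailing: visit (machine, step) pairs diagonal by diagonal, so
# every step of every machine is eventually reached, even though there
# are infinitely many machines.
def dovetail():
    diag = 0
    while True:
        for machine in range(diag + 1):
            yield (machine, diag - machine)
        diag += 1

schedule = list(itertools.islice(dovetail(), 10))
print(schedule)
```

The first few entries are (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ...: machine k's step s is reached at diagonal k+s, so nothing is postponed forever.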

2) That is not a problem. The first step of that is not to run a step of a program it simulates. "Simulating a single step of a given machine" is not itself a single step; it is many steps. As such, there is no recursive problem like the one you describe. Simulating a single step of a machine always finishes, regardless of what that step "represents". Even if the step being simulated is part of some simulator machine, simulating it is still done in the same way, and doesn't take longer.

3) Any programs for any computer we have can be simulated by a Turing machine. They are quite general.

You might claim that some physical process is not computable, but that has not been demonstrated for any physical process (well, except for maybe consciousness, but that is contentious), and most ideas of how physics works are computable, so there would be a significant burden of proof.

So, it seems like any program we can run can be simulated by a Turing machine.


>But computing machines aren't unpredictable. They are very predictable.

Are they, though? A 5-state Turing machine might seem trivial, but there exist 5-state machines for which we don't know whether they ever halt [1]. Their behavior is, for all practical purposes, unpredictable to us. There are also minimalistic cellular automata that produce completely unpredictable patterns. [2]

[1] https://en.wikipedia.org/wiki/Busy_beaver#Known_values_for_....

[2] https://en.wikipedia.org/wiki/Rule_110
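Rule 110, linked above, makes the point vividly: the update rule fits in one line, yet the automaton is Turing-complete. A minimal simulation on a small ring of cells (the ring size and starting pattern are arbitrary choices for illustration):

```python
# Rule 110: the new value of each cell is the bit of 110 indexed by the
# 3-cell neighborhood (left*4 + center*2 + right), with wraparound.
RULE = 110

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 15 + [1]          # a single live cell on a ring of 16
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Despite the rule being fully deterministic and tiny, no general shortcut is known for predicting what such systems do except running them.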

