I've never been able to see why one would give the lookup table any credence. It's just kicking the can down the road one step in terms of abstraction.
The second you assert that the lookup table can pass a Turing test with, e.g., a gigabyte of exchange, the table containing every possible one-gigabyte exchange becomes your state space and program, the page number becomes the state, and you've got just as much complexity as any other simulation with a gigabyte of state. You haven't changed the parameters of the problem at all.
Hypothesizing about the lookup table is irrelevant, because no such thing can exist in the real universe. Not even in theory. Moreover, your lookup table contains some amount of information in it: either it was generated by a relatively small polynomial process, in which case, as large as the lookup table appears to be, it actually isn't, and it is indistinguishable from simply using the program that generated it; or it really does contain an exponentially large amount of information, in which case hypothesizing an exponentially large source of information for the mere purpose of passing a Turing test is a bizarre philosophical step to take. Where is this exponentially large source of information?
Recall that in information theory, being a bit sloppy but essentially accurate with the terms, the amount of information something has can be expressed as the smallest possible encoding of something. The entire Mandelbrot set, as gloriously complicated as it may look, actually contains virtually no information, a mere handful of bits, because it's all the result of a very simple equation. No matter how gloriously complicated your enormous hash table may look, if it was generated with some humanly-feasible program and nigh-infinite amounts of time, the information content of the entire table, no matter how large, is nothing more than the size of the program and perhaps a specification of how long you let it run.
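The Mandelbrot point can be made concrete: a membership test for the whole set is only a few lines of code, so the set's description length is tiny no matter how intricate the picture looks. A minimal sketch (the escape-time test with a 100-iteration cutoff is my arbitrary choice, not anything from the thread):

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: c is in the set if z -> z*z + c stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # once |z| > 2 the orbit provably escapes
            return False
    return True

# The "gloriously complicated" set is generated entirely by the lines above:
print(in_mandelbrot(0))      # True: the orbit of 0 stays at 0 forever
print(in_mandelbrot(1))      # False: 0, 1, 2, 5, 26, ... escapes
```

However many points you render, the information content is bounded by the size of this little program plus the rendering parameters.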
Basically, the whole "big lookup table" has to have some sort of characteristic to it. Either it was created by a program, in which case the program itself could pass the Turing test, or it is somehow irreducible to a program's output, in which case, in your zealous effort to swat a fly, you've vaporized the entire Earth: you won't grant the possibility that a machine might pass the test (or be sentient, or whatever), but you will grant the existence of an exponentially complicated source of information? That's only a gnat's whisker away from asking "Well, what if I use magic to create a philosophical zombie" (I'm referencing the specific concept of a philosophical zombie here; you can look it up if you like) "that looks like it's passing the test but really isn't, what then?" Well, I don't know; when you're willing to invoke magic in your defense I'll concede I haven't got much of a response, but you probably shouldn't call that a victory.
The lookup table argument only makes sense nonconstructively, if you merely assert its existence but then don't allow anyone to ask any question about where it came from, or what properties it has.
Any finite computational system can be implemented as a lookup table in the manner that Searle suggests. But that's not essential to the argument. You can imagine that the man in the room is following instructions for simulating a (finite approximation of) a Turing machine, or any other computational device that you like.
Why not? I realise there are non-Turing complete models of computation that cannot be described by a lookup table (e.g. coinduction), but are they physically realisable?
Scott Aaronson (if memory serves right) had an interesting take on this. He framed his thought in terms of the Turing test, but the argument would apply equally well to the mapped iron bar:
In theory, we could pass any Turing test of finite duration (e.g., an hour or less), run in a chat room with finite bandwidth, with a giant lookup table. Just look up the entire past of the conversation to see what answer should be given. The lookup can be implemented trivially on any Turing machine (and doesn't even need the machine's full power).
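The lookup mechanism is almost embarrassingly simple to sketch. In this toy version (the entries are invented placeholders), the key is the entire transcript so far, which is exactly why the table's size blows up exponentially in the allowed conversation length:

```python
# Hypothetical fragment of the giant lookup table: one entry for every
# possible finite conversation history up to the length bound.
table = {
    (): "Hello!",
    ("Hello!", "How are you?"): "Fine, thanks.",
    # ...exponentially many more entries would go here...
}

def reply(history):
    # The "program" is a single lookup; all the complexity lives in the table.
    return table[tuple(history)]

print(reply([]))                          # "Hello!"
print(reply(["Hello!", "How are you?"]))  # "Fine, thanks."
```

The point of the thread is what to make of the fact that `reply` is trivial while `table` is astronomically large.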
Now there are multiple directions you could take this. Here's Scott with one of them:
> Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” (Indeed, Searle argues that no physical system can understand anything “purely by virtue of” the computations that it implements.) In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.

> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.
Your points are all good. But they have nothing to do with meaning, or with semantics.
Cellular automata are driven by lookup tables, and Matthew Cook proved (in work publicized by Wolfram) that some cellular automaton rules, such as Rule 110, are Turing complete. https://en.wikipedia.org/wiki/Rule_110
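The "lookup table" nature of a cellular automaton is literal: Rule 110's entire update rule is eight table entries, one per three-cell neighborhood. A minimal sketch (zero boundary cells and the starting row are my choices for illustration):

```python
# Rule 110 as a literal lookup table: the number 110 is 01101110 in binary,
# read off against the neighborhoods 111, 110, ..., 000.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # Fixed zero boundary; every cell updates by table lookup alone.
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i:i + 3])] for i in range(len(cells))]

row = [0, 0, 0, 1, 0, 0, 0]
row = step(row)
print(row)  # [0, 0, 1, 1, 0, 0, 0]
```

Despite being nothing but this eight-entry table applied in a loop, Rule 110 supports universal computation (given a suitable infinite background pattern).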
My point was merely about the equivalence of computational mechanisms, not about lookup tables per se. And by corollary, that the computational complexity is equivalent regardless of the computational mechanism. (I think we agree on this point.)
Searle's Room is just a device to explain that what computers are doing is syntactic.
Searle would posit that passing a Turing test in any amount of time is irrelevant to determining consciousness. It's a story we hypothetically use to "measure" intelligence, but it's only a story. It's not a valid test for sentience, and passing such a test would not confer sentience. Sentience is an entirely different question.
What would be more interesting is if a computer intentionally failed Turing tests because it thinks Turing tests are stupid.
We could test humans with Turing tests to determine their "level" of intelligence. But if you put teenage boys in a room and made them do Turing tests, they would pretty quickly come up with remarkable ways to fail those tests, chiefly by not doing them! How could you write a program, or create a system, that intentionally fails Turing tests? Or a program that avoids taking Turing tests... because it thinks they are stupid?
Could you write a program that knows when it fails? (It's a pretty long-standing problem...)
I like the speed (or space-bound) question you ask, because it is not a thought experiment to me. It's an actual, real problem I face! At what point does the speed of the underlying computing become so interminably slow that we say something is no longer sentient? In my work, I don't think there is any such slow speed. The slowness simply obscures sentience from our observation.
In the excellent example: "I think it is reasonable to believe that after enough clicks the entity is not sentient..."
How would you distinguish between the "loss" of sentience from reduced complexity, from the loss of your ability to perceive sentience from the reduced complexity? The question is, how could you tell which thing happened? If you don't observe sentience anymore, does that mean it's not there? (Locked in syndrome is similar to this problem in human beings.) And if you have a process to determine sentience, how do you prove your process is correct in all cases?
I do not think of these as rhetorical questions. I actually would like a decent way to approach these problems, because I can see that I will be hitting them if the model I am using works to produce homeostatic metabolic like behavior with code.
Computation is a subset of thinking. There is lots of thinking that is not computation. Errors are a classic example. The apprehension of an error is a representational process, and computation is a representational process. We may do a perfectly correct computation, but then realize the computation itself is the error. (As a programmer learns, it is exactly these realizations that lead to higher levels of abstraction and optimization.)
Searle's point is that a lookup table, or any other computational mechanism, cannot directly produce sentience, because its behavior is purely syntactic. "Syntax is not semantics and simulation is not duplication." https://www.youtube.com/watch?v=rHKwIYsPXLg
Aaronson's points are very well made, but none of them deal with the problem of semantics or meaning. Because they don't deal with what representation is and how representation itself works. All of the complexity work is about a sub-class of representations that operate with certain constraints. They are not about how representation itself works.
> "suppose there is this big lookup table that physics logically excludes from possibility."... That is the point!
Even if there were such a lookup table, it would not get us to sentience, because its operations are syntactic. It is functional, but not meaningful. You are correct that it could never work in practice, but it could also never work even in principle. That's why I figured Aaronson was poking fun at those critiquing Searle, because it would ALSO not work in practice.
Aaronson writes, "I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient."
This statement supports Searle's argument; it doesn't detract from it. Hypothetically, an instantaneous lookup in an exponential-sized table would not be sentient, but an instantaneous lookup in a polynomially-bounded table would be? On what basis, then, is sentience conferred, if the bound is the only difference between the lookup tables? Introducing physical constraints doesn't change the hypothetical problem.
Searle and Aaronson are just talking about different things.
If Aaronson was actually refuting Searle, what is the refutation he makes?
Aaronson never says something like "Computers will be sentient by doing x, y, and z, and this refutes Searle." The arguments against Searle (which I take Aaronson as poking at) are based in computation. So... show me the code! Nobody has written code to do semantic processing, because they don't know how. It could be that no one knows how because it's impossible to do semantic processing with computation directly.
That is my view from repeated failures: there simply is no path to semantics from symbolic computation. And if there is, it's strange voodoo!
> simulating complex behavior requires at least 10mb of data
The author has not shown that that much is required. At most, the author has shown that it has been done with that much. Seeing as having only thousands of states is apparently enough to make a Turing machine's behavior independent of ZFC (http://www.scottaaronson.com/blog/?p=2725), I really doubt this "10 megabyte" lower bound.
Indeed. IIRC Searle’s point is that any finite approximation of a Turing Machine (at least if defined over finite inputs) can in principle be replaced by a ginormous look up table. But if it matters, the person in the Chinese room can of course make notes on scraps of paper and implement a system more like a Turing machine.
It’s pretty obvious that it can not correctly represent arbitrary transition tables: Just construct a Turing Machine that has a larger transition table than can be encoded in the language model.
On the other hand this is just a theoretical argument. Every existing computer is also not a Turing Machine as it has finite memory.
Hi Kranar. See my other post with the reference to linear bounded automata. I think you're right on all points.
Also, I agree that "simulates" is probably the wrong word. There would be a one-to-one mapping between FSM states and restricted-TM (or Linear Bounded Automaton) states and it would "simulate" the TM by stepping through mapped states until it reached a terminal state, but the number of states required and the transition table would be ridiculous. I don't think it would exactly be a look-up table, but it's close. (i.e., it would be O(2^N), for a binary alphabet, to construct the transition table, but looking up the terminal state for each input state would also be O(2^N)).
But there are programs that do inferences - and programs work on Turing machines which are just lookup tables (with an infinite tape). Lookup tables can do inferences.
http://www.scottaaronson.com/papers/philos.pdf , section 4. I consider that to be essentially the final word on the topic; it is an objective, mathematically, philosophically, physically meaningful distinction drawn between a lookup table and a computing machine.
You're telling me that a computer cannot simulate the steps of a Turing machine, one at a time? It can't store the current state of the Turing machine, and the rules, in memory, and use the rules to get from one state to the next?
Are you really saying that a computer with too little memory can't do it, or something like that? Because it seems blatantly obvious that a computer can simulate a Turing machine.
Every computable function can be represented by a (possibly infinite)¹ lookup table.
Computer programs can only compute computable functions. Therefore any computer program is (in theory) equivalent to a table lookup.
¹ For finite inputs, the lookup table can be finite, and for infinite inputs, the lookup table can be infinite but still countable, as the set of computable functions is countable.
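The finite-input case can be made concrete: precompute the output for every input once, and the table is extensionally identical to the program. A minimal sketch (the function `f` and the 256-element domain are arbitrary stand-ins):

```python
def f(n):
    return n * n + 1          # stand-in for any computable function

DOMAIN = range(256)           # finite input domain => finite table
TABLE = {n: f(n) for n in DOMAIN}

# The table and the program agree on every input in the domain,
# so as input/output behavior they are the same function.
assert all(TABLE[n] == f(n) for n in DOMAIN)
print(TABLE[7])  # 50
```

The interesting question, as the thread notes, is that the table's size can be exponential in the input length even when the program itself is tiny.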
Wouldn't a DFA for a computer have to have 2^n states for n bits of memory, so its size would be exponential in the memory size of the computer? Even for a pedestrian 640K of memory, that's something like 10^1,600,000 states. Sounds a lot like trying to simulate a non-deterministic Turing machine on a deterministic machine: the state space explodes exponentially, making it intractable.
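The order of magnitude is easy to check directly (the 640K figure is taken from the comment above; a DFA tracking every memory configuration needs 2^n states for n bits):

```python
import math

n_bits = 640 * 1024 * 8   # 640 KB of memory = 5,242,880 bits
# Number of decimal digits in 2**n_bits, computed via logarithms
# (the number itself is far too large to print).
digits = math.floor(n_bits * math.log10(2)) + 1
print(digits)  # about 1.58 million digits, i.e. roughly 10**1,600,000 states
```

Exponential in the memory size, as the comment says, though "only" around 10^1,600,000 rather than 10^10^7.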
I might be misunderstanding something, but I don't think an infinite table makes sense? Each machine has a finite set of states and operates on a finite alphabet, so the size of the table will always be at most the product of those two numbers.
This reminds me of the classic problem in computation, where the simplest form of computation, the lookup table, input -> output, is limited to a finite domain. Turing modified the computation to have a finite internal state and infinite external environment (tape), so it becomes a transition function (state, stimulus) -> (new state, response), applied recursively in a feedback loop, allowing it to operate on infinite domains.
Famously, a simple lookup table for the transition function then suffices to compute any computable function.
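That split between a finite table and an unbounded tape is easy to see in code. A minimal sketch (the machine, a one-state bit-flipper, and the sparse-tape encoding are invented for illustration):

```python
# The whole "program" is this finite lookup table:
# (state, symbol) -> (new state, symbol to write, head movement)
DELTA = {
    ("flip", 0):   ("flip", 1, +1),
    ("flip", 1):   ("flip", 0, +1),
    ("flip", "_"): ("halt", "_", 0),   # first blank cell: halt
}

def run(tape):
    tape = dict(enumerate(tape))       # sparse tape; missing cells are blank
    state, head = "flip", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = DELTA[(state, symbol)]   # pure table lookup
        tape[head] = write
        head += move
    return [tape[i] for i in range(len(tape) - 1)]    # drop trailing blank

print(run([1, 0, 1, 1]))  # [0, 1, 0, 0]
```

The table `DELTA` never grows, yet the loop handles tapes of any length; that feedback loop is exactly what lifts a finite lookup table to an infinite domain.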
In the Turing machine formalism that was taught at my university(1), the set of symbols that can be stored in a cell must be finite, but can otherwise be anything needed to solve the problem at hand. For each state in the associated state machine, there's effectively a lookup table that maps the current tape symbol to some other symbol, together with an instruction about how to move the tape head for the next step.
This means, in particular, that you can annotate symbols on the tape by having different variants of each input symbol, so long as the number of variants is strictly bounded.
(1) There are many equally-powerful variants, so there’s no guarantee that others will be working with the same definition.