Your points are all good. But they have nothing to do with meaning, or with semantics.
Cellular automata are lookup tables, and Wolfram and others proved some cellular automata rules are Turing complete computers. https://en.wikipedia.org/wiki/Rule_110
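To make "cellular automata are lookup tables" concrete, here's a minimal sketch (Python, purely illustrative): Rule 110's entire update rule is an eight-entry table mapping each three-cell neighborhood to the next value of the middle cell.

```python
# Illustrative sketch: Rule 110's update rule is literally an 8-entry lookup
# table mapping each 3-cell neighborhood to the next value of the middle cell.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """One synchronous update of a finite row (cells beyond the edges are 0)."""
    padded = [0] + list(cells) + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

# Run a few generations from a single live cell.
row = [0] * 30 + [1]
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```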
My point was merely about the equivalence of computational mechanisms, not about lookup tables per se. And by corollary, that the computational complexity is equivalent regardless of the computational mechanism. (I think we agree on this point.)
Searle's Room is just a way of explaining that what computers are doing is syntactic.
Searle would posit that passing a Turing test in any amount of time is irrelevant to determining consciousness. It's a story we hypothetically use to "measure" intelligence, but it's only a story. It's not a valid test for sentience, and passing such a test would not confer sentience. Sentience is an entirely different question.
What would be more interesting is if a computer intentionally failed Turing tests because it thinks Turing tests are stupid.
We could test humans with Turing tests to determine their "level" of intelligence. But if you put teenage boys in a room and made them do Turing tests, pretty quickly they would come up with remarkable ways to fail those tests, chiefly by not doing them! How could you write a program or create a system that intentionally fails Turing tests? Or a program which avoids taking Turing tests... because it thinks they are stupid?
Could you write a program that knows when it fails? (It's a pretty long-standing problem...)
I like the speed (or space-bound) question you ask because it is not a thought experiment to me. It's an actual, real problem I face! At what point does the speed of the underlying computing become so interminably slow that we say something is no longer sentient? In my work, I don't think there is any such slow speed. The slowness simply obscures sentience from our observation.
In the excellent example: "I think it is reasonable to believe that after enough clicks the entity is not sentient..."
How would you distinguish the "loss" of sentience due to reduced complexity from the loss of your ability to perceive sentience due to the reduced complexity? The question is, how could you tell which thing happened? If you don't observe sentience anymore, does that mean it's not there? (Locked-in syndrome is a similar problem in human beings.) And if you have a process to determine sentience, how do you prove your process is correct in all cases?
I do not think of these as rhetorical questions. I actually would like a decent way to approach these problems, because I can see that I will be hitting them if the model I am using works to produce homeostatic, metabolic-like behavior with code.
Computation is a subset of thinking. There is lots of thinking that is not computation. Errors are a classic example. The apprehension of an error is a representational process, and computation is a representational process. We may do a perfectly correct computation, but then realize the computation itself is the error. (As a programmer learns, it is exactly these realizations that lead to higher levels of abstraction and optimization.)
Searle's point is that a lookup table, or any other computational mechanism, cannot directly produce sentience because its behavior is purely syntactic. "Syntax is not semantics and simulation is not duplication." https://www.youtube.com/watch?v=rHKwIYsPXLg
Aaronson's points are very well made, but none of them deal with the problem of semantics or meaning. Because they don't deal with what representation is and how representation itself works. All of the complexity work is about a sub-class of representations that operate with certain constraints. They are not about how representation itself works.
> "suppose there is this big lookup table that physics logically excludes from possibility."... That is the point!
Even if there were such a lookup table, it would not get us to sentience, because its operations are syntactic. It is functional, but not meaningful. You are correct that it could never work in practice, but it could also never work under absolute conditions. That's why I figured Aaronson was poking fun at those critiquing Searle, because it would ALSO not work in practice.
Aaronson writes, "I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient."
This statement supports Searle's argument; it doesn't detract from it. Hypothetically, an instantaneous lookup in an exponential table system would not be sentient, but an instantaneous lookup in a polynomially bounded table system would be? On what basis, then, is sentience conferred, if the bound is the only difference between the lookup tables? Introducing the physical constraints doesn't change the hypothetical problem.
Searle and Aaronson are just talking about different things.
If Aaronson were actually refuting Searle, what is the refutation he makes?
Aaronson never says something like "Computers will be sentient by doing x, y, and z, and this refutes Searle." The arguments against Searle (which I take Aaronson as poking at) are based in computation. So... show me the code! Nobody has written code to do semantic processing, because they don't know how. It could be that no one knows how because it's impossible to do semantic processing with computation - directly.
That is my view from repeated failures: there simply is no path to semantics from symbolic computation. And if there is, it's strange voodoo!
Scott Aaronson (if memory serves right) had an interesting take on this. He framed his thought in terms of the Turing test, but the argument would apply equally well to the mapped iron bar:
In theory, we could pass any Turing test of finite duration (e.g., an hour or less), run in a chat room with finite bandwidth, using a giant lookup table. Just look up the entire past of the conversation to see what answer should be given. The lookup can be implemented trivially on any Turing machine (and doesn't even need the machine's full power).
Now there are multiple directions you could take this. Here's Scott with one of them:
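As a toy illustration of that idea (hypothetical names, and a table a few entries long rather than the astronomically large real one), the whole "program" is just a dictionary keyed by the conversation so far:

```python
# Toy version of the giant-lookup-table responder: the reply is keyed by the
# entire conversation so far. A real table covering an hour of finite-bandwidth
# chat would be astronomically large; this dict is only a stand-in.
LOOKUP = {
    (): "Hello! How can I help?",
    ("Hello! How can I help?", "Are you a computer?"): "Why do you ask?",
}

def reply(history):
    """Return the canned answer stored for this exact conversation prefix."""
    return LOOKUP.get(tuple(history), "Tell me more.")

print(reply([]))                                                  # opening line
print(reply(["Hello! How can I help?", "Are you a computer?"]))   # "Why do you ask?"
```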
> Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” (Indeed, Searle argues that no physical system can understand anything “purely by virtue of” the computations that it implements.) In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.
> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.
Any finite computational system can be implemented as a lookup table in the manner that Searle suggests. But that's not essential to the argument. You can imagine that the man in the room is following instructions for simulating (a finite approximation of) a Turing machine, or any other computational device that you like.
I've never been able to see why one would give the lookup table any credence. It's just kicking the can down the road one step in terms of abstraction.
The second you assert that the lookup table can pass a Turing test with, e.g., a gigabyte of exchange, then the table, with every single one-gigabyte number in it, becomes your state space and program, the page number becomes the state, and you've got just as much complexity as any other simulation with one gigabyte of state. You haven't changed the parameters of the problem at all.
Indeed. IIRC Searle’s point is that any finite approximation of a Turing machine (at least if defined over finite inputs) can in principle be replaced by a ginormous lookup table. But if it matters, the person in the Chinese room can of course make notes on scraps of paper and implement a system more like a Turing machine.
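A rough back-of-the-envelope sketch of that state-space point, assuming a one-gigabyte exchange: the table needs one entry per possible gigabyte-sized history.

```python
import math

# One entry per possible 1 GB conversation history: 256**(10**9) entries,
# i.e. the same state space as any other simulation holding a gigabyte of state.
gigabyte = 10**9                                 # bytes of exchange
digits = gigabyte * math.log10(256)              # decimal digits in 256**(10**9)
print(f"roughly 10^{digits:.0f} table entries")  # ~10^2408239965, vs ~10^80 atoms
```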
Very interesting. Cognition and Turing equivalence are not things I automatically assume to be the same, so the paper's title is a question I hadn't stopped to consider previously.
"Does it tell us anything about the universality of computation as a property of the universe?"
It tells us that computation is counter-intuitively easy to access. Our intuition says that computers are hard; look how hard they are for us to build, after all! But instead the reality is that if you try to exclude computation from a non-trivial system, it's actually fairly hard.
(I think part of this intuition may also be based on our intuitive experience mostly being with von Neumann machines. Contra some other contrarians, I think we work with them for a good reason; as wild as they can be, they are much tamer than many other systems capable of universal computation. Compare with trying to work in Rule 110 directly as our primary computation method, or cellular automata in general. Many of these accidental TC systems are very strange and would be hard to work with. Getting a universal computation system nice and friendly to humans accidentally is pretty hard. Getting a universal computation system without that qualification is like falling off a bike.)
https://www.gwern.net/Turing-complete is a good overview of this concept and includes many interesting consequences. I also particularly recommend the section titled "Security Implications".
I think you could make the same argument for lots of things computers do. Could Grand Theft Auto 4, including the character AI and per-pixel 3D rendering, be implemented by people following instructions on cards? Yes, because Turing machines yadda yadda. But it's inconceivable to non-programmers. The Chinese room argument is convincing for the same reason: doing something AI-like requires billions of steps and non-programmers can't imagine building up something that complex from primitive operations.
That’s a solid counter-argument. I assume you’re saying we don’t know of a single physical process that’s not a Turing machine because all of the models we have built to simulate them are Turing computable? That’s a strong point. Maybe I should readjust my prior on this. What of the counter that we have only very primitive models of how the world works? After all, we can simulate everything from how the smallest atoms work up to more complicated chemical interactions. Still, as it gets to biology, our ability to simulate breaks down rapidly. I don’t mean at a performance level, but at a “we’re waaaaaaay off” level. Drug discovery is one I’m thinking of. Or predicting someone’s facial features from their DNA (that last one always feels extremely dubious but news-friendly). I guess that’s too small a gap for non-computability to live in?
Has it been investigated on the theoretical side whether a network of Turing machines (intelligent or not), all interacting, is itself Turing computable? Would that simulation itself be an instance of hypercomputation? I would think so, but I only had cursory courses on this in undergrad and I never kept up my math skills enough to follow the research with anything more than a popular-math understanding of it.
My point is that, from a computational complexity standpoint, your computer is not a Turing machine, just a very very very large FSM. But it's still very useful to think of it as a Turing machine.
Here’s something I don’t get about Wolfram and his insistence on a computation-like underbelly of the universe.
Computation is built on the idea of the Turing machine. But what is reading the tape in Turing’s analogy? A human! The tape and Turing machine are designed so that every human agrees upon its formal validity.
It’s not a statement about mental states or computation-based “reality”. More than all of those, and first of all, it is about how society can use social rule-following and basic step-by-step processes (using language) to create formal systems.
A human reading the tape or even a human with pencil and paper. That is the main analogy.
So why do so many like Wolfram think computation is reality? It seems from the get go he is headed down the wrong track.
You and I are computers, and computers can think, according to Turing. But TMs came about to develop formal systems.
To me Wolfram and the Churchlands seem to have completely unjustified claims.
As soon as you add these physical constraints on what counts as a 'computer', you're no longer talking about computers as specified by Turing, nor computer science -- which is better called Discrete Mathematics.
You're conflating the lay sense of the term, meaning 'that device that I use', with the technical sense. You cannot attribute properties of one to the other. This is the heart of this AI pseudoscience business.
All circles are topologically equivalent to all squares. That does not mean a square table is 'equivalent' to a circular table in any relevant sense.
If you want to start listing physical constraints: the physical state can be causally set deterministically, the physical state evolves causally, the input and output states are measurable, and so on -- then you end up with a 'physical computer'.
Fine, in doing so you can exclude the air. But you cannot exclude systems incapable of transferring power to devices (ie., useless systems).
So now you add that: a device which, through its operation, powers other devices. You keep doing that and you end up with 'electrical computers' or a very close set of physical objects with physical properties.
By the time you've enumerated all these physical properties, none of your formal magical 'substrates don't matter' things apply. Indeed, you've just shown how radically the properties of the substrate do apply -- so many properties end up being required.
Now, as far as brains go -- the properties of 'physical computers' do not apply to them: their input/output states may be unmeasurable (eg., if QM is involved); they are not programmable (ie., there is no deterministic way to set their output state); they do not evolve in a causally deterministic way (sensitive to biochemical variation, randomness, etc.).
Either you speak in terms of formalism, in which case you're speaking in the inapplicable, non-explanatory toys of discrete mathematicians; or you start trying to explain actual physical computers and end up excluding the brain.
All this is to avoid the overwhelmingly obvious point: the study of biological organisms is biology.
I think you're missing the point. Unlike in the case of an ideal Turing machine, nothing in this universe transitions through states without energy. Computers rely on the power grid to run their processor cycles, even though I doubt that you would consider computers to not be Turing complete.
This is not that different from the Excel example: the system displays Turing completeness if supplied a clock as input (a person clicking a button in the spreadsheet, for instance). The human is not making any decisions in this case - the logic manifests itself entirely from within the system.
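A minimal sketch of that "human as clock" point, with a hypothetical tick() function standing in for the spreadsheet's formulas: the click supplies timing only, and the next state follows entirely from rules fixed inside the system.

```python
# The human's click is only a clock pulse; what the next state is comes entirely
# from the rules baked into the system. Here a trivial 4-bit counter stands in
# for the spreadsheet's formulas (tick() is a hypothetical stand-in, not Excel).
def tick(state):
    """Advance the machine by one clock cycle; the transition logic is fixed."""
    return (state + 1) % 16

state = 0
for click in range(5):                     # the person clicking the button
    state = tick(state)
    print(f"after click {click + 1}: state = {state:04b}")
```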
While I agree that comparing a human brain or mind to a Turing machine is not helpful, the objection you make here is less significant than it first appears.
There is a subtle difference between unbounded recursion, which a Turing machine is taken to be capable of, and the actual ability to achieve infinite recursion. In no application of a Turing machine, either as an actual physical device or as a hypothetical one in a logical argument, is it ever required to perform infinite recursion, which would just be one way of not halting.
For all practical and theoretical purposes, what matters is that the machine being considered does not exhaust its ability to recurse while performing the computations being considered. Consequently, the standard practice, of saying that computers and certain other devices are Turing-equivalent, with the usually-implicit caveat of being so up to the limit of their recursive ability, is both reasonable and useful.
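A quick way to see that "up to the limit of their recursive ability" caveat in practice (Python used purely as an illustration): the idealized machine recurses without bound, while a real interpreter hits a finite depth limit.

```python
import sys

# An idealized Turing machine recurses without bound; a real interpreter stops
# at a finite limit. Real machines are "Turing-equivalent" only up to that bound.
def depth(n=0):
    try:
        return depth(n + 1)
    except RecursionError:
        return n

print("default recursion limit:", sys.getrecursionlimit())
print("depth actually reached: ", depth())
```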
> In computability theory, a system of data-manipulation rules (such as a model of computation, a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine (devised by English mathematician and computer scientist Alan Turing).
If I understand it correctly (which is doubtful), in section 2 the cellular automaton conjecture is dismissed because 110 as a computer suffers from exponential slowdown.
>"To prove that 110 is NP-complete, what is needed is to show that Rule 110 allows efficient simulation of Turing machines"
But if the universe is a 'computation', then the efficiency of it doesn't really matter right?
Similar to creating a Blender rendering, it doesn't matter how long each frame takes to render compared to any other frame, as long as the observer sees a smooth sequence of images.
To consider any system Turing complete you have to extend it to having infinite memory. This is why all actual, non-theoretical computers are equivalent to finite state machines (or linear bounded automata [0]).