
If a computer cannot have infinite memory, then it cannot go beyond a deterministic finite automaton (DFA). Undecidable problems are several steps more complex than what a DFA can solve. In other words, the theoretical capability of a computer with finite memory is far below that of a machine that can solve everything except the undecidable problems.



> Undecidability requires infinities.

As does the Turing machine itself, or any other Turing-complete computation model, for that matter.

Heck, even the study of computational complexity forces you to accept some arbitrarily large value of n, which implies any fixed finite storage capacity is unacceptably small.

Modern programming does a pretty good job of approximating infinite memory (by using the cloud as a data store).


Where is the infinite tape? Finite automata are as capable, in the real world, as Turing-like machines, because it is impossible to fabricate a machine with infinite storage.

That's the point, yes. A real machine has finite memory, while a proper Turing machine can always advance the tape to get more memory. You can decide halting for the former in ridiculous but finite time, but not for the latter.

A Turing machine has unbounded memory. There is a difference between unbounded and infinite memory: you can extend the tape as needed, but at any given step of the computation the space in use is finite.

A Turing machine whose tape is bounded by the length of its input (up to a constant factor) is called a linear bounded automaton (LBA). Although an LBA is less powerful than a Turing machine in the absolute sense, it is not trivial to come up with a problem that a Turing machine can decide but an LBA cannot.


Wouldn't a DFA for a computer have to have 2^n states for n bits of memory, so its size would be exponential in the memory size of the computer? So even for a pedestrian 640K of memory, that's something like 10^10^7 states. Sounds a lot like trying to simulate a non-deterministic Turing machine on a deterministic machine; the state space explodes exponentially, making it intractable.
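
For a rough sense of that scale, here's a back-of-the-envelope sketch in Python (the figures are just arithmetic on the numbers above):

    # One DFA state per possible memory content, i.e. 2^(number of bits)
    import math

    bits = 640 * 1024 * 8                    # 640K of memory, in bits
    digits = int(bits * math.log10(2)) + 1   # decimal digits of 2^bits
    print(digits)                            # ~1,578,000 digits: the double-exponential ballpark above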

>We are all running computers that are finite-state machines

The amount of storage in a computer is not fixed; it is easily extendable, and extending memory fits simply into the Turing model without affecting the internal states of the Turing machine. In contrast, any time a new source of memory is introduced (say, a new hard drive or an AWS cluster), you'd have to change your DFA, which breaks your statement that 'a computer is a DFA'.

I really don't see how calling computers DFAs can in any way be considered 'practical'.


You're way off the mark. While it annoys me that computers are constantly modeled as Turing machines when it's much more appropriate to model them as LBAs (linear bounded automata), this does not have a huge bearing on computational feasibility in practice. LBAs and TMs do have entirely different notions of what is and is not decidable (for instance, the halting problem is decidable for computers, since they're LBAs; it's just impractical to decide), but this only really matters as a theoretical difference.
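
As a sketch of why that's decidable in principle (assuming a deterministic machine; the function names here are illustrative): a bounded-memory machine has only finitely many configurations, so a run that lasts longer than that count must have repeated a configuration and will therefore loop forever.

    # Toy halting decider for a machine with finitely many configurations.
    # step(config) -> next configuration (deterministic), is_halted(config) -> bool;
    # both are hypothetical stand-ins for a real machine's transition rule.
    def halts(start, step, is_halted, num_configs):
        config = start
        for _ in range(num_configs):
            if is_halted(config):
                return True
            config = step(config)
        # Ran for more steps than there are distinct configurations, so some
        # configuration repeated: the machine is stuck in a loop and never halts.
        return False

For real hardware, num_configs is around 2^(memory bits), which is why "decidable" and "practical" part ways here.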

Suggesting that an x86 PC is limited to the computational power of regular grammars is just wrong. LBAs can handle context-sensitive grammars; they are much more capable than an FSM and absolutely can compute more.

In real life, computers can be modeled as Turing machines without much issue. Memory is generally Big Enough(tm) that problems which are undecidable for TMs are, on a real machine, theoretically decidable but terribly infeasible to attempt.

And as long as the problem has a small enough state space, the computational expressiveness is the same. In practice, this occurs often enough that people still call computers Turing machines even though they aren't.


> since all machines are Turing-complete

No real computer is Turing-complete, since it has only a finite amount of memory.


In the real world nothing has infinite memory, so no computer would be Turing-complete. Therefore this requirement is ignored.

I guess the short way to say it is that "undecidable" doesn't mean "it can never be decided", just that it can't always be decided.

And of course all programs of practical significance are finite state machines (since there is only a finite number of atoms in the universe).


> The class of problems solvable in a finite amount of memory is just the class of regular languages. The “finite memory” is the finite state machine used to solve them.

I think they meant to say "constant memory" since every halting Turing Machine uses finite memory.


People always make these points about things not technically being Turing machines because they are finite, because they have finite memory. Well, obviously! Your computer has finite memory too. It doesn't matter in practice, and the conceptual idea that this thing can do arbitrary computations is more important.

Well, I said I meant the tape of a Turing machine (irrespective of its size), or the RAM of a physical computer, and that I suspected that such memory is different from other states in that read/write operations have specific lower time complexity.

> A Turing machine’s infinite tape is what stops it being a finite state machine.

Well, that's if you accept the usual automata theory definition, which was what I was arguing against. Part of having an "infinite tape" memory is not just that it is infinite, but also that it is a tape. A pushdown automaton also has infinite "memory" (though in the form of a stack, not a tape), but it is still not equivalent to a Turing machine. Nor is it equivalent to a finite state machine.

Basically, what I suspect is that the type of "memory" system that some automaton allows determines its computational power, not whether the memory or the maximum number of its states is infinite or not.


Technically, a computer is also just a large finite state machine, which suggests that computability theory is unable to accurately account for the intuitive difference between computers and simpler automata. I suspect it has something to do with the computational time/space complexity of the algorithms they are able to execute.

The other controversy that you'll see in a computer science context is that the Turing Machine specified by Turing has an infinite tape (or at least an unbounded tape; there are no conditions under which it can produce an out-of-memory error).

That means that there is no largest number (or other mathematical object) that it can represent, which is different from a physically-realized general-purpose programmable computer. On a physically-realized computer, once you fix some representation for numbers, there will always be a largest one that the computer can actually represent in its memory.

People have even maintained that, from the pure theory of computation point of view, a desktop computer (without threads and interrupts) is more like a finite-state machine than a Turing machine. It's just that each different possible memory (plus CPU register) state is a "state", so a computer with 8 GB of RAM will have something above 2^68719476736 states.

However, it can easily simulate a Turing machine given the assumption that the tape doesn't run out for a particular computation, including computations that are, in human terms, very large and complex. :-)
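
A minimal sketch of that simulation (names and encoding are illustrative): keep the tape in a list and grow it on demand, so the only failure mode is the host itself running out of memory.

    # delta: (state, symbol) -> (new_state, new_symbol, move), with move in {-1, +1}.
    def run(delta, tape, state="start", halt_state="halt", blank=0):
        tape = list(tape) or [blank]
        head = 0
        while state != halt_state:
            state, tape[head], move = delta[(state, tape[head])]
            head += move
            if head < 0:                 # fell off the left end: grow the tape
                tape.insert(0, blank)
                head = 0
            elif head == len(tape):      # fell off the right end: grow the tape
                tape.append(blank)
        return tape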

Edit: One could also include the computer's hard drive as addressable memory, in which case it has even more states when viewed as a finite state machine.

A basic problem from an automata theory point of view is that eventually there are some inputs which need to fit entirely in memory in order to be processed correctly, and yet simply can't fit in memory. At that point, the computer will necessarily have to stop being as powerful as the idealized Turing machine in terms of distinguishing those inputs!


A Turing machine without infinite memory can't recognize arbitrary context-free languages. Take the standard example of recognizing the language of balanced parentheses. With finite memory, there's a limit to how many parentheses you can keep track of. That limit is going to be gigantic for any reasonable amount of memory, but it's still a limit. The language of balanced parentheses with a depth limit is no longer a properly context-free language but a regular one, and as such can be recognized by an FSM.
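
A quick sketch of such a depth-limited recognizer (the cap is illustrative): because the counter can only take finitely many values, the whole thing is a finite-state machine with MAX_DEPTH + 2 states (one per depth, plus a reject state).

    MAX_DEPTH = 1000   # any finite-memory machine has some such limit

    def balanced_up_to_limit(s):
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
                if depth > MAX_DEPTH:
                    return False   # deeper nesting than this machine can track
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False   # closing paren with nothing open
        return depth == 0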

It's trivial to demonstrate that a Turing machine with finite memory is equivalent to an FSM. Enumerate all possible memory contents. This number is, again, large for any reasonable amount of memory, but it is finite. For each possible memory content, enumerate the Turing machine states. Each combination of memory contents and machine state becomes one FSM state. The Turing machine defines a transition from one state to another plus an action on memory, which becomes an FSM transition to the state that encodes both the new machine state and the new memory contents.
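
In code, that construction looks roughly like this sketch (the step signature is assumed for illustration; the enumeration blows up exponentially in the number of memory bits, which is the whole point):

    # Every (machine state, memory contents) pair becomes one FSM state;
    # step(q, memory, symbol) -> (new_q, new_memory) stands in for the
    # machine's transition rule plus its action on memory.
    from itertools import product

    def finite_tm_to_fsm(machine_states, memory_bits, alphabet, step):
        memories = list(product([0, 1], repeat=memory_bits))       # all memory contents
        fsm_states = [(q, mem) for q in machine_states for mem in memories]
        transitions = {
            ((q, mem), symbol): step(q, mem, symbol)
            for (q, mem) in fsm_states
            for symbol in alphabet
        }
        return fsm_states, transitions   # only viable for toy memory sizes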


> In other words, when you have a reasonably big practical finite problem with limited memory, an FSA quickly runs out of memory.

Yes, this is a difference in the expressiveness of each model: FSAs can require a combinatorial explosion in the number of states to model a finite Turing machine. The parent is still technically correct, though.


Assuming that a computer has finitely many states because its memory is finite, we can model it as a very large deterministic finite automaton. You cannot model a real computer as an LBA, because you don't necessarily have as much read-write memory as your input requires. If you think I'm wrong, please tell me what makes you think you can model real computers as LBAs.

It's a quick intro to computability theory. There is a progression of more and more complicated 'machines' on the way from finite automata towards Turing machines. This article goes over what types of things each machine can and can't solve. It's more of a theoretical topic than a practical one (and why you'll occasionally see silly things like someone implementing a Turing machine in PowerPoint to show that PowerPoint can compute anything).
