
> all existing AI systems are obviously halting computations simply because they are acyclic dataflow graphs

No they aren't. Think about LSTMs for example.

So how do you get a value out of an LSTM?

Run it for as long as you want and look at the output state.
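Roughly like this (a minimal sketch, assuming PyTorch's LSTMCell; the sizes and the step count are arbitrary, picked only for illustration):

    import torch
    from torch import nn

    cell = nn.LSTMCell(input_size=8, hidden_size=16)   # one recurrent unit
    h = torch.zeros(1, 16)                             # hidden state
    c = torch.zeros(1, 16)                             # cell state
    x = torch.zeros(1, 8)                              # some input vector

    # The same cell is applied over and over, feeding its own state back
    # in -- the dataflow graph has a cycle through time. Nothing here
    # fixes how many iterations you run; that's the caller's choice.
    for _ in range(1000):
        h, c = cell(x, (h, c))

    output = h   # read the output state whenever you decide to stop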

How do you get a value out of a person?

So then in the statement of the theorem the agent A can determine how many cycles the unit will run before halting, correct?

By determine do you mean predict or choose?

I mean that by looking at the source code for the neural network, someone can give an upper bound on how many steps it will take before the entire network halts and gives an answer, and they can prove that their upper bound really is an upper bound.

They need to know the input length too. With an infinite input it will never halt.
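Concretely, the tightest bound you can read off the code is a function of the input length (a toy sketch, again assuming PyTorch's LSTMCell; the function name and sizes are made up for illustration):

    import torch
    from torch import nn

    def run_network(inputs, input_size=8, hidden_size=16):
        # One cell update per input element: for a finite input the step
        # count is exactly len(inputs), so any provable upper bound on the
        # running time has to mention the input length. Hand it an endless
        # iterator instead and this loop simply never returns.
        cell = nn.LSTMCell(input_size, hidden_size)
        h = torch.zeros(1, hidden_size)
        c = torch.zeros(1, hidden_size)
        steps = 0
        for x in inputs:
            h, c = cell(x, (h, c))
            steps += 1
        return h, steps

    # halts after exactly 5 steps
    out, steps = run_network([torch.zeros(1, 8) for _ in range(5)])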

Yes, that's the usual assumption when working with Turing machines and proofs. But I guess you could also allow infinite inputs and it wouldn't make that much difference, e.g. computing exp(x) for some real x as input.
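E.g. something along these lines (a toy sketch, nothing more): the loop never halts on the infinite input, but every finite prefix of it already yields a usable approximation.

    import math
    from itertools import count

    def digit_prefixes(x):
        # toy infinite input: ever-longer decimal truncations of x
        for n in count(1):
            yield round(x, n)

    def exp_of_stream(prefixes):
        # never halts on an infinite input, but after each finite prefix
        # it has produced a better approximation of exp(x)
        for approx in prefixes:
            yield math.exp(approx)

    # read off the first few approximations and stop whenever you like
    stream = exp_of_stream(digit_prefixes(math.pi))
    for _, value in zip(range(5), stream):
        print(value)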
