I mean that, by inspecting the source code of the neural network, someone can give an upper bound on how many steps the network will take before it halts and gives an answer, and they can prove that this upper bound really holds.
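As a minimal sketch of that claim (a hypothetical fully connected net, not any particular framework or model): the step count is fixed by the architecture alone, so the bound can be computed from the source before running anything, and it holds for every input.

```python
# Assumed layer widths, purely for illustration.
layer_sizes = [4, 8, 8, 2]

# Provable upper bound: one multiply-add per weight entry, summed over layers.
step_bound = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def forward(x, weights):
    """Run the net, counting multiply-add steps as we go."""
    steps = 0
    for W in weights:  # W is a weight matrix stored as a list of rows
        x = [max(sum(w * v for w, v in zip(row, x)), 0.0) for row in W]  # ReLU layer
        steps += sum(len(row) for row in W)
    return x, steps

# Dummy weights matching the assumed architecture.
weights = [[[0.1] * a for _ in range(b)] for a, b in zip(layer_sizes, layer_sizes[1:])]
_, steps = forward([1.0] * layer_sizes[0], weights)
assert steps <= step_bound  # the bound holds regardless of the input values
```

The key point is that `step_bound` is derived from the code's structure, not from running it, which is what makes it a provable upper bound rather than an empirical one.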
Yes, that's the usual assumption when working with Turing machines and proofs. But I guess you could also allow infinite inputs and it wouldn't make that much difference, e.g. computing exp(x) for some real x as input.
No they aren't. Think about LSTMs for example.
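To make the LSTM point concrete (a toy stand-in for the cell update, not a real LSTM implementation): a recurrent net runs one step per input element, so its total step count depends on the input length, not just on the source code.

```python
def run_recurrent(xs):
    """Toy recurrence standing in for an LSTM cell: one step per input element."""
    h, steps = 0.0, 0
    for x in xs:
        h = 0.5 * h + x  # placeholder cell update
        steps += 1
    return h, steps

_, n_short = run_recurrent([1.0] * 3)
_, n_long = run_recurrent([1.0] * 1000)
assert n_short == 3 and n_long == 1000  # step count tracks input length
```

So without a bound on the input length, no bound on the step count can be read off the source alone.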