This is possible, and in fact probably implemented in some probabilistic programming languages, but I think you are looking in the wrong direction.
The point is that even for fairly simple real use cases, the computational complexity is so huge that all the computers in the world couldn't compute it in your lifetime if you stick to naive algorithms and don't employ some approximation or optimization.
So, that is what the whole field of machine learning is about: finding some clever ways to deal with random variables in a computationally feasible way...
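For a sense of the scale involved (these numbers are my own illustration, not from the comment above): marginalising one variable out of a joint distribution over n binary variables by brute force means summing over all 2^n assignments, and that count runs away from any hardware almost immediately:

    #include <stdio.h>

    /* Number of joint states of n binary random variables: 2^n.
     * Computed as a double just to show the growth; the n values
     * are arbitrary illustration. */
    int main(void) {
        int sizes[] = {10, 50, 100, 200, 300};
        for (int s = 0; s < 5; s++) {
            double states = 1.0;
            for (int i = 0; i < sizes[s]; i++)
                states *= 2.0;
            printf("n = %3d variables -> %.3e joint states\n",
                   sizes[s], states);
        }
        return 0;
    }

At n = 300 that is already about 2e90 states, far beyond anything enumerable, which is exactly why approximate inference exists.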
Interesting. I don't think people realize just how slow rand() can be if it is called frequently in your C/C++ program. Marsaglia's xorshf96 is the fastest generator I know of that also gives OK statistical quality.
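For reference, here is roughly what that looks like (a minimal sketch of the xorshf96 code commonly attributed to Marsaglia; the seed constants are the ones from the widely circulated version, not anything specific to this thread):

    #include <stdio.h>

    /* xorshf96: three xorshift steps plus a state rotation.
     * Not cryptographically secure; it only illustrates how cheap a
     * decent-quality PRNG can be compared to a libc rand() call. */
    static unsigned long x = 123456789, y = 362436069, z = 521288629;

    static unsigned long xorshf96(void) {
        x ^= x << 16;
        x ^= x >> 5;
        x ^= x << 1;

        unsigned long t = x;
        x = y;
        y = z;
        z = t ^ x ^ y;

        return z;
    }

    int main(void) {
        for (int i = 0; i < 5; i++)
            printf("%lu\n", xorshf96());
        return 0;
    }

It is a handful of shifts and XORs per call, with no locking or indirection, which is why it beats many rand() implementations so badly.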
But the values are generally generated pseudo-randomly by machine. This seems similar to the birthday problem, where the odds of seeing a repeated value from a given range are higher than you'd expect.
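To put rough numbers on that (the 2^32 value space and draw counts below are my own illustration, not from the comment): the exact birthday-problem probability of at least one repeat after k draws is 1 - prod_{i=0}^{k-1}(1 - i/N), and it already passes 50% at around 77,000 draws from a 32-bit space:

    #include <stdio.h>

    /* Exact birthday-problem collision probability:
     *   P(collision) = 1 - prod_{i=0}^{k-1} (1 - i/N)
     * N and the draw counts are illustrative, not from the thread. */
    int main(void) {
        double N = 4294967296.0;                 /* 2^32 possible values */
        long draws[] = {1000, 10000, 77163, 200000};

        for (int d = 0; d < 4; d++) {
            double p_none = 1.0;                 /* probability of no repeat */
            for (long i = 0; i < draws[d]; i++)
                p_none *= 1.0 - (double)i / N;
            printf("%7ld draws: P(at least one repeat) = %.4f\n",
                   draws[d], 1.0 - p_none);
        }
        return 0;
    }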
... not with 100% accuracy, but it's totally plausible that you can do substantially better than random (or a simple regex). So there is incremental value here.
1 5 3 8 5 3
Is that a random number sequence? It depends where the data came from. Same goes for AI algorithms. Yes, there's a risk of the data being biased, but the key is what goes in, not what comes out.
That sounds like the gambler’s fallacy. Fewer runs than what? Most truly random input has far more runs than what people “think” is random, and in fact that’s one of the statistical tests for whether a data set was random.
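For what it’s worth, that check is essentially the Wald–Wolfowitz runs test: count the runs and compare with the 2*n1*n2/n + 1 you’d expect from a random arrangement. A quick sketch (the bit string is made-up example data):

    #include <stdio.h>
    #include <math.h>

    /* Wald-Wolfowitz runs test on a binary sequence: compare the observed
     * number of runs with E[R] = 2*n1*n2/n + 1 for a random arrangement.
     * The bit string is made up purely for illustration. */
    int main(void) {
        int bits[] = {1,0,1,1,0,0,1,0,1,1,1,0,0,1,0,1,0,0,1,1};
        int n = sizeof bits / sizeof bits[0];

        int n1 = 0, n2 = 0, runs = 1;
        for (int i = 0; i < n; i++) {
            bits[i] ? n1++ : n2++;
            if (i > 0 && bits[i] != bits[i - 1]) runs++;
        }

        double expected = 2.0 * n1 * n2 / n + 1.0;
        double var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n)
                     / ((double)n * n * (n - 1));
        double z = (runs - expected) / sqrt(var);

        printf("runs = %d, expected = %.2f, z = %.2f\n", runs, expected, z);
        /* |z| well above ~2 is evidence against a random arrangement. */
        return 0;
    }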
You’re essentially saying that a good neural network can predict the next value of a good random number generator. Good luck with that one!
Maybe while you’re at it, have neural networks invert cryptographically secure hash functions :)