My "I understand now" was just for fun. It's like LOLWUT... given that I'm the author of an in-memory snapshotting DB, I believe I know at least the difference between RAM and SSD. So I ended the thread that way.
That said, seriously, I think that what applies to SSDs applies to RAM too: it's going to get cheaper and cheaper, bigger, and super fast, and unlike SSDs the read and write latencies are comparable. So even if today it's a psychological barrier to hold your data in RAM, I think it's going to be much more common in high-load applications in the future.
Actually most people are doing it already, with memcached. Sometimes the total memory used by memcached would be enough to store the whole dataset in a well-organized form, given that using it as a K/V cache wastes a lot of space compared to using it to hold the data directly.
You could keep the data in RAM and dump it to disk occasionally (which is way faster than committing every transaction on its own), and replicate among a few machines to deal with failure scenarios.
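A minimal sketch of that idea, in Python: the dataset lives in a plain dict, and every N writes it gets dumped atomically to disk (temp file plus rename), so a crash leaves either the old or the new snapshot, never a half-written one. The class and interval are hypothetical, just to illustrate the pattern; replication is left out.

```python
import os
import pickle
import tempfile

SNAPSHOT_INTERVAL = 100  # writes between snapshots (illustrative value)

class RamStore:
    """Keep data in RAM; periodically snapshot the whole dataset to disk."""

    def __init__(self, snapshot_path):
        self.snapshot_path = snapshot_path
        self.data = {}
        self.writes_since_snapshot = 0
        # Recover the last snapshot if one exists.
        if os.path.exists(snapshot_path):
            with open(snapshot_path, "rb") as f:
                self.data = pickle.load(f)

    def set(self, key, value):
        self.data[key] = value
        self.writes_since_snapshot += 1
        if self.writes_since_snapshot >= SNAPSHOT_INTERVAL:
            self.snapshot()

    def get(self, key):
        return self.data.get(key)

    def snapshot(self):
        # Atomic dump: serialize to a temp file, then rename over the old one,
        # so readers never see a partially written snapshot file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.snapshot_path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(self.data, f)
        os.replace(tmp, self.snapshot_path)
        self.writes_since_snapshot = 0
```

Every individual write costs only a dict update; the disk is touched once per batch, which is the whole point compared to fsyncing each transaction.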
The question is: why not jump directly to RAM instead of taking this intermediate step?