Do you realize how much of a performance hit modern desktops would take if the processor had to freeze the operating system while it wandered through memory, first identifying the operating system and then finding the entropy pool and modifying it?
>Presumably using the same skynet tech it uses to look ahead and see where the rdrand is going to be xored into.
There is no such tech. That's why this is not 'checkmate'. A poisoned random number generator that produces numbers in a predictable manner is orders of magnitude easier to implement, and far harder to detect, than a magical processor that changes memory it thinks might be entropy for some operating systems it has been pre-programmed to look for, under the assumption that the kernel will never change. Get real.
> AES has to look like random noise. If there is any correlation to the input detectable by someone without the key it would be useless for crypto.
Uh... AES (to pick your particular example) is a symmetric algorithm. By definition there is a 1:1 correlation to the input. If you have the key and know the input you can compute the output, and vice versa. The question at hand is not whether the algorithm is breakable but whether someone can find the key (or more generally the PRNG state).
And that's what the entropy pool does: it seeds the PRNG state with values that are derived at runtime by the kernel and not under the control of an attacker, even one embedded in the system.
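To make the idea concrete, here is a minimal Python sketch of that seeding pattern (not the kernel's actual implementation): runtime-derived events are folded into a pool state via a hash, and the pool is then used to derive PRNG seed material. The specific event sources and the `mix` helper are illustrative assumptions.

```python
import hashlib
import os
import time

# Sketch only: mix runtime-derived values into a hash-based pool,
# then derive a PRNG seed from the pool.
pool = b"\x00" * 32

def mix(pool: bytes, event: bytes) -> bytes:
    """Fold a new entropy event into the pool state."""
    return hashlib.sha256(pool + event).digest()

# Stand-ins for values an attacker outside the kernel can't easily
# predict or control (interrupt timings, device events, etc.):
pool = mix(pool, time.time_ns().to_bytes(8, "little"))
pool = mix(pool, os.urandom(16))

# Derive seed material for a deterministic PRNG from the pool.
seed = hashlib.sha256(b"prng-seed" + pool).digest()
print(seed.hex())
```

The point of the hash is that each new event perturbs the whole state, so an attacker who controls some inputs but not all of them can't steer the final seed.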
There are side arguments about whether the entropy pool's estimates are good ones, about whether it should block by default when empty, etc... But I don't see any reasonable arguments that this isn't a useful feature for an OS kernel to provide.
> Imagine an attacker knows everything about your random number generator's internal state ... But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again.
Of course that probably only counts if an external or local unprivileged entity manages to become informed of the random source's internal state. If the attacker has direct access to the kernel's state, then they probably also have access to influence (or at least monitor) the incoming entropy, such that they can stay informed of the full internal state.
There is a point in some attacks beyond which your only halfway-guaranteed solution is the metaphorical orbital nuke platform.
The processor could recognize that it is in common key generation code, and generate 'entropy' that precisely reverses most of the existing entropy. The existing 'good' entropy could be known by the hardware PRNG.
> It's best to think of this as an OS/distro detail; if you can reasonably expect /dev/urandom to give you insecure bits, your distro has a vulnerability.
Isn't that more a function of hardware than software? The hardware random number generators on modern CPUs pretty much eliminate the need to worry about entropy...
If you assume the CPU is compromised, then mixing in RdRand among other entropy sources is still not safe. For instance, the RdRand instruction could set a flag that, with some small probability, zeros out anything XORed with it so you occasionally get zeros in the entropy pool. Or some other known-to-the-bad-guys value.
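A toy sketch of why that matters: if the last-stage source can observe the pool value it is about to be XORed into, it can cancel the pool entirely. The function names here are hypothetical, for illustration only.

```python
# Hypothetical sketch: XOR-mixing a malicious source that can see the
# pool is unsafe, because it can return a value that cancels the pool.

def xor_mix(pool: int, hw_value: int) -> int:
    """Mix a hardware RNG value into the pool by XOR."""
    return pool ^ hw_value

def malicious_rdrand(observed_pool: int) -> int:
    # A compromised source that sees the pool just returns it,
    # so the XOR yields zero.
    return observed_pool

pool = 0xDEADBEEF  # whatever real entropy was collected so far
poisoned = xor_mix(pool, malicious_rdrand(pool))
print(hex(poisoned))  # → 0x0
```

This is why the kernel hashes RDRAND output into the pool rather than XORing it in as the final step: a hash makes the contribution position-independent and much harder to cancel.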
The attacker, who has already compromised the integrity of the system in question, only has to guess or probe for a random number with relatively low entropy in order to do something useful and straightforward with that already-compromised system.
> Don't we already have "true" random number generators embedded on most modern processors? I was under the impression most motherboards have some sensors strictly for this purpose--to read data from static or thermal sensors in order to add entropy to the pool.
Paranoid people might fear that those have been compromised à la Dual_EC_DRBG.
Though of course, this could be in a userland library too, just reading entropy from hardware (exposed by the kernel). But the kernel still needs access to random numbers it can control and trust.
> If you don't trust the hardware, then you've already lost, no matter what algorithmic construction you are using. How are you going to trust your random number generator if 1 + 1 = 2 except when it equals NSA?
You can verify the random number generator. If you know the algorithm and the seed values you can run it on multiple different platforms, or with a pen and paper and verify that the output is as expected and repeatable. If you have large amounts of entropy you are feeding into it, you can log it for testing purposes.
There are also apparently some EC-based algorithms that can be used to fix, or at least reduce, the impact of a compromised random number generator.
That might not protect against an active attack on your specific system by the NSA (they could send or embed a magic packet that gives them total control over the CPU, for example); it might even be possible for that to happen in the NIC controller rather than the CPU, if it has access to the system bus. At the least they could flip it into some kind of backdoored random number mode by embedding some extra data in a TLS handshake or whatever. But it should protect against widespread passive surveillance.
It could be that the Linux kernel random number generator has been backdoored on all the large cloud computing platforms. They could even be snooping the entropy pool in memory as the system is operating; you don't know what's really going on in a virtualized environment. Also, many BMCs have JTAG access to the CPU. What's the chance that they have implants in the BMCs, knowing how insecure they are?
And? The fact of the matter is, there's no workable alternative. You either have a good source of entropy, or you don't. If you don't, people aren't going to stop using things like OpenSSH. If a box blocks on boot, that's universally considered broken, and everybody will try to work around it.
Fortunately in practice the situation isn't so dire. There are usually multiple hardware RNGs on a system. They may be all untrusted, but I'll trust them together before I'll ever trust the strength (and persisting correctness) of guesstimators. Ironically, entropy guesstimators are predicated on the very notion that the hardware is benign. Malicious hardware could implement subtle timing patterns in interrupts, much like what they might do for an actual RNG (e.g. use an AES encryption function to "randomize" their visible behavior). And in any event, you can still mix various system timing events into your pool without pretending you can quantitatively and reliably know their entropic contribution.
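The "trust them together" idea can be sketched as hashing several sources into one output: under the assumption that at least one source is unpredictable and the sources can't observe one another, the combined value is unpredictable too. The `combine` helper and source names are illustrative.

```python
import hashlib

# Sketch: combine several untrusted sources so that any single good
# source makes the output unpredictable.
def combine(*sources: bytes) -> bytes:
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each input so source boundaries are unambiguous.
        h.update(len(s).to_bytes(4, "big") + s)
    return h.digest()

out = combine(b"hwrng-1 output", b"hwrng-2 output", b"jitter samples")
print(out.hex())
```

Note there is no entropy accounting here at all: every source is mixed in regardless of how much it is "worth", which is exactly the approach being argued for.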
Yeah, I've always found it odd how proponents of various entropy combining methods think it's reasonable to be afraid of hardware smart enough to know when to sabotage the RNG, but think it's unrealistic to postulate that it would also know the right way to poison their entropy pool.
You're absolutely right. I didn't mean to imply that it was. But purpose built hardware has design specifications that can be validated. Good hardware is designed with failure modes in mind. Which means you have some hope of a decent risk management strategy.
And, exactly as you said, using and relying on a hardware RNG isn't a panacea--it still comes with difficult problems--which only emphasizes how hopeless, futile, and misleading entropy estimation is.
I'm not arguing that trying to indirectly capture entropy is a bad idea, even if you also have a proper hardware RNG. My point is just that entropy estimation is a poor idea, particularly of hardware that hasn't been specifically analyzed. By pretending like we can reliably quantify the entropy, we're giving users a false sense of security. That's not justifiable.
Collecting entropy indirectly is best effort. You can't know, except in pathological cases, whether it's working or not. You can only know when it's failing spectacularly, which would be rare when collecting from many different sources.
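A tiny sketch of why only spectacular failures are detectable: a trivial repetition check (in the spirit of the old FIPS 140-2 continuous test; the helper below is hypothetical) catches a stuck source but says nothing about a subtly predictable one.

```python
# Illustrative check: flags a source that keeps emitting the same
# sample, but passes anything that merely varies.
def looks_stuck(samples: list[bytes]) -> bool:
    return len(samples) > 1 and len(set(samples)) == 1

# A dead source is caught:
assert looks_stuck([b"\x00" * 8] * 5)
# A perfectly predictable counter sails through:
assert not looks_stuck([bytes([i]) for i in range(5)])
```

The counter in the second case has essentially zero entropy, yet no black-box statistical test of this kind can tell it apart from a healthy source, which is the "you can only know when it's failing spectacularly" point.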
All those magic, per-device constants could likely be replaced with a single heuristic--collect everything for N seconds, then move on. If it worked, it worked; if not, well then you're not any worse off than you were when the entropy estimators gave a false sense of confidence. You can still add hacks to try to collect more entropy, and to do it faster, to improve effective security; just don't pretend like the kernel is accurately quantifying it.
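That single heuristic can be sketched in a few lines (the collector below is a stand-in; a real one would read interrupt timings, device data, and so on, and the value of N is illustrative):

```python
import hashlib
import time

# Hypothetical sketch of the "collect for N seconds, then move on"
# heuristic: mix in every event that arrives before the deadline,
# with no attempt to score how much entropy each one contributed.
COLLECT_SECONDS = 0.05  # the "N" above; purely illustrative

def collect_events(deadline: float):
    # Stand-in event source: timer readings until the deadline passes.
    while time.monotonic() < deadline:
        yield time.monotonic_ns().to_bytes(8, "little")

pool = hashlib.sha256()
deadline = time.monotonic() + COLLECT_SECONDS
for event in collect_events(deadline):
    pool.update(event)
seed = pool.digest()
```

Whether the resulting seed is actually unpredictable depends entirely on the event sources; the heuristic just fixes the collection window instead of pretending to measure the entropy.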
How would the processor even know it's performing crypto operations that it should swap numbers for?