
That's what I mean by more touchy. I think you also can't do this without messing with System Integrity Protection. And "1" is the lowest it can go, which is not that low IMO. :)



That’s not every minor bump, though, and is worth the extra effort!

Also, my use case is 100% non-critical from a security standpoint, so I can afford to be careless... your point definitely stands in sensitive environments.


I would guess the concerns are less about corrupting results, and more about leakage power and switching speed. If you have a very leaky pulldown, switching to "1" is going to be slower.
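A rough sketch of why, assuming a simple RC model (all component values below are made up for illustration, not taken from the comment): a leaky pulldown both lowers the node's final voltage and stretches the time it takes to cross the logic-high threshold.

```python
# Illustrative RC model: a pull-up of resistance R_PU charges a node of
# capacitance C toward VDD while a leaky pulldown R_LEAK bleeds current to
# ground. Leakier pulldown -> lower final voltage -> slower (or impossible)
# transition past the logic-high threshold VTH.
import math

VDD, VTH, C, R_PU = 1.0, 0.7, 1e-15, 10e3  # assumed values

def time_to_logic_high(r_leak: float) -> float:
    v_final = VDD * r_leak / (R_PU + r_leak)      # resistor divider sets the asymptote
    tau = C * (R_PU * r_leak) / (R_PU + r_leak)   # parallel resistance sets the time constant
    if v_final <= VTH:
        return math.inf                           # node never reads as "1"
    return -tau * math.log(1 - VTH / v_final)

for r_leak in (1e6, 100e3, 30e3):                 # healthy -> increasingly leaky pulldown
    print(f"R_leak={r_leak:>9.0f} ohm -> time to '1': {time_to_logic_high(r_leak):.2e} s")
```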

Could use none. It's low because if, for example, something changes elsewhere and we inadvertently start using RSA. But yeah, if it's not med/high it's not necessary to fix. Maybe some mitigation where appropriate.

Careful with this, the increased resistance can damage systems that aren't designed for it.

True, though it is much easier to damage a pin-based CPU in the first place.

They are in 0g environments; presumably having 1/6th isn't as bad, and there might be ways to prevent/mitigate those issues.

As I understand it, generally the big issue between 25519 and things like 448 is that you want the security level of your curve to meet the security level of the rest of your system, or else the extra work you're doing in the rest of your system is for nothing.
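For a rough sense of what matching levels looks like in practice, here's a hedged sketch using Python's `cryptography` package (the function names and HKDF parameters are illustrative, not from the comment): X25519 targets roughly 128-bit security and X448 roughly 224-bit, so you'd pair each with a comparably sized symmetric key rather than mixing a strong curve with a weak cipher or vice versa.

```python
# Sketch: keep the curve's security level in line with the rest of the system.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def session_key_128bit(peer_public):
    # ~128-bit system: X25519 + HKDF-SHA256 -> 16-byte (AES-128-class) key
    shared = X25519PrivateKey.generate().exchange(peer_public)  # peer_public: X25519PublicKey
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"demo").derive(shared)

def session_key_224bit(peer_public):
    # ~224-bit system: X448 + HKDF-SHA512 -> 32-byte (AES-256-class) key
    shared = X448PrivateKey.generate().exchange(peer_public)    # peer_public: X448PublicKey
    return HKDF(algorithm=hashes.SHA512(), length=32, salt=None, info=b"demo").derive(shared)
```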

Downgrading the component to before the introduction of the flaw would be safer by far.

An SSD cell could actually be more susceptible, since it stores multiple bits by using multiple voltage thresholds - it takes much less perturbation to change the value. Cell size might make a big difference though, and I don't know how those compare.
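A back-of-the-envelope illustration of that (the voltage window is invented): packing more bits per cell divides the same window into more levels, so the margin between adjacent levels, i.e. the perturbation needed to misread a value, shrinks quickly.

```python
# Made-up numbers, just to show the trend from SLC to QLC.
WINDOW_V = 4.0  # assumed usable voltage window per cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits
    margin = WINDOW_V / (levels - 1)
    print(f"{name}: {levels} levels, ~{margin:.2f} V between adjacent levels")
```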

Yes, every particle crossing the copper can create an anomalous signal that can switch a 0 to 1 or vice versa. If you have enough of those, the program(s) will eventually crash. On the processors themselves the L1/L2 caches are vulnerable, but beyond that, the ROM could also get corrupted, making hard resets impossible even after a crash.
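A toy sketch of the detection side (not something from the comment): model an upset as one flipped bit and check per-word parity, which detects the flip but can't correct it; real memories use SECDED ECC rather than bare parity.

```python
import random

def parity(word: int) -> int:
    return bin(word).count("1") & 1

data = [0b10110010, 0b01101100, 0b11110000]
parities = [parity(w) for w in data]          # stored alongside the data

# A stray particle flips one random bit in one random word.
i = random.randrange(len(data))
data[i] ^= 1 << random.randrange(8)

corrupted = [j for j, w in enumerate(data) if parity(w) != parities[j]]
print("corrupted words:", corrupted)          # detected, but not recoverable with parity alone
```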

Fiber optic cables aren't immune to this either: http://misspiggy.gsfc.nasa.gov/tva/meldoc/cabass/rad.htm


You too might be interested in the link I posted above. It's about building safe hardware (and software), but I don't think it's completely safe against any kind of random bit-flips.

That's actually pretty cool! But really, the same mitigation should just happen on the normal syscall path for an even smaller perf impact.

I'm talking about a hardware mitigation that would allow software isolation to work again, without disabling speculation.

No, the other way around - easily emittable, letting the backend in turn do the heavy lifting to lower it.

Reread this amazing answer a few more times. Humbling. Also just concluded that I literally know only enough about hardware to be dangerous. Thanks once more.

That is an interesting idea, and on first blush I do think it'd be safer, but it does rely on figuring out exactly when the flaw was introduced, and will result in the "yo-yoing" of features, which is confusing from a consumer standpoint.

> But is "harder" and "slower" better? Or it just gives humans more time to correct errors?

For risk mitigation, these are somewhat equivalent. There's a reason critical switches get molly-guarded.


Regarding [1], I think there's pain to be had with memory barriers and the like unless one is very careful. In short, they're expensive.

That's a system that would still be entirely vulnerable to collision attacks, though, right?