
if something is cryptographically secure, it should not matter how big the threat actor is



Then it's not secure. Security is about the whole system, not just the crypto algo being used.

It's defense in depth, not security through obscurity.


There is infinitely more to writing secure software than correctly using secure cryptographic constructs.

Sure, but nobody uses that argument to say that you can't trust AES, RSA or SSL, despite the fact that it applies equally well to them.

It's important to note that even using well-tested, hardened crypto-primitives, you can still design an insecure system.
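
A minimal sketch of that failure mode, assuming the Python cryptography package is available (the key, nonce, and messages are made up for illustration): AES-GCM itself is a solid primitive, but reusing a nonce under the same key is a system-level mistake that lets an eavesdropper recover the XOR of the plaintexts.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    nonce = os.urandom(12)
    ct1 = aead.encrypt(nonce, b"first secret message", None)
    ct2 = aead.encrypt(nonce, b"second secret msg...", None)  # BUG: nonce reused

    # The leading ciphertext bytes were produced with the same keystream,
    # so their XOR equals the XOR of the two plaintexts -- no key needed.
    leak = bytes(a ^ b for a, b in zip(ct1, ct2))
    print(leak[:20])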

You’re getting distracted by unrelated concerns. Hint: pay attention to the part where I say

> Regardless of the above, when I say 'system' I'm referring to a cryptosystem, not the other parts of the software stack.

If your cryptosystem is compromised nothing else matters. Your argument seems to be “we shouldn’t worry about secure cryptosystems because these other unrelated things could go wrong,” which I’m not interested in debating with you.


Totally agree.

Most people's eyes glaze over when you talk about things like Shannon's Law, cipher strength, entropy, backdoors, authentication vs encryption, trust models, metadata (which IS data, goddammit), and threat models.

This avoids all the complexity without losing the core issue.


I don't say normal developers shouldn't be near "security"; I say they shouldn't be implementing cryptographic primitives.

People get super confused about the differences between abuse prevention, information security, and cryptography.

For instance, downthread, someone cited Kerckhoffs's principle, which is the general rule that cryptosystems should be secure if all information about them is available to attackers short of the key. That's a principle of cryptography design. It's not a rule of information security, or even a rule of cryptographic information security: there are cryptographically secure systems that gain security through the "obscurity" of their design.

If you're designing a general-purpose cipher or cryptographic primitive, you are of course going to be bound by Kerckhoffs's principle (so much so that nobody who works in cryptography is ever going to use the term; it goes without saying, just like people don't talk about "Shannon entropy"). The principle produces stronger designs, all things being equal. But if you're designing a purpose-built bespoke cryptosystem (don't do this), and all other things are equal (i.e., the people doing the design and the verification work are of the same level of expertise as the people whose designs win eSTREAM or CAESAR or whatever), you might indeed bake in some obscurity to up the costs for attackers.

The reason that happens is that unlike cryptography as, like, a scientific discipline, practical information security is about costs: it's about asymmetrically raising costs for attackers to some safety margin above the value of an attack. We forget about this because in most common information security settings, infosec has gotten sophisticated enough that we can trivially raise the costs of attacks beyond any reasonable margin. But that's not always the case! If you can't arbitrarily raise attacker costs at low/no expense to yourself, or if your attackers are incredibly well-resourced, then it starts to make sense to bake some of the costs of information security into your security model. It costs an attacker money to work out your countermeasures (or, in cryptography, your cryptosystem design). Your goal is to shift costs, and that's one of the levers you get to pull.

Everybody --- I think maybe literally everybody --- that has done serious anti-abuse work after spending time doing other information security things has been smacked in the face by the way anti-abuse is entirely about costs and attacker/defender asymmetry. It is simply very different from practical Unix security. Anti-abuse teams have constraints that systems and software security people don't have, so it's more complicated to raise attacker costs arbitrarily, the way you could with, say, a PKI or a memory-safe runtime. Anti-abuse systems all tend to rely heavily on information asymmetry, coupled with the defender's ability to (1) monitor anomalies and (2) preemptively change things up to re-raise attacker costs after they've cut their way through whatever obscure signals you're using to detect them.

Somewhere, there's a really good Modern Cryptography mailing list post from... Mike Hamburg? I think? I could be wrong there --- about the Javascript VM Google built for Youtube to detect and kill bot accounts. I'll try to track it down. It's probably a good example --- at a low level, in nitty-gritty technical systems engineering terms, the kind we tend to take seriously on HN --- of the dynamic here.

I don't have any position on whether Meta should be more transparent or not about their anti-abuse work. I don't follow it that closely. But if Cory Doctorow is directly comparing anti-abuse to systems security and invoking canards about "security through obscurity", then the subtext of Alec Muffett's blog post is pretty obvious: he's saying Doctorow doesn't know what the hell he's talking about.


One thing I like about cryptography, and wish we could get politicians to understand, is that the mathematical steps to implement many of the core concepts are not hard. Obviously implementing them robustly such that they're safe at scale is another matter, but "criminals" won't have any difficulty getting access to secure implementations, and adding any kind of weakness only means that the implementations used by the public are more likely to have bugs, in addition to the legally mandated ones.
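
As a hedged illustration of how small the core math is, here's a toy Diffie-Hellman exchange in Python. The parameters are deliberately tiny and there's no authentication, so this is only a sketch of the idea, not something to deploy:

    import secrets

    # Toy parameters: a Mersenne prime and a small base. Real deployments use
    # vetted groups or elliptic curves; this only shows the shape of the math.
    p = 2**127 - 1
    g = 3

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

    A = pow(g, a, p)                   # Alice sends A in the clear
    B = pow(g, b, p)                   # Bob sends B in the clear

    # Both sides arrive at g^(a*b) mod p without ever transmitting it.
    assert pow(B, a, p) == pow(A, b, p)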

When it comes to cryptography, I don't think the burden of proof is on the critics to prove it's insecure. Everything is best assumed to be insecure unless there's convincing evidence otherwise.

I gather that enough experts in this sort of thing remain unconvinced that it seems fair to say it's insecure.

Ex: If someone built a bridge, but wasn't an actual engineer, I would assume the bridge was unsafe. I don't need an engineer to actually inspect the bridge before I make that assumption, and I would probably tell everyone I knew not to use that bridge.


Downside of Secure Crypto: No Backdoors /s

But it's also important to be clear that a complex script, large enough to have (critical!) security holes, was required to do something as simple and as foundational as multi-sig: the basic building block of all distributed or decentralized protocols. How can one possibly expect something more complex to be secure?

Bitcoin, on the other hand, builds these core crypto competencies into the base layer, so that scripting can exist purely in the application domain.


I don't think practical cryptographers ignore the difference between computations and interactions—there are different threat models and they are carefully studied. Part of the problem might be that a system designed to be safe against 2^32 interactions is deployed in a place where the system is vulnerable to attacks on the order of 2^32 computations.
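
A back-of-envelope sketch of that gap, with illustrative (assumed) rates rather than measured ones:

    N = 2 ** 32

    online_rate = 1_000          # assumed: queries/second an attacker can push at a live service
    offline_rate = 100_000_000   # assumed: guesses/second against captured data on commodity hardware

    print(f"2^32 interactions at {online_rate}/s: ~{N / online_rate / 86400:.0f} days")
    print(f"2^32 computations at {offline_rate}/s: ~{N / offline_rate:.0f} seconds")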

The problem I think the author is highlighting is that the risk distribution of mistakes is hard to understand. Some off-by-one errors cause relatively small reductions in security margins; others cause complete loss of system integrity. It's hard to distinguish between the two without extensive testing and expertise. Furthermore, the kind of user who would never tell you about your error is exactly the kind who will find it.
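
A classic illustration of how thin that line can be (hedged: whether a given interpreter's == comparison is exploitable in practice varies, which is part of the point): two MAC checks that look nearly identical, where the first may leak timing information and the second uses the standard constant-time comparison.

    import hmac, hashlib

    def verify_naive(key: bytes, msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        return expected == tag                        # not guaranteed constant-time

    def verify_safe(key: bytes, msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)     # constant-time comparison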

Finally, loss of your security infrastructure is unlikely to cause just small visual discomfort to your users—it's likely to hurt them materially.


That's really not true. I think you're conflating cryptography with security. In crypto, I suppose you could consider algorithms that increase attacker cost as "obstacles", though I think the word loses meaning when the "obstacle" involves summoning more CPU cores than there are atoms in the solar system.

In practical security, closing a buffer overflow, sanitizing inputs, and proving code paths are not "obstacles". There are a finite number of vulnerabilities in any piece of code.


I agree.

Most attacks on secure systems target the engineering - the implementation of the system - rather than attempting to break the crypto, even when it's only DES.


Are you just attempting to argue the pedantic point that some theoretical subset of homebrew crypto applications may actually be secure? Because taken as practical advice your position requires a lot of awfully strong assumptions.

The entire field of application security and cryptanalysis begs to differ. It's always an arms race.

Cryptographers ought to be suspicious of everything. Reputation isn't nothing, but the key part of a design is what can be proved about it, not who proposed it.

Which is all fine, and it's all very valuable work of course.

The problem is that when cryptographers say something is "proven secure", it turns out not to mean that nobody can break it, especially in an actual deployed system.

This is surprising to many people.
