
What I find sort of (a little) comforting is that the NSA seems to be relying on zero days. All these leaks have not really revealed any structural backdoor in any of the major operating systems.



For instance, the LastPass vulnerability was an architectural mistake.

That's my feeling too. Likewise for hardware backdoors. Yes: it's certainly possible that these things exist. But if they do they're exotic and closely held. They aren't part of the routine hacking toolkits in use in the intelligence community. Routine surveillance happens via routine means that we've already baked into threat models.

In this case, it seems (though I can't find confirmation) like standard firewalling of SMB (what you get if you click the "untrusted network" category on connecting to the cafe wifi or whatever) would be enough to protect a user.
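
If you want to sanity-check that on a given network, a quick probe of the classic SMB ports tells you whether the firewall profile is actually dropping them. A sketch of my own (the target address is just an RFC 5737 example):

    import socket

    SMB_PORTS = (139, 445)  # NetBIOS session service and SMB over TCP

    def smb_reachable(host, timeout=2.0):
        """Return {port: True/False} for the classic SMB ports."""
        results = {}
        for port in SMB_PORTS:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[port] = True
            except OSError:
                results[port] = False
        return results

    print(smb_reachable("192.0.2.10"))

If the "untrusted network" profile is doing its job, both ports should refuse or time out.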


It could be necessary for the NSA to rely on zero days or implanted bugs. Any entry point must be closable quickly, as soon as the enemy discovers it. Creating what you call structural back doors would make it possible for the enemy to use the same structural back door.

Along these lines, I would expect the NSA to encourage the use of cryptography and encrypted software/Secure Boot/secure communications, while ensuring the NSA has a set of extra keys and can sign software at will.


You can use public-private key cryptography that the enemy cannot compromise unless they compromise the NSA (the private keys).

Well, there are known stories of "accidental" key leaks: https://www.schneier.com/blog/archives/2016/08/microsoft_acc...

That would still stick out like a sore thumb.

Deliberate introduction of hard to exploit 0days would be precisely how they would do it. All you need is one plant with commit access.


Public key crypto can be broken if the certificate isn't securely pinned on the client. In other words, the adversary could insert their own cert and replicate the exploit.

Wait what? Cert pinning is not required for RSA PKCS.
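
A minimal sketch of what I mean (all names mine, key generated inline just to keep it self-contained, using the pyca/cryptography package): plain RSA PKCS#1 v1.5 signatures against a baked-in public key, no certificates anywhere.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # --- signer side: done once; the private key never leaves the signer ---
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    PINNED_PEM = priv.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)

    update = b"new software image"
    sig = priv.sign(update, padding.PKCS1v15(), hashes.SHA256())

    # --- client side: PINNED_PEM ships with the client, not over the wire ---
    pub = serialization.load_pem_public_key(PINNED_PEM)
    try:
        pub.verify(sig, update, padding.PKCS1v15(), hashes.SHA256())
        print("signature ok")
    except InvalidSignature:
        print("reject update")

There is no cert to pin or swap; an adversary would have to replace the baked-in key itself.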

So how do you get the NSA's public key to the client?


Perhaps this "leak" was intentional and designed to obscure darker dealings by our friendly NSA.

Especially if it has the effect of driving more Windows 10 uptake.

Here's a twist: they're the same thing.

Most seasoned security folks know that the way to backdoor something is to leave an innocent bug in it. Plausible deniability, impossible to prove it was a backdoor because it looks just like any other exploitable bug.

Not that I'm suggesting that the NSA did leave these as backdoors. I don't believe that to be the case. But if you want one, that is how you do it.

If you ever find a blatant backdoor in some software, you're either dealing with an amateur, or someone who wanted to be found in order to send a message/misdirect you.
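
For a flavour of what "innocent" looks like, here's a toy example (mine, not from any real codebase): a token check that leaks the secret through response timing, yet reads like an everyday shortcut.

    import hmac

    def check_api_key_buggy(supplied, expected):
        # Looks like a harmless equality test, but == bails out at the first
        # mismatched character, so response timing leaks the key byte by byte.
        return supplied == expected

    def check_api_key(supplied, expected):
        # Constant-time comparison: what an auditor would expect to see.
        return hmac.compare_digest(supplied.encode(), expected.encode())

An auditor who finds the first version sees an ordinary bug; nothing proves intent.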


I was going to say basically the same thing. :+1:

If only there was a means by which you could vote his comment up so as to indicate your agreement.

Yeah man, if only upvoting something could be measured against something other than raw count.

The problem with bugs is that they can be exploited by the bad guys. A door gated on a public-private key pair can only be exploited by you.
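
Something like this toy sketch (all names mine; a real implant would be far less blatant), using the pyca/cryptography package:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)

    # Operator side: generated once, private key kept offline.
    operator_key = Ed25519PrivateKey.generate()
    DOOR_PUBKEY = operator_key.public_key()  # baked into the target

    def maybe_open_door(command, signature):
        try:
            DOOR_PUBKEY.verify(signature, command)
            return True  # only the private-key holder gets here
        except InvalidSignature:
            return False

    cmd = b"dump-creds"
    assert maybe_open_door(cmd, operator_key.sign(cmd))
    assert not maybe_open_door(cmd, b"\x00" * 64)

Anyone can find the door, but without the private key nobody else can open it - which is also exactly why it stands out in an audit, per the comments above.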

They did have a backdoor in the Dual_EC_DRBG PRNG algorithm that was widely used. It's not a major operating system, but it was used in a lot of products.

https://en.wikipedia.org/wiki/Dual_EC_DRBG
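
The shape of the trapdoor is easy to demo with a toy discrete-log analogue (this is NOT the real construction, which works over an elliptic curve; all parameters here are made up, and Python 3.8+ is assumed for the modular inverse):

    # Public parameters, chosen by the "designer":
    p = 2**61 - 1          # prime modulus; the group is Z_p^*
    g = 5                  # plays the role of the point P
    e = 1000003            # secret relation known only to the designer
    q = pow(g, e, p)       # plays the role of the point Q = e*P

    d = pow(e, -1, p - 1)  # the trapdoor exponent

    def step(state):
        out = pow(q, state, p)  # value handed to the user as "random"
        nxt = pow(g, state, p)  # hidden next internal state
        return out, nxt

    out, nxt = step(42)  # 42 is an arbitrary seed

    # Whoever holds d recovers the next internal state from a single
    # output, and with it every future "random" value:
    assert pow(out, d, p) == nxt

In the real thing, the relationship between the standard's P and Q constants was never explained, which is part of what made cryptographers suspicious.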


There are different levels of ethics in hacking:

1) finding a bug and notifying the company

2) finding a bug and releasing/selling

3) finding a bug and using it

4) intentionally adding bugs to software without notifying anyone

5) intentionally adding bugs to software and claiming it's secure

This was level 5.


You could potentially argue it's even worse. They paid RSA to intentionally add a bug so they can't be directly blamed for it.

Finding a bug and releasing it, and finding a bug and selling it, are hardly equivalent.

> NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard,

Yes that was a devilishly well executed backdoor, on so many levels.


Mostly on the political level; it was considered suspect by serious cryptographers essentially as soon as it was introduced.

> backdoor

A backdoor is far too obvious for widespread use, which is what's needed anyway. The NSA (and FVEY in general) instead spends a lot of money on programs like BULLRUN (Edgehill at GCHQ) that try to bypass the need for backdoors and weaken encryption. PSYOPS for nerds[1] is much cheaper and easier than direct backdoors or other technical methods.

Instead of a backdoor we have IPsec standards that are overly complicated, hard to implement, and mandate "null" encryption support[2]. Most communication channels remain in plaintext or encrypted with keys that are recoverable, too short, or easily MitMed.

[1] https://archive.fosdem.org/2014/schedule/event/nsa_operation...

[2] http://www.mail-archive.com/cryptography@metzdowd.com/msg123...


1. I imagine they'd be more or less the same thing. Any mandated/deliberate backdoor is probably going to look very similar to an accidental bug - it lets you deny it exists, gives a plausible explanation if/when it is found, and potentially lets an NSA/software company "double-employee" add it without the company knowing.

2. It'd probably be a method of last resort, so the NSA et al. would gather and use zero days anyway. Any use of the backdoor risks it being noticed, so using other entry points makes sense if possible.

A less comforting interpretation would be that relying on zero days suggests they are confident in their ongoing ability to find them and/or have a sizeable cache of unknown exploits already, so adding a deliberate backdoor wouldn't provide any additional access.


> lets an NSA/software company "double-employee" add it without the company knowing.

I always wondered how that works. I am a full-time employee at a software company. I can't imagine having extra time to report to another employer (the NSA) and deal with their red tape and crap as well.

Or does the NSA show up at their doorstep with a bag full of cash - "Here you go, have this, and install a backdoor in your company's software. And we never met <wink>, <wink>"

That sounds good on paper, so to speak, but I have a hard time imagining a realistic scenario.

Now finding 0-days and hoarding them, I can see that.


You assume the mole is an MS employee first and an NSA op second. Traditionally, the opposite is true: if you want to infiltrate a somewhat friendly entity, you do it by engineering the hire of trusted individuals. This is more secure, since there is no risk that one of the guys will get cold feet and blow the whistle.

So you monitor universities and you make contact with some of the brightest sparks. You promise them a good job in exchange for the possibility that, one day, they might have to act For The Good of The Country; and in the meantime they'll even be In The Know, which will place them above their peers - excitement! Ambition!

Then you lobby a few higher-ups you're friendly with, to hire these guys in this or that group. They are top-notch talent, immaculate credentials, so the hire is a slam dunk. They go about their business, being good kernel devs or whatnot, and every few months you give them a quick call to catch up - there is no need for extensive briefing, nobody really cares about the goings-on of Team Kernel A356.

When "the favour" is required, the guy is comfortable in his position and doesn't want to leave it, so there is no chance he'll say no.


> Any use of the backdoor risks it being noticed, so using other entry points make sense if possible.

This applies to 0-days as well.

