
Yes. It was part of why certain assurance activities were in TCSEC at B3/A1 levels. It's why they had to model every behavior, every information flow, allow independent replication, and allow building from source locally. Leaves little room to hide backdoors.

https://en.m.wikipedia.org/wiki/Trusted_Computer_System_Eval...

DO-178C also traces code to requirements to spot dead code or back doors.
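Since DO-178C traceability came up: purely as an illustration (not any qualified tool's behavior, and all names and data below are hypothetical), the core of a code-to-requirements trace check is tiny. Code with no requirement behind it is exactly where dead code or a backdoor would hide.

    # Toy sketch of a DO-178C-style traceability check over hypothetical data:
    # flags code units with no requirement and requirements with no code.
    def check_traceability(code_units, trace_matrix, requirements):
        untraced_code = [u for u in code_units if not trace_matrix.get(u)]
        implemented = {r for reqs in trace_matrix.values() for r in reqs}
        unimplemented = [r for r in requirements if r not in implemented]
        return untraced_code, unimplemented

    requirements = ["REQ-001", "REQ-002", "REQ-003"]
    code_units = ["crc32.c", "uart_tx.c", "mystery_blob.c"]
    trace_matrix = {
        "crc32.c": ["REQ-001"],
        "uart_tx.c": ["REQ-002"],
        "mystery_blob.c": [],   # nothing traces here -- review it
    }

    dead, missing = check_traceability(code_units, trace_matrix, requirements)
    print("Code with no requirement:", dead)      # ['mystery_blob.c']
    print("Requirements with no code:", missing)  # ['REQ-003']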




Yes, assuming some degree of security of the full TCB (both the software and hardware). Of course, no system is bulletproof, but it's slowly going in the right direction - and hopefully since Spectre/Meltdown people are taking more care :)

At the end of the day, it doesn't have to be perfect, just another layer in the Swiss cheese model.

> able to flip one single bit (load?) somewhere inside the SoC

TBH, capabilities at that level (i.e., within the SoC) are fairly difficult to pull off, so probably not in most threat models.


I like your explanation about layered solutions to security, +1.

"Do you trust your chipsets?"

Certainly not. I do believe the recently discovered tiny byte sequence in a TCP (UDP?) packet that can lock up Intel Ethernet cards is actually a backdoor allowing the state to perform DoS at will. I also believe Huawei and ZTE are state-sponsored espionage companies (I've certainly seen weird things, like a keylogger inside a 3G Huawei USB device I bought in Europe).

But I do believe that even if I'm, say, a Debian or OpenBSD dev working on OpenSSL, it's amazingly complicated for the chipset to modify source code and have it make it to the DVCS unnoticed. I also think that as long as the source code isn't corrupted, there are ways to create non-backdoored builds.

It's the same thing with program provers that can verify that certain pieces of code are guaranteed to be free of buffer overruns/overflows: what proves that the compiler itself hasn't been tampered with? But still... with DVCSes and many eyeballs, I'm not that concerned about the compilers typically used nowadays being tampered with.
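On "ways to create non-backdoored builds": in practice that idea usually comes down to reproducible builds, i.e. independently rebuilding a release from pinned source in a pinned environment and comparing hashes against the published binary. A rough Python sketch of the flow, with hypothetical paths, make target, and published hash (a real setup also pins toolchain versions, SOURCE_DATE_EPOCH, locale, and build paths):

    # Sketch of a reproducible-build check: rebuild locally, hash, compare.
    import hashlib
    import subprocess
    from pathlib import Path

    PUBLISHED_SHA256 = "d2c5..."  # hypothetical hash from the project's release notes

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def rebuild_artifact(srcdir: Path, artifact: str) -> Path:
        # Assumes a deterministic build target; real setups pin much more.
        subprocess.run(["make", "-C", str(srcdir), "release"], check=True)
        return srcdir / artifact

    local_hash = sha256_of(rebuild_artifact(Path("openssl-src"), "libcrypto.so"))
    if local_hash == PUBLISHED_SHA256:
        print("Local rebuild matches the published binary")
    else:
        print("Mismatch: non-reproducible build or tampering -- investigate")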


It's true, but that's a specification problem. The hardware should be in the model since it's in the TCB. The earliest tools used for high-assurance security, like the Gypsy Verification Environment, had information-flow analysis for detecting covert channels. Today, we have information flow control, distributed versions of it, and hardware versions of non-interference that go down to the gates. We're actually at the point of being able to handle that with current tooling if ultra-high performance is a non-goal.

https://www.usenix.org/system/files/conference/usenixsecurit...

http://www-bcf.usc.edu/~wang626/pubDOC/EldibWS14_TOSEM.pdf

It's the analog and RF side channels that are going to screw them. DOD et al. did decades of cat-and-mouse games on that front with their TEMPEST activities. Open research is just getting started. Just waiting for The Flood. :)
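To make the information-flow-control idea above concrete, here's a toy label-lattice check (nothing like the gate-level non-interference tooling in the linked papers; the two-level lattice and names are made up). The whole point is that a flow from a higher label to a lower one is rejected, which is the core of a non-interference argument:

    # Toy information-flow check: values carry secrecy labels, and flows
    # from a higher label to a lower one are rejected.
    LEVELS = {"PUBLIC": 0, "SECRET": 1}

    class Labeled:
        def __init__(self, value, label):
            self.value = value
            self.label = label

    def flow(src, dst_label):
        # Allow a flow only if the destination is at least as secret as the source.
        if LEVELS[src.label] > LEVELS[dst_label]:
            raise PermissionError(f"illegal flow: {src.label} -> {dst_label}")
        return Labeled(src.value, dst_label)

    key = Labeled(0xDEADBEEF, "SECRET")
    flow(Labeled(42, "PUBLIC"), "SECRET")   # fine: public data may flow up
    try:
        flow(key, "PUBLIC")                 # secret data must not reach a public sink
    except PermissionError as e:
        print("rejected:", e)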


Note that you also have to trust the /tools/ that generate the circuits. Nobody's going to check every single gate on the chip against the source code; it would be easy for a VHDL compiler to lay down extra stuff.

Shades of "Reflections on Trusting Trust," but in hardware. Doesn't have a complete replication loop, though, which would have the compromised hardware re-infecting the very VHDL compilers that generated the chip backdoor :-)
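One crude cross-check sometimes suggested for the tool-trust problem: run the same HDL through two independently obtained toolchains (or compare against the netlist that was actually reviewed) and diff the resulting cell inventories, then chase anything unexplained. Legitimate toolchains will differ and a clean diff proves nothing, so this only illustrates the idea rather than defending against a Thompson-style tool; the netlist format, file names, and cell names below are hypothetical.

    # Toy diff of instantiated-cell counts between two structural netlists.
    from collections import Counter
    import re

    CELL_RE = re.compile(r"^\s*(\w+)\s+\w+\s*\(")   # e.g. "NAND2_X1 u42 ( ... );"
    KEYWORDS = {"module", "primitive", "task", "function"}

    def cell_histogram(netlist_path):
        counts = Counter()
        with open(netlist_path) as f:
            for line in f:
                m = CELL_RE.match(line)
                if m and m.group(1) not in KEYWORDS:
                    counts[m.group(1)] += 1
        return counts

    a = cell_histogram("design_toolchain_a.vnet")
    b = cell_histogram("design_toolchain_b.vnet")
    for cell in sorted(set(a) | set(b)):
        if a[cell] != b[cell]:
            print(f"{cell}: {a[cell]} vs {b[cell]}  <- investigate")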


I meant confidence in what came after it. Meaning that backdoored and/or weak software was likely.

Sure we can't go around trusting trust. OTOH most of the compilers in general use see a number of eyeballs. Ditto for the operating systems. I could even see this becoming the case for hardware eventually. An evil system must model the system that relies on it in order to attack that relying system, while remaining functional in general. The longer you make the chain that inserts your nasty code into higher-layer objects, the more complicated, fragile, and discoverable the attack becomes.

I see you skipped the part where I made that point, eh.

That is possible, but it would first require a security breach with access to the completed assembly to know _what traces_ to target. At which point that security breach is slightly more concerning, given it's at a higher level.


I wish that were true, but it's really not. At least not within the public sector; maybe wealthier private firms can afford that level of verification.

Anyway, even then you still need to make trust decisions. How do you verify the ICs in your HDD haven't been tampered with? How do you know the firmware wasn't built with a malicious compiler? Or that a bad actor didn't add a backdoor to the firmware? Realistically there's a lot of components in modern computers that we have no choice but to trust.


Yes. Intel dismissed it at the time, saying that "nobody would ever have untrusted code running on the same hardware on which cryptographic operations are performed".

Still, it would be very hard to make sure that the provided code is indeed the one running on the suspicious machines. The only way I see to make sure of that would be to provide tools to compile and flash the hardware, which doesn't make much business sense. This also gives no protection against silicon-based backdoors that have nothing to do with OS code.

Isn't this just the physical manifestation of "Trusting Trust" - the seminal paper on backdooring compilers?

It might be difficult, but who really inspects their own prints at a 100-micron resolution?

https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7... - there's plenty of HN discussions to be found too.


Computers that are ridiculously hard to hack, and methods for building them, have been available for some time. NSA used to evaluate them for years on end trying to hack them. If they passed the test, they got the TCSEC A1-class label for verified[-as-possible] security. SCOMP/STOP, GEMSOS, Boeing SNS Server, LOCK, and KeyKOS w/ KeySAFE are examples of systems designed with such techniques or certified. I think STOP-based XTS-400 and Boeing SNS Servers have been in deployment for decades now without any reported hacks. They were on the munitions list for export restriction (idk if enforced), some companies sell high-security products to defense only, and they all cost a ton of money if proprietary.

Under newer standards and techniques, examples include INTEGRITY-178B, the Perseus Security Framework (e.g. Turaya Desktop), seL4, the Muen hypervisor, and ProvenCore. Two of those are open source, with one having had pieces released. On the hardware side, the groups developing it often openly describe the designs so that anyone with money can implement them. Some openly release it, like the CHERI CPU w/ FreeBSD. Some commercial products, like CoreGuard, make every instruction safe.

So, it's not a question of whether people will develop and sell this stuff. They've been developing it since computer security was invented back in the 1970's, sometimes with the inventors of INFOSEC themselves developing and evangelizing it. Businesses didn't buy it for a variety of reasons that had nothing to do with its security. Sometimes, especially with companies like Apple or Google, they could clearly afford the hardware or software, it was usable for some to all of their use cases, and they just didn't build or buy it for arbitrary management reasons. Most stuff they do in-house is worse than published, non-patented designs, which is just more ridiculous.

DARPA, NSF, and other US government agencies continue to fund the majority of high-security/reliability tech that gets produced commercially and/or for FOSS. These are different groups than the SIGINT people (i.e. BULLRUN) that want to hack everything. Also, they might be putting one backdoor in the closed ones for themselves while otherwise leaving it ultra-secure against everyone else. That's what I've always figured. Lots of it is OSS/FOSS, though, so that's easier to look at.


Historically, high-assurance security used a mix of commodity and custom hardware. SCOMP had an IO/MMU plus type enforcement at the memory and storage level. Congress mandated use of commercial off-the-shelf hardware, which forced ports to insecure architectures. Aesec's GEMSOS, one of the first security kernels, did some kind of custom firmware when ported to x86. Paul Karger, one of INFOSEC's founders, decided on VMM's for easier security and legacy compatibility, with modifications to PALcode. Many products, like INTEGRITY-178B, targeted PowerPC to get better hardware with cross-selling to aerospace. General Dynamics with NSA modified Intel stuff for the misnamed HAP (Linux + VMware ain't high assurance). Others are doing custom CPU's and firmware designed for security, whereas Joshua Edmison made an attachment that reuses high-performing CPU's.

So, there's a long history in high-assurance security of securing each layer. Mainstream security ignored it, as usual, only recently focusing on that stuff. Many smart folks among them are trying to secure software on backdoored CPU's, while others (e.g. Raptor POWER, Cambridge CHERI) are trying to give us non-backdoored systems. At one point, I knew most of the people working on that latter angle since so few are working on it. Rarely do people fix the root cause instead of applying tactical mitigations.


I'm worried this is going to turn out to be an unsecured TC instance after the article made it sound like the underlying software was compromised.

I'm out of my depth here, but since this conversation seems precise, and is discussing theoretical and practical implementations, does this paper[1] factor in? If so, is it fair to say that at some level, without auditing every aspect of hardware, and every aspect of all software used at build time and runtime, you can't know?

[1] http://cm.bell-labs.com/who/ken/trust.html


Do you think they had a special team for black-box testing of critical processors? I'd be curious to read how they assessed security from vendors.

Yes, there is, since "trusted computing" mostly refers to the idea of locking your running OS down so that attackers can't persist code that alters it. Which makes your clumsy "Bueller" thing all the funnier: it's like you literally can't imagine that hardware should be any more secure than an 80386.

I mean...maybe...so long as the critical software is sufficiently isolated from anything not appropriately vetted. If the untrusted software shares _any_ hardware with your critical system you're setting yourself up for a bad time.

Yes, but trust is delegated to hardware only instead of hardware and software.