No, this line of thinking is basic threat modelling, and it stops us wasting time and effort on navel-gazing when that effort would be better spent on things we can control. An invasive, non-destructive physical attacker also has access to (likely unencrypted) drives, memory buses, HIDs, audiovisual inputs...
'Fixing' this doesn't make your machine any less pwned if you let them touch it.
I don't disagree with other parts of your post, but I still think protecting against the scenario where an attacker has physical access to your computer is basically pointless, especially if it comes with a very significant loss of freedom.
If a malicious person has entered your home or workplace, access to your computer should be low on the list of worries.
Yes, I think that would be a valid way to bypass the protection.
With physical access you can bypass just about any protection, given enough money and time. In a data centre context, the damage you can do is minimised by the rapidly increasing amount of capital and time required to access more of the DC.
The more important change is that without this feature, malware could theoretically install itself into the firmware without requiring physical access. Now it should be just about impossible to break the chain of trust without a person physically tampering with the machine.
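To make the chain-of-trust idea concrete: the usual shape is that an immutable root of trust verifies the next boot stage before handing off, and each stage does the same for the one after it, so firmware tampering gets caught at the next boot. A toy Python sketch (hash comparison standing in for the real signature checks; the stage names and images are made up for illustration):

    import hashlib

    def digest(data):
        return hashlib.sha256(data).hexdigest()

    # Hypothetical stage images; in real firmware these are flash regions.
    BOOTLOADER = b"bootloader-image-v1"
    KERNEL = b"kernel-image-v1"

    # Expected digests anchored in an immutable root of trust (boot ROM / fuses).
    TRUSTED = {"bootloader": digest(BOOTLOADER), "kernel": digest(KERNEL)}

    def verify_and_hand_off(name, image):
        # Refuse to run any stage whose measurement doesn't match.
        if digest(image) != TRUSTED[name]:
            raise RuntimeError(name + " failed verification; halting boot")
        print(name + " verified, handing off")

    verify_and_hand_off("bootloader", BOOTLOADER)
    verify_and_hand_off("kernel", KERNEL)
    # A tampered image breaks the chain:
    try:
        verify_and_hand_off("kernel", b"kernel-image-TAMPERED")
    except RuntimeError as e:
        print(e)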
Note: I should mention that I think this is a massive double-edged sword (maybe "double-edged shield" is a better term). It lets you build a threat model that accounts for everything up to physical access. But it also has enormous potential to be an incredibly anti-consumer feature, and I fear to see how it will be used. I wish they had required a physical switch to enable/disable the feature, though I do understand how adding such a switch could complicate the implementation quite a bit.
> You should not do that since there is no reason to disallow the user from doing what they want.
For desktop computing ("personal computing", if you like), anyone here will agree.
But the article is specifically talking about securing appliances, and when people talk about appliances, they generally mean rackable machines sold to the enterprise. There, nobody cares a jot about being able to muck around with the machine; the whole point of an appliance is that you plug it in and go, and it more or less manages itself.
And for many of these customers, and of course any customer operating in a high-security environment (e.g. defence), this level of security is a feature, not a hindrance.
While I generally agree (even though it's locked doors all the way up for me and there are even easier ways I could compromise machines), it's foolish to disregard remote code execution vulnerabilities just because physical access is not secured.
The idea isn't to solve that problem, but to limit the damage significantly.
I should be able to give a Russian mob hacker on crack access to my machine without worrying too much about them doing anything I don't give them permission to do.
I'm very skeptical of any attempt to secure an already compromised machine. It just adds unnecessary complication for the user and bloat to the software, and a determined attacker is likely to overcome it anyway.
And a thing you can do for machines that have built-in keyboards is refuse to enable new HID devices until the user provides affirmative consent. The people who have reason to care about these attacks have defenses, and research that demonstrates those defenses are incomplete is useful research.
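On Linux the hook for that defense already exists in sysfs: each USB host port exposes an "authorized_default" attribute and each device an "authorized" attribute (this is the mechanism tools like USBGuard build on). A minimal sketch, assuming root and the standard sysfs layout; the input() prompt is just a stand-in for a real consent UI:

    import glob

    SYSFS = "/sys/bus/usb/devices"

    def deny_new_usb_by_default():
        # Newly plugged devices stay disabled until explicitly authorized.
        for path in glob.glob(SYSFS + "/usb*/authorized_default"):
            with open(path, "w") as f:
                f.write("0")

    def authorize(device):
        # 'device' is a port path like "1-1.4"; writing 1 enables it.
        with open(SYSFS + "/" + device + "/authorized", "w") as f:
            f.write("1")

    if __name__ == "__main__":
        deny_new_usb_by_default()
        dev = input("newly attached device (e.g. 1-1.4): ")
        if input("enable " + dev + "? [y/N] ").strip().lower() == "y":
            authorize(dev)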
But even if you leave it in, everything is still protected in hardware, and in addition, malware can't trigger a physical-presence button push... so it is in fact no less secure...
Not when it comes at the expense of security. Perhaps there are contexts where security is not important and this rule does not apply, but it clearly is a problem for CPUs.
The other thing is that when you have hackers in your network, shutting down the things that could be physically dangerous if they accessed them, until you know what you are facing and have identified the compromised machines, doesn't seem unreasonable to me.
If I should disable hyperthreading because of a hypothetical risk from an experimental proof of concept, shouldn't I also throw my PC into a dumpster because I don't know if the microcontroller controlling my LCD isn't also amplifying and Van Eck'ing my desktop to Chinese and Russian superspies waiting in a van outside?
After all, I'm not _SURE_ I know what it's doing.
Perhaps you don't particularly care about that risk, or you don't feel it is plausible enough to warrant the loss of your computer, which is fair, but the risk is there nonetheless.
Why would a l33t haxxor waste their time on an esoteric and academic attack when they can just get their victim to click on something?
What's riskier? Systems that are never patched, with plenty of potential attack surface left, or systems that are uncontrollably patched, with plenty of potential remote and MITM attack surface left? I can't decide, but in both cases removing remote access removes the problem. My car and toaster wouldn't be much more useful with Internet access.
Man I dunno. This sounds right and all, but after years of seeing security issues that don't seem to have anything to do with unnecessary attack surface, I have to say this just seems unrealistic to me. The problem is that hardly any software runs on a machine without an internet connection these days, and you can't control the attack surface of the other software on the machine.