
This is the dark side of risk-based security, and it often manifests. The code gets audited over a day or two and items get ranked by severity. The “minor” bugs are fixed much later, if ever, and that's considered OK because they are low severity. But if you pull on the threads of minor issues, there are usually deeper issues at play. Further, bugs might be minor if considered independently, but can be chained together to perform severe exploits. Given the resources, I think the best approach is to just aggressively eliminate all known undefined behavior in an application.
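
To make the "minor bugs chain into severe exploits" point concrete, here is a minimal, hypothetical C sketch (names and numbers are invented, not taken from any project in this thread): a signed-integer overflow, which is undefined behavior and looks like a harmless accounting bug on its own, silently defeats a bounds check and becomes a heap overflow. Building with -fsanitize=undefined,address would flag both halves of the chain.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper; assume 'len' comes from untrusted input. */
    char *copy_record(const char *data, int len) {
        int total = len + 16;            /* "minor" bug: signed overflow (UB) when len is huge   */
        if (total > 4096)                /* check is defeated because total wrapped negative     */
            return NULL;
        char *buf = malloc(4096);
        if (buf == NULL)
            return NULL;
        memcpy(buf, data, (size_t)len);  /* severe bug: heap overflow once the check is bypassed */
        return buf;
    }

Neither line looks alarming in isolation, which is exactly how bug-by-bug severity rankings end up underestimating the combined impact.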



While I am aware of different levels of severity and the need for prioritising security bugs according to their severity, I wasn't aware of this kind of security model for LTS software where not all security bugs are fixed - it seems quite illogical to me (unless, of course, you are making money off it as a service).

I'm reminded of ESR's quip "given enough eyeballs, all bugs are shallow." And that's often true for projects that have obvious functionality and for which you're not worried about cross-cutting concerns like security or safety. I just remember a decade of working with federal contractors, trying to disabuse them of the idea that they could just grab some random code off the internet and assume it was coded well enough to avoid simple, impactful vulnerabilities.

I’ve also encountered that a few times where a fairly anodyne bug in a codepath prevents a serious security bug from being reachable. With my attacker hat on it is very tempting to just report the first one…

I think you have a very limited understanding of the state of software security in general if you think that is true.

Bugs far worse than this are discovered on a weekly basis. This one is just easy to exploit so it’s been hyped up by people who would normally ignore slightly more technical descriptions of much worse vulnerabilities.


I think he meant scanning the source code for security issues and then reporting those bugs one by one ...

The problem is: what defines a security issue? A developer (more often than not) doesn't know whether a fixed bug could have led to a security issue.

I don't see how a project maturing means there should be no exploitable bugs. Even Java still has serious bugs on a yearly basis... The only secure systems are the ones nobody uses.

Security flaws are bugs. They should be prioritised along with the rest.

The excitement comes when security issues are discovered in no-longer-maintained code.

Flaws are found regularly in security-centric software. Perhaps the low-level libs are where the improvements have come from, however.

Something I've been mulling over for a while: security vulnerabilities are basically the original developers getting outsmarted, caught out being careless. Even a very skilled, careful team might ship bugs that have security implications; low-skilled, careless teams definitely are. All buggy software is also vulnerable software; there is no such thing as low-quality but secure.

Yeah, some of these (e.g. PHP) have dozens of minor/bug-fix releases which then just suddenly stop, which shows that you're vulnerable all the time... either because the release isn't "mature" yet or because it's already EOL :)

Yeah, but that's exactly the problem. While the maintainer waits for that one real-world use case, others are hoarding the vuln or using it stealthily enough to not raise alarms.

The problem itself was interesting, but a compromised compiler is absolutely a nightmare. It took that long to figure out even with blatant evidence of something fishy. How long does it persist when the vulnerability is subtle?

It wasn't a simple code review, it was a vulnerability that existed in code unnoticed for a number of years. It required skilled security researchers to unearth it. Vulnerabilities exist unnoticed in a number of foundational OS projects like this, and it's only when a CVE is released that people realize it had been there for quite some time.

I agree that it's good that the vulnerability has a transparent framework-level fix, and I myself would rather see 2-3 more framework-level bugs than a whole new bug class like mass assignment. Bug classes are usually worse than bugs.

But that is not what people mean when they say this isn't a severe bug. They mean, "I read some article where some guy said you needed the right HMAC key on a cookie to exploit the bug", and I think that article is wrong, and thus the assertion about severity is wrong.

It is a severe bug with an easy fix. Unfortunately I think there are some other bugs orbiting around it that don't yet have fixes.
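
For readers unfamiliar with why "needing the right HMAC key" would even enter the severity argument: frameworks that sign session cookies verify an HMAC over the cookie body before trusting it, so forging a payload is supposed to require the server's secret. Here is a rough C sketch of that general mechanism using OpenSSL; it is illustrative only and is not the code of the framework under discussion.

    #include <stddef.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* Returns 1 if 'sig' (32 bytes) is a valid HMAC-SHA256 of 'cookie'
       under 'key', 0 otherwise. Sketch of a generic signed-cookie check. */
    int cookie_signature_valid(const unsigned char *key, size_t key_len,
                               const unsigned char *cookie, size_t cookie_len,
                               const unsigned char *sig)
    {
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        if (HMAC(EVP_sha256(), key, (int)key_len,
                 cookie, cookie_len, mac, &mac_len) == NULL)
            return 0;

        /* constant-time comparison to avoid a timing side channel */
        return mac_len == 32 && CRYPTO_memcmp(mac, sig, 32) == 0;
    }

If, as the parent argues, the bug is reachable without knowing the key, then this mitigation simply does not apply and the severity claim built on it falls apart.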


It is one of the greatest failings of our industry that we are apparently incapable of constructing programs that are in any sense "complete": they require vast amounts of upkeep just to ensure their most basic possible property, namely that their semantics are defined completely by their source code and not by their inputs (which is not the case for programs containing remote code execution or escaping bugs like SQL injection, XSS, etc.). This constant upkeep not only has an enormous cost in terms of time spent modifying these programs to move them asymptotically closer to a version with clearly defined semantics, but (unlike other classes of bugs, which involve incorrect behavior but have no implications for security) it also produces an air of mistrust and a general malaise and lack of confidence in all software.

One direct implication, for programs which must be constantly updated to fix their security flaws, is that viably forking large projects becomes impossible without resources similar or proportional to those of the original project. The only semblance of trust and security you can obtain in using such software comes from its popularity and reputation: these generally mean a large amount of resources can be spent auditing and testing ("many eyes make all bugs shallow"), and a large userbase means that any bugs found will be more valuable and more difficult to find, and thus less likely to be used against people who are not profitable to hack, which is most people. Without those resources, having confidence in a fork becomes much more difficult: you have no assurance that bugs found in the original project will be fixed in the fork, and bugs found in the fork may be easier to find (less thorough testing, fewer resources) and so worth less, and so have a higher chance of ending up in the hands of low-level skiddies who might use them even less responsibly than, e.g., the NSA (who has never directly installed ransomware on anyone's computer, that I know of). And none of this mentions the effect this culture of fear and mistrust (and its basis in reality) has on computer users who truly are valuable targets, and who must use computers to do their jobs knowing full well that those computers are a mire of both intentional and unintentional security vulnerabilities, and that somewhere out there exists a string of exploits that can be used to exfiltrate their IP address and send them straight to the gulags.

Another cost of this unfortunate facet of computer programming is that for programs which cannot be directly updated (for example, their source code is not available), modifying them so that they are closer to a version with clearly-defined semantics is extremely difficult, involves specialized skills which are not possessed by most programmers, and generally just doesn't happen very often. Although one could also make this an argument in favor of source code availability, in any case this is a very common occurrence.

So, I don't think at all that the problem is this culture of foolishly thinking that we actually know how to write programs - the problem is that we can't. The solution then, at least in the long term, is learning how to write complete programs that do not require constant adjustments in order to ensure clearly defined semantics, not to just figure out how to stay afloat better in this horrifying quagmire of shifting quicksand that we've built the foundations of computer software on.
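
As a concrete illustration of the "semantics defined completely by source code and not inputs" point above, here is a minimal sketch using SQLite's C API (chosen only as a familiar example; the comment names SQL injection generically). With string concatenation the attacker co-authors the SQL program; with a prepared statement the program text is fixed in the source and the input is only ever data.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Vulnerable: user_name becomes part of the SQL program itself. */
    int find_user_unsafe(sqlite3 *db, const char *user_name) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT id FROM users WHERE name = '%s';", user_name);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);   /* injectable */
    }

    /* Safer: the SQL text is a constant; the input cannot change it. */
    int find_user_safe(sqlite3 *db, const char *user_name) {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, user_name, -1, SQLITE_TRANSIENT);
        rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc;
    }

The second form is one of the few places where we do know how to keep a program's meaning independent of its inputs, which is the property the comment above is asking for everywhere.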


From what I've seen, most security bugs chain a few normal bugs together. One oversight here, another there, combine them and you have an exploit.

The valid point I think Linus is getting at (perhaps I'm reading too much into it) is that if you solve normal, boring bugs, you prevent the much-trumpeted security bugs from ever coming out. By heaping praise on the security team and ignoring the careful programmer who writes good code and eliminates bugs before they exist, Linus sees the community move towards a crisis-solving mode rather than a crisis-prevention mode. His view is that it's far better to focus on writing good, solid, robustly tested code than to hyper-focus on security at the cost of everything else. This is not to dismiss the importance of security; instead, it is a fundamental shift of focus -- thus the example of OpenBSD, where Linus sees that security paranoia has subsumed everything else.


Sweeping things under the rug is a common approach. It seems that in this instance they have followed Linus Torvalds' mantra; he once said: "I don't have any reason what-so-ever to think it's a good idea to track security bugs and announce them as something special. I don't think some spectacular security hole should be glorified or cared about as being any more special than a random spectacular crash due to bad locking."