
While I am aware of different levels of severity and the need for prioritising security bugs according to their severity, I wasn't aware of this kind of security model for LTS software where not all security bugs are fixed - it seems quite illogical to me (unless, of course, you are making money off it as a service).



This is the dark side of risk-based security, which often manifests. The code gets audited over a day or two and items get ranked by severity. The "minor" bugs are fixed much later, if ever, and it's OK because they are low severity. But if you pull on the threads of minor issues, there are usually deeper issues at play. Further, bugs might be minor if considered independently, but can be chained together to perform severe exploits. Given the resources, I think the best approach is to just aggressively eliminate all known undefined behavior in an application.
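
To make that concrete, here is a minimal hypothetical sketch (made-up file and function names, not something from the thread) of the kind of "low severity" undefined behavior an audit tends to deprioritize but a sanitizer build will surface:

```c
/* ub_demo.c -- hypothetical example: an overflow check written the
 * "intuitive" way.  The addition itself is signed overflow, i.e. undefined
 * behavior, so the compiler is allowed to assume it never wraps and may
 * delete the check entirely at higher optimization levels. */
#include <stdio.h>
#include <limits.h>

int add_checked(int a, int b) {
    if (a + b < a) {    /* UB has already happened by the time this compares */
        return -1;
    }
    return a + b;
}

int main(void) {
    printf("%d\n", add_checked(INT_MAX, 1));
    return 0;
}
```

Built with something like `cc -fsanitize=undefined ub_demo.c`, UBSan reports the signed overflow at runtime. Systematically removing every instance it finds, rather than triaging each one by perceived severity, is the approach argued for above.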

This is what I'm talking about. You, a security enthusiast, are only interested in the technical security details. You respond to a comment about tradeoffs with details on the bug.

I don't care about why the bug happened or how easy it is/isn't to fix. I care about whether the existence of the bug is something I should be so concerned about as to not use the software. In order to gauge that, I need a little more info about the threat level.


Security flaws are bugs. They should be prioritised along with the rest.

It mostly falls out of Linus's position that security bugs are just normal bugs, and the implications of that position for the investment put into fixing them in a timely manner, or preventing them from happening in the first place. A little more concretely, off the top of my head:

* There is not much interest in proactive security hardening of the kind done by grsec, PaX, etc. This is not to say that they should just pull grsec or PaX upstream wholesale, but that there's not much interest in solving the problems that lead to those things existing out-of-tree in the first place.

* There is no process for security review of new features. While things are casually reviewed, it's assumed that it's okay to go ahead and ship some new code, and new security holes can be fixed in the future just like functionality bugs can be fixed in the future.

* Stable point releases include everything from security fixes to added support for new hardware, and there is intentionally no distinction made between them. In many cases, there's an active attempt to obfuscate what fixes are security-related, on the grounds that they're all "normal" stable patches the way that security bugs are "normal bugs". This means that users (both people who use the stable kernels directly, and people who make distribution kernels) need to either diverge from the upstream stable point releases, or test thoroughly before shipping/deploying, which increases update latency significantly. Point releases cannot be used directly as urgent security updates.

* Not all security fixes are backported to stable point releases in the first place, because the development culture encourages thinking of them as "normal bugs", and not all normal bugs are obviously worth backporting fixes for (otherwise why have a separate branch). It's entirely possible to send a security patch to the kernel, not say it's security-related (because you're asked not to), and have it fail to go to stable. It's entirely possible for a patch to be quietly dropped because it's too much effort to backport -- again, the lack of difference from "normal bugs" means that there's no guarantee that security bug fixes will get backported.

* Aggressive commitment to an unstable kernel API, let alone ABI, means that stable point releases require rebuilding third-party modules, and may break them, and that there's absolutely no way for a good portion of users to even be on the latest stable point release. RHEL has its massive, awful backport series, where the kernel in current, supported RHEL releases does not resemble either the original branch point or the current upstream very well, because their customers are paying them for reliably being able to upgrade to RHEL-released kernels.

* As a result of the last two bullet points, there's a lot of divergence between the upstream kernel and what people actually run, which is what Linus is getting at in these posts when he says, correctly, that him identifying security bugs would be more helpful to attackers than to legitimate users. But there's no interest in making that not be the case in the first place.

I have a bunch of more minor complaints (the number of security misfeatures in the kernel like /dev/random and Linux "capabilities", which indicate the lack of a healthy security development culture, and the aggressive / personality-based management style, which may be conducive to certain development goals but cannot be conducive to security).

To be fair, this is a completely valid philosophy for a project to have. There are good reasons to decide that security is just not that important, compared to other things they could be doing. Most of this is being very intentional about letting security be a low priority. For instance, the kernel folks have very good reasons for not having a stable kernel API or ABI, because they don't want people using out-of-tree code, in pursuit of certain engineering goals. They understand there's a tradeoff between that and other engineering goals, like getting more users to run a current release, and they're fine with that.

(Whether it's a valid philosophy to choose to use such a project for secure systems is a different question, but this is just about their internal development philosophy.)

Firefox, on the other hand, is a project that very much wants people to be on the latest release all the time, and favoring security is one of the inputs to them in making that tradeoff. Firefox is a project where they can go from announcing that they've found a security bug to the patch being deployed on most users' machines within hours, and they've optimized for making that be the case. At that point they can handle security bugs differently from normal bugs, and expect that to be a net positive for users. They do have problems about stability of internal APIs (used by add-ons), but they're much more cautious about it, and there's no viable business in "You'll probably get security fixes in a timely fashion, but your add-ons will definitely still work". Firefox cares about architecting things to be secure by design, and they run their own enterprise release channel with the intention that even conservative deployments will use the binaries from that channel directly.


The problem is: what defines a security issue? A developer (more often than not) doesn't know whether a fixed bug could have led to a security issue.

Wait what? You think security fixes are a sign that software was built 'wrong'? Every piece of software has security bugs - it's the ones that never have any security fixes that I would be scared of.

From what I've seen, most security bugs chain a few normal bugs together. One oversight here, another there, combine them and you have an exploit.
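
As a hypothetical illustration of that chaining (made-up code, not a real CVE): an unsigned size calculation that wraps and a copy loop that trusts the element count are each easy to wave off in review, but together they add up to a heap overflow.

```c
/* chain_demo.c -- hypothetical sketch of two "minor" bugs chaining into a
 * severe one.
 *   Bug 1: the 32-bit size computation wraps for large counts, so the
 *          allocation is far smaller than intended.
 *   Bug 2: the copy loop trusts `count`, not the allocated size, so it
 *          writes past the end of the short buffer (heap overflow). */
#include <stdlib.h>
#include <stdint.h>

struct item { uint32_t id; uint32_t flags; };

/* `count` is assumed to come from untrusted input, e.g. a file header. */
struct item *load_items(const struct item *src, uint32_t count) {
    uint32_t nbytes = count * (uint32_t)sizeof(struct item); /* Bug 1: wraps once count >= 0x20000000 */
    struct item *items = malloc(nbytes);
    if (items == NULL)
        return NULL;
    for (uint32_t i = 0; i < count; i++)
        items[i] = src[i];              /* Bug 2: overruns the short allocation */
    return items;
}
```

Reviewed in isolation, Bug 1 is "just" an arithmetic wrap and Bug 2 is "just" a redundant-looking loop bound; chained, they hand an attacker a controlled heap overflow.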

The valid point I think Linus is getting at (perhaps I'm reading too much into it) is that if you solve normal, boring bugs, you prevent the much-trumpeted security bugs from ever coming out. By heaping praise on the security team and ignoring the careful programmer who writes good code and eliminates bugs before they exist, Linus sees the community move towards a crisis-solving mode rather than a crisis-prevention mode. His view is that it's far better to focus on writing good, solid code that tests robustly rather than hyper-focusing on security at the cost of everything else. This is not to dismiss the importance of security; instead, it is a fundamental shift in focus -- thus the example of OpenBSD, where Linus sees that security paranoia has subsumed everything else.


My (uneducated about security) guess is that it is because most bugs are in code written in a language by a programmer, not runtime bugs or bugs in the language itself. And that when it comes to languages, how easy it is to shoot yourself in the proverbial foot could (should?) be considered as important as (or, depending on your POV, more important than) how secure the runtime is.

I agree that it's good that the vulnerability has a transparent framework level fix, and I myself would rather see 2-3 more framework level bugs than a whole new bug class like mass assignment. Bug classes are usually worse than bugs.

But that is not what people mean when they say this isn't a severe bug. They mean, "I read some article where some guy said you needed the right HMAC key on a cookie to exploit the bug", and I think that article is wrong, and thus the assertion about severity is wrong.

It is a severe bug with an easy fix. Unfortunately I think there are some other bugs orbiting around it that don't yet have fixes.


Security issues aren't introduced intentionally; oftentimes they are found much later on, in code that was assumed to be secure. Like the OpenSSL Heartbleed vulnerability. Once a vulnerability like that is discovered, you _want_ every developer to update their deps to the most secure version.
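
For reference, the Heartbleed class of bug came down to trusting an attacker-supplied length field when echoing a buffer back. A simplified, hypothetical sketch (not the actual OpenSSL code, and the request layout here is assumed):

```c
/* heartbeat_demo.c -- simplified Heartbleed-style flaw: the handler trusts
 * the length field inside the request instead of the number of bytes that
 * were actually received, so the reply echoes back adjacent memory
 * (keys, session cookies, passwords) to the requester. */
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Assumed request layout: [2-byte claimed payload length][payload bytes...] */
size_t build_heartbeat_reply(const uint8_t *req, size_t req_len,
                             uint8_t *reply, size_t reply_cap) {
    if (req_len < 2)
        return 0;
    size_t claimed = ((size_t)req[0] << 8) | req[1];
    if (claimed > reply_cap)
        return 0;
    /* BUG: nothing checks that claimed <= req_len - 2, so this memcpy reads
     * past the end of the received request into whatever data sits next to it. */
    memcpy(reply, req + 2, claimed);
    return claimed;
}
```

The fix is conceptually a one-line check, discarding the request when the claimed payload length exceeds what was actually received, which is why everyone downstream simply needed to update to the patched version.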

Something I've been mulling over for a while: security vulnerabilities are basically the original developers getting outsmarted, caught out being careless. Even a very skilled, careful team might ship bugs that have security implications. But low-skilled, careless teams are definitely shipping them. All buggy software is also vulnerable software. There is no such thing as low-quality but secure.

Flaws are found regularly in security-centric software. Perhaps the low-level libs provided the improvements, however.

I find this research to be both ethical and necessary, and I think kernel devs decrying it as wasting their time are missing the point. They demonstrated that the whole "with enough eyes all bugs are shallow" security model doesn't really work. If some of the best devs out there fall for bad patches, it can happen to any OSS project (and, for that matter, to proprietary code bases too, but there you at least have to get through the hurdle of being hired before your code is considered).

Well, I didn't want to reproduce the entire thread. The basic summary is that Linus feels bugs that are security threats should not be pointed out as such; rather, that information should be kept private, even though the fix is public.

Because life is short. If you want security patches, find a vendor to help filter the bad bugs from the big bugs for you.

In the meantime, hackers see the fix, recognize the security risk, and build exploits long before your vendors even recognize it as a potential security risk (probably only after it is exploited in the wild would they understand it as such). Good luck with that.


> At CoreOS Fest, Greg Kroah-Hartman, maintainer of the Linux kernel, declares that almost all bugs can be security issues.

That sounds like the opposite of Linus' thinking, which is "security bugs are no worse than any other type of bug."

They may sound similar, but one implies that most bugs can be dangerous, while the other implies that security bugs are not dangerous.


I’m not convinced that, if I found a bug, I’d notice all the security implications of fixing it. Occasionally yes, but I wonder how many people have closed back doors just by fixing robustness issues, without appreciating how big a bug they had found.

A bug in their software would be forgivable. This article pointed out both an extremely poor design decision (lots of unnecessary code in the kernel) and a serious organizational problem (not doing vulnerability management). These are especially bad considering that they are supposed to be a security company.

In both cases, one bad example means it's likely there are many more still undiscovered.


Sorry if I gave that impression. It was not my intention.

> A crash is a bug but not a security problem.

I think that all bugs, the ones that produce crashes and the ones with security implications, should be treated equally. A bug is a bug, whether it has security implications or not.

To me, the article gives the impression that a system crash is not a security problem, because a Rust program will "terminate in a controlled fashion, preventing any illegal access". But one can, for example, fingerprint a system by forcing it to crash.
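
For what it's worth, here is a minimal hypothetical C sketch (the article is about Rust, but the point is language-agnostic) of why an attacker-triggerable crash is itself a security problem:

```c
/* crash_demo.c -- hypothetical sketch: a "mere" crash that still matters.
 * A packet without a ':' makes find_arg() return NULL, the handler
 * dereferences it, and the process dies -- a remotely triggerable denial
 * of service -- while the distinctive crash behavior can also help an
 * attacker fingerprint which build is running. */
#include <stdio.h>
#include <string.h>

static const char *find_arg(const char *pkt) {
    return strchr(pkt, ':');            /* NULL when no ':' is present */
}

void handle_packet(const char *pkt) {
    const char *arg = find_arg(pkt);
    /* BUG: arg is NULL for malformed input; arg + 1 is then dereferenced
     * inside printf and the process crashes. */
    printf("arg=%s\n", arg + 1);
}
```

A Rust port would panic instead of corrupting memory, which is genuinely better, but from the operator's point of view the service still goes down on attacker-controlled input.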

And of course, nobody expects that Rust will prevent bugs from happening, but at the same time I don't get the fixation on drawing a distinction between security bugs and ordinary bugs.

"security problems are just bugs" - Linus Torvalds. (http://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html)

edit: Linus reference.


I don't see why, after a project matures, there should be no exploitable bugs. Even Java still has serious bugs on a yearly basis... The only secure systems are the ones nobody uses.