It mostly falls out of Linus's position that security bugs are just normal bugs, and what that implies about the investment put into fixing them in a timely manner or preventing them from happening in the first place. A little more concretely, off the top of my head:
* There is not much interest in proactive security-hardening of the like taken by grsec, PaX, etc. This is not to say that they should just pull grsec or PaX upstream wholesale, but that there's not much interest in solving the problems that lead to those things existing out-of-tree in the first place.
* There is no process for security review of new features. While things are casually reviewed, it's assumed that it's okay to go ahead and ship some new code, and new security holes can be fixed in the future just like functionality bugs can be fixed in the future.
* Stable point releases include everything from security fixes to added support for new hardware, and there is intentionally no distinction made between them. In many cases, there's an active attempt to obfuscate what fixes are security-related, on the grounds that they're all "normal" stable patches the way that security bugs are "normal bugs". This means that users (both people who use the stable kernels directly, and people who make distribution kernels) need to either diverge from the upstream stable point releases, or test thoroughly before shipping/deploying, which increases update latency significantly. Point releases cannot be used directly as urgent security updates.
* Not all security fixes are backported to stable point releases in the first place, because the development culture encourages thinking of them as "normal bugs", and not all normal bugs are obviously worth backporting fixes for (otherwise why have a separate branch). It's entirely possible to send a security patch to the kernel, not say it's security-related (because you're asked not to), and have it fail to go to stable. It's entirely possible for a patch to be quietly dropped because it's too much effort to backport -- again, the lack of difference from "normal bugs" means that there's no guarantee that security bug fixes will get backported.
* Aggressive commitment to an unstable kernel API, let alone ABI, means that stable point releases require rebuilding third-party modules, and may break them, and that there's absolutely no way for a good portion of users to even be on the latest stable point release. RHEL has its massive, awful backport series, where the kernel in current, supported RHEL releases does not resemble either the original branch point or the current upstream very well, because their customers are paying them for the ability to reliably upgrade to RHEL-released kernels.
* As a result of the last two bullet points, there's a lot of divergence between the upstream kernel and what people actually run, which is what Linus is getting at in these posts, when he says, correctly, that him identifying security bugs would be more helpful to attackers than legitimate users. But there's no interest in making that not be the case in the first place.
I have a bunch of more minor complaints (the number of security misfeatures in the kernel like /dev/random and Linux "capabilities", which indicate the lack of a healthy security development culture, and the aggressive / personality-based management style, which may be conducive to certain development goals but cannot be conducive to security).
To be fair, this is a completely valid philosophy for a project to have. There are good reasons to decide that security is just not that important compared to other things they could be doing. Most of this is a very intentional decision to let security be a low priority. For instance, the kernel folks have very good reasons, in pursuit of certain engineering goals, for not having a stable kernel API or ABI: they don't want people using out-of-tree code. They understand there's a tradeoff between that and other engineering goals, like getting more users to run a current release, and they're fine with that.
(Whether it's a valid philosophy to choose to use such a project for secure systems is a different question, but this is just about their internal development philosophy.)
Firefox, on the other hand, is a project that very much wants people to be on the latest release all the time, and security is one of the inputs when they make that tradeoff. Firefox is a project where they can go from announcing that they've found a security bug to the patch being deployed on most users' machines within hours, and they've optimized for making that the case. At that point they can handle security bugs differently from normal bugs and expect that to be a net positive for users. They do have problems with the stability of internal APIs (used by add-ons), but they're much more cautious about it, and there's no viable business in "You'll probably get security fixes in a timely fashion, but your add-ons will definitely still work". Firefox cares about architecting things to be secure by design, and they run their own enterprise release channel with the intention that even conservative deployments will use the binaries from that channel directly.
Well, I didn't want to reproduce the entire thread. The basic summary is that Linus feels bugs that pose a security threat should not be labeled as such; rather, that information should be kept private, even though the fix itself is public.
Because life is short. If you want security patches, find a vendor to help filter the bad bugs from the big bugs for you.
In the meantime, hackers see the fix, recognize the security risk, and build exploits long before your vendors even recognize it as a potential security risk (probably only after it is exploited in the wild would they understand it as such). Good luck with that.
From what I've seen, most security bugs chain a few normal bugs together. One oversight here, another there, combine them and you have an exploit.
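To make that concrete, here's a contrived userspace sketch (not a real kernel bug, just an illustration of the pattern): an unchecked size calculation and a missing bounds check, each a "boring" oversight on its own, that combine into a heap overflow.

```c
#include <stdlib.h>
#include <string.h>

struct msg {
    size_t len;
    char   data[];   /* flexible array member */
};

struct msg *read_msg(const char *src, size_t n)
{
    /* Oversight #1: for very large n, n + sizeof(struct msg) wraps
     * around, so malloc() returns a much smaller buffer than intended. */
    struct msg *m = malloc(n + sizeof(struct msg));
    if (!m)
        return NULL;

    m->len = n;

    /* Oversight #2: n is never sanity-checked against the allocation,
     * so this memcpy() writes far past the end of the undersized buffer.
     * Either bug alone looks like a robustness issue; together they're
     * an exploitable overflow. */
    memcpy(m->data, src, n);
    return m;
}
```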
The valid point I think Linus is getting at (perhaps I'm reading too much into it) is that if you solve normal, boring bugs, you prevent the much-trumpeted security bugs from ever coming out. By heaping praise on the security team and ignoring the careful programmer who writes good code and eliminates bugs before they exist, Linus sees the community moving towards a crisis-solving mode rather than a crisis-prevention mode. His view is that it's far better to focus on writing good, solid code that is robustly tested rather than hyper-focusing on security at the cost of everything else. This is not to dismiss the importance of security; instead, it is a fundamental shift in focus -- thus the example of OpenBSD, where Linus sees security paranoia as having subsumed everything else.
> So I personally consider security bugs to be just "normal bugs". I don't cover them up, but I also don't have any reason what-so-ever to think it's a good idea to track them and announce them as something special.
This is a constant issue: what is the safest course of action? You start by patching your own shit first, then push it upstream, and then slowly broaden the disclosure circle. Large software vendors (RedHat, IBM, etc.) and infrastructure operators (Amazon, Cloudflare, etc.) get advance notice. Only then is it distributed to the public.
This is the de-facto route for closed-source software, but Linus doesn't mark all security patches as such [0]. There are a lot of unpatched systems out there and Linus doesn't have the time to parse the exact ramifications of every security bug.
That being said, it's pretty clear that we need to invest in more robust software development methods.
Sweeping-under-the-rug is a common approach. It seems that in this instance, they have followed the mantra of Linus Torvalds, who once said: "I don't have any reason what-so-ever to think it's a good idea to track security bugs and announce them as something special. I don't think some spectacular security hole should be glorified or cared about as being any more special than a random spectacular crash due to bad locking."
Aye, too many people have this defeatist attitude that since perfect security will never be possible, the only valid solution is reactive security (bug-patch cycles). Dependence on patching is considered too entrenched to allow changes like replacing ambient authority with capabilities, using failure-oblivious computing [1] to redirect invalid reads and writes, using separation kernels, information flow control, proper MLS [2], program shepherding for origin and control-flow monitoring [3], and general fault tolerance/self-healing [4].
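For anyone unfamiliar with the ambient-authority vs. capability distinction, here's a rough userspace approximation using POSIX openat(); real capability systems (Capsicum, seL4, and the like) go much further, so treat this as a sketch of the idea rather than of any particular system.

```c
#include <fcntl.h>
#include <unistd.h>

/* Ambient authority: any code running in the process can name any path
 * the process's credentials allow, so a bug anywhere in the program can
 * reach anything the process can. */
int ambient_read(void)
{
    return open("/etc/passwd", O_RDONLY);
}

/* Capability style: the function can only reach objects it was explicitly
 * handed -- here, a directory file descriptor -- so what a compromised
 * component can touch is bounded by the capabilities passed to it. */
int capability_read(int dir_cap)
{
    return openat(dir_cap, "config.txt", O_RDONLY);
}
```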
I used to look up to Linus Torvalds as many did, but am increasingly beginning to see him as a threat to the advancement of the industry with his faux pragmatism that has led him to speak out against everything from security to microkernels and kernel debuggers.
> For example, at some point, Debian updated the fontconfig package by backporting an upstream fix for a memory leak. Unfortunately, the fix contained a bug that would crash Firefox and possibly other software too. We spotted the new crash only six days after the change landed in Debian sources and only a couple of weeks afterwards the issue had been fixed both upstream and in Debian. We sent reports and fixes to other projects too including Mesa, GTK, glib, PCSC, SQLite and more.
That sounds a lot like Linus' "many eyes make all bugs shallow" idea working as intended.
This does seem like the pretty obvious and inevitable outcome. Determining whether a bug fix has security implications, and if so how severe they are, is often very difficult; frequently all you can confidently say is that you don't know how to exploit it. Downstream consumers have tried to pressure the kernel maintainers into deciding which fixes need to be backported by demanding CVEs, but now they're running into the problem that they didn't actually ask for the thing they really wanted, since they knew that request would be (and has been) denied.
Wait what? You think security fixes are a sign that software was built 'wrong'? Every piece of software has security bugs - it's the ones that never have any security fixes that I would be scared of.
While I am aware of the different levels of severity and the need to prioritise security bugs accordingly, I wasn't aware of this kind of security model for LTS software, where not all security bugs are fixed -- it seems quite illogical to me (unless, of course, you are making money off it as a service).
No one in that thread is recommending the Linux devs take the monolithic GRSecurity patches wholesale. If you read the originally linked thread, Daniel explains why it can't be accepted this way, nor does he propose that it should be.
Rather, attempts to submit it in smaller patches have been met with disinterest. That, combined with the appearance that security in general is being sidelined by the core developers, has created a large disincentive for developers interested in getting GRSecurity upstreamed from even trying (again).
> You think security fixes are a sign that software was built 'wrong'?
Of course. If it needs a fix, it was built wrong. We've become too accepting of low-security software. There's no excuse for this in embedded devices that don't do much.
Well Xen for instance includes a reference to the relevant security advisory; either "This is XSA-nnn" or "This is part of XSA-nnn".
> If the maintainers start to label commits with "security patch" the logical step is that it doesn't require immediate action when the label is not there. Never mind that the bug might actually be exploitable but undiscovered by white hats. If you do not want to rush to patch more than you have to, use a LTS kernel and know that updates matter and should be applied asap regardless of the reason for the patch.
So reading between the lines, there are two general approaches one might take:
1. Take the most recent release, and then only security fixes; perhaps only security fixes which are relevant to you.
2. Take all backported fixes, regardless of whether they're relevant to you.
Both Xen and Linux actually recommend #2: when we issue a security advisory, we recommend people build from the most recent stable tip. That's the combination of patches which has actually gotten the most testing; using something else introduces the risk that there are subtle dependencies between the patches that haven't been identified. Additionally, as you say, there's a risk that some bug has been fixed whose security implications have been missed.
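As a hypothetical illustration of such a dependency (made up for this comment, not drawn from the real kernel), imagine two backports where the later one is only correct on top of the earlier one:

```c
#include <stdatomic.h>

/* Patch A (earlier fix): convert the refcount from a plain int to an
 * atomic, and make put_ref() use an atomic decrement. */
static atomic_int refcount = 1;

static void put_ref(void)
{
    atomic_fetch_sub(&refcount, 1);
}

/* Patch B (later fix): add an error path that also drops the reference.
 * It was written against a tree that already contains Patch A. Cherry-pick
 * B onto a branch without A and put_ref() is still a plain, racy
 * "refcount--", so the backported "fix" quietly reintroduces a race that
 * neither patch exhibits on its own. */
void handle_error(void)
{
    put_ref();
}
```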
Nonetheless, that approach has its downsides. Every time you change anything, you risk breaking something. In Linux in particular, many patches are chosen for backport by a neural network, without any human intervention whatsoever. Several times I've updated to a point release of Linux only to discover that some backport actually broke some other feature I was using.
In Xen's case, we give downstreams the information to make the decision themselves: if companies feel the risk of additional churn is higher than the risk of missing potential fixes, we give them the tools to do so. Linux more or less forces you to take the second approach.
Then again, Linux's development velocity is way higher; from a practical perspective it may not be possible to catch the security angle of enough commits; so forcing downstreams to update may be the only reasonable solution.
Aren't a huge percentage of software bugs potential security vulnerabilities by this standard? I understand the wisdom of trying to get patches pushed out before publicly disclosing major exploits in mission critical software, but it seems unreasonable to expect users to not make statements of the form "program X crashes when I do such-and-such" in public discourse.
One big difference is explained in the article. The fact that code going into the kernel is not as secure as we might hope is already known to the open source community. Maintainers are overworked, and none would be surprised if you told them it would be possible to smuggle in backdoors. This is not a "bug" but an issue of time and resources, and the fact that the researchers attempted to add bugs to demonstrate it just makes it worse.
On the other hand, security researchers are finding vulnerabilities that weren't previously known. They've discovered specific exploitable bugs, rather than introducing new ones. Following disclosure, the company can patch the vulnerabilities and users will be safer. Which makes that a laudable thing to do.