
A: They don't. https://lkml.org/lkml/2008/7/15/705

> IOW, when we fix security issues, it's simply not even appropriate or relevant to you.

- Linus




Include full quotes of relevant information - because context is important.

The very next sentence following your quote:

> More importantly, when we fix them, your vendor probably won't have the fix for at least another week or two in most cases anyway.

... ...

> We'd basically be announcing a bug that (a) may not be relevant to you, but (b) _if_ it is relevant to you, you almost certainly won't actually have fixed packages until a week or two later available to you!


Well, I didn't want to reproduce the entire thread. The basic summary is that Linus feels bugs should not be labeled as security threats; rather, that information should be kept private, even though the fix itself is public.

Because life is short. If you want security patches, find a vendor to help filter the bad bugs from the big bugs for you.

In the meantime, hackers see the fix, recognize the security risk, and build exploits long before your vendors even recognize it as a potential security risk (probably only after it is exploited in the wild will they understand it as such). Good luck with that.


Mozilla is a vendor. Very few people get a Firefox rebuilt by a third party. (And, in fact, Mozilla takes steps to make that more cumbersome to do legally than just redistributing the upstream build.)

Linux is relatively unusual among security-sensitive free-software products in neither being a vendor nor wanting to be a vendor.


I don't know what Linux does, but Mozilla puts out security advisories for every release.

https://www.mozilla.org/en-US/security/advisories/


That works if you want to take as cavalier an approach to security as the Linux kernel community does, but many communities very rightly do not.

Would you care to explain why you think their current approach is cavalier? I'm fascinated, given that half our secure infrastructure (or more) is running on Linux of some description.

It mostly falls out of Linus's position that security bugs are just normal bugs, and the implications about the investment put into fixing them in a timely manner, or preventing them from happening in the first place. A little more concretely, off the top of my head:

* There is not much interest in proactive security hardening of the kind done by grsec, PaX, etc. This is not to say that they should just pull grsec or PaX upstream wholesale, but that there's not much interest in solving the problems that lead to those things existing out-of-tree in the first place.

* There is no process for security review of new features. While things are casually reviewed, it's assumed that it's okay to go ahead and ship some new code, and new security holes can be fixed in the future just like functionality bugs can be fixed in the future.

* Stable point releases include everything from security fixes to added support for new hardware, and there is intentionally no distinction made between them. In many cases, there's an active attempt to obfuscate what fixes are security-related, on the grounds that they're all "normal" stable patches the way that security bugs are "normal bugs". This means that users (both people who use the stable kernels directly, and people who make distribution kernels) need to either diverge from the upstream stable point releases, or test thoroughly before shipping/deploying, which increases update latency significantly. Point releases cannot be used directly as urgent security updates.

* Not all security fixes are backported to stable point releases in the first place, because the development culture encourages thinking of them as "normal bugs", and not all normal bugs are obviously worth backporting fixes for (otherwise why have a separate branch). It's entirely possible to send a security patch to the kernel, not say it's security-related (because you're asked not to), and have it fail to go to stable. It's entirely possible for a patch to be quietly dropped because it's too much effort to backport -- again, the lack of difference from "normal bugs" means that there's no guarantee that security bug fixes will get backported. (A sketch of the tagging mechanism involved follows this list.)

* Aggressive commitment to an unstable kernel API, let alone ABI, means that stable point releases require rebuilding third-party modules, and may break them, and that there's absolutely no way for a good portion of users to even be on the latest stable point release (see the module sketch after this list). RHEL has its massive awful backport series, where the kernel in current, supported RHEL releases does not resemble either the original branch point or the current upstream very well, because their customers are paying them for reliably being able to upgrade to RHEL-released kernels.

* As a result of the last two bullet points, there's a lot of divergence between the upstream kernel and what people actually run, which is what Linus is getting at in these posts, when he says, correctly, that him identifying security bugs would be more helpful to attackers than legitimate users. But there's no interest in making that not be the case in the first place.
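
To make the backporting point concrete: the stable maintainers pick up fixes that are explicitly tagged for them, so whether a fix ever reaches a stable tree can hinge on one commit-message trailer. A sketch of what that looks like (the subject, hash, and names here are all made up):

    mm: fix off-by-one in example_lookup()

    (commit body describing the bug, which by convention
    does not mention that it is exploitable)

    Cc: stable@vger.kernel.org  # this line routes the fix to stable trees
    Fixes: 1234567890ab ("mm: add example_lookup()")
    Signed-off-by: A Developer <dev@example.com>

Leave off the Cc: stable line -- and nothing forces you to include it -- and the fix may simply never appear in a point release.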
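
And to make the rebuild problem concrete, here's a minimal out-of-tree module (purely illustrative). The built .ko embeds the exact version of the kernel it was compiled against ("vermagic"), so in general it has to be rebuilt, and possibly patched, for every point release:

    /* hello.c - minimal out-of-tree module (illustrative sketch) */
    #include <linux/module.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

    /* Built against one specific kernel's headers, e.g.:
     *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
     * with a one-line Kbuild file: obj-m += hello.o */

This is, as I understand it, exactly why RHEL maintains its kABI whitelists: they freeze a subset of the in-kernel interface so that customers' modules survive point releases.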

I have a bunch of more minor complaints (the number of security misfeatures in the kernel like /dev/random and Linux "capabilities", which indicate the lack of a healthy security development culture, and the aggressive / personality-based management style, which may be conducive to certain development goals but cannot be conducive to security).
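
On the /dev/random point, for anyone unfamiliar: the misfeature is that it blocks on a dubious entropy estimate long after the pool is seeded, which trained a generation of programs to read the wrong device. The getrandom(2) syscall (Linux 3.17+, glibc 2.25+) was added partly to clean this up; a minimal sketch:

    /* sketch: getrandom(2) instead of opening /dev/random */
    #include <sys/random.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char key[32];

        /* With flags == 0 this blocks only until the kernel CRNG is
         * initially seeded, and never again; reads from /dev/random
         * (historically) could block at any time. */
        ssize_t n = getrandom(key, sizeof key, 0);
        if (n != (ssize_t)sizeof key) {
            perror("getrandom");
            return 1;
        }
        printf("read %zd random bytes\n", n);
        return 0;
    }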

To be fair, this is a completely valid philosophy for a project to have. There are good reasons to decide that security is just not that important, compared to other things they could be doing. Most of this is being very intentional about letting security be a low priority. For instance, the kernel folks have very good reasons for not having a stable kernel API or ABI, because they don't want people using out-of-tree code, in pursuit of certain engineering goals. They understand there's a tradeoff between that and other engineering goals, like getting more users to run a current release, and they're fine with that.

(Whether it's a valid philosophy to choose to use such a project for secure systems is a different question, but this is just about their internal development philosophy.)

Firefox, on the other hand, is a project that very much wants people to be on the latest release all the time, and favoring security is one of the inputs to them in making that tradeoff. Firefox is a project where they can go from announcing that they've found a security bug to the patch being deployed on most users' machines within hours, and they've optimized for making that be the case. At that point they can handle security bugs differently from normal bugs, and expect that to be a net positive for users. They do have problems about stability of internal APIs (used by add-ons), but they're much more cautious about it, and there's no viable business in "You'll probably get security fixes in a timely fashion, but your add-ons will definitely still work". Firefox cares about architecting things to be secure by design, and they run their own enterprise release channel with the intention that even conservative deployments will use the binaries from that channel directly.


Thank you for the interesting response!

The question, then, given that (as you pointed out) the kernels most people run diverge from upstream for various practical reasons, and that getting upgrades out quickly onto systems that are hard to upgrade takes time, is why marking fixes as security-related would be a good idea, since it paints a target on the people who can't upgrade straight away.

It seems that the Linux team have taken a practical approach to solving the problem by not flagging stuff as security critical and allowing the vendors to handle the packaging.

I'm not saying that the way they do it is perfect, but given the unique 'fractured' way the kernel ends up everywhere, often in devices and systems that can't be upgraded, the current approach works.

It's certainly an interesting question.


I suspect those problems are not completely unsolvable. There were broadly similar problems with the WebKit version used by Android, since that was baked into the OS, and vendors are slow to update OSes. Then Google decided that there would be no more independent Android browser, and apps would just reuse the same rendering engine as Chrome -- which could be updated through the Play Store. (Google's also been slowly trying to solve other problems related to vendors not updating Android.)

But they require some design work. Conservatism with regards to new features is part of it -- while a "normal bug" in functionality only impacts a system where the functionality was intended to be used, a security bug impacts those where it's compiled in, even if it's unused. Making the kernel more friendly to third-party kernel drivers would allow some of these systems to be upgradeable: Chrome updates in the Play Store don't break other applications because there's a well-defined ABI to WebView, and that's never broken. Designing some functionality (like filesystems) to run in a more microkernel style would make it safer to upgrade targeted parts of the kernel codebase without risking hardware incompatibilities. And so forth.
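
For illustration, the usual shape of a WebView-style stable boundary is a versioned table of entry points that is only ever extended, never reshuffled. This is a purely hypothetical sketch (none of these names exist in the kernel), just to show the idea:

    /* hypothetical stable-ABI boundary for out-of-tree filesystems */
    #include <stdint.h>
    #include <stddef.h>

    #define FS_CORE_ABI_VERSION 1

    struct fs_ops {
        uint32_t abi_version;   /* version the module was built against */
        int (*mount)(const char *device);
        int (*read)(int handle, void *buf, size_t len);
        /* fields may only be appended; the core consults abi_version
         * to know which of them a given module actually provides */
    };

    int fs_register(const struct fs_ops *ops)
    {
        if (ops == NULL || ops->abi_version == 0 ||
            ops->abi_version > FS_CORE_ABI_VERSION)
            return -1;  /* module is newer than the core understands */
        /* older modules are fine: the core only touches the fields
         * that existed in the version they declare */
        return 0;
    }

Once that contract exists, the core behind it can be rewritten freely, which is exactly how Chrome updates stopped breaking apps that embed WebView.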

Again, it's an engineering tradeoff. Microkernels may be the most hackneyed example of "no, we don't want to do it that way". :)

