> We believe that the attacker used information from Bugzilla to exploit the vulnerability we patched on August 6...The version of Firefox released on August 27 fixed all of the vulnerabilities that the attacker learned about and could have used to harm Firefox users.
I find this an interesting attack vector for a capital-O Open Source project. In the spirit of openness, how does a community correctly label bugs versus vulnerabilities?
Include full quotes of relevant information - because context is important.
The very next sentence following your quote:
> More importantly, when we fix them, your vendor probably
> won't have the fix for at least another week or two in most cases anyway.
> ... ...
> We'd basically be announcing a bug that (a) may not be relevant to you,
> but (b) _if_ it is relevant to you, you almost certainly won't actually
> have fixed packages until a week or two later available to you!
Well, I didn't want to reproduce the entire thread. The basic summary is that Linus feels bugs should not be publicly labeled as security threats; rather, that information should be kept private, even though the fix itself is public.
Because life is short. If you want security patches, find a vendor to help filter the bad bugs from the big bugs for you.
In the meantime, hackers see the fix, recognize the security risk, and build exploits long before your vendors even recognize it as a potential security risk (probably only after it is exploited in the wild would they understand it as such). Good luck with that.
Mozilla is a vendor. Very few people get a Firefox rebuilt by a third party. (And, in fact, Mozilla takes steps to make that more cumbersome to do legally than just redistributing the upstream build.)
Linux is relatively unusual among security-sensitive free-software products in neither being a vendor nor wanting to be a vendor.
Would you care to explain why you think their current approach is cavalier? I'm genuinely curious, since half our secure infrastructure (or more) is running on Linux of some description.
It mostly falls out of Linus's position that security bugs are just normal bugs, and the implications that has for the investment put into fixing them in a timely manner, or into preventing them from happening in the first place. A little more concretely, off the top of my head:
* There is not much interest in proactive security hardening of the sort done by grsec, PaX, etc. This is not to say that they should just pull grsec or PaX upstream wholesale, but that there's not much interest in solving the problems that lead to those things existing out-of-tree in the first place.
* There is no process for security review of new features. While things are casually reviewed, it's assumed that it's okay to go ahead and ship some new code, and new security holes can be fixed in the future just like functionality bugs can be fixed in the future.
* Stable point releases include everything from security fixes to added support for new hardware, and there is intentionally no distinction made between them. In many cases, there's an active attempt to obfuscate what fixes are security-related, on the grounds that they're all "normal" stable patches the way that security bugs are "normal bugs". This means that users (both people who use the stable kernels directly, and people who make distribution kernels) need to either diverge from the upstream stable point releases, or test thoroughly before shipping/deploying, which increases update latency significantly. Point releases cannot be used directly as urgent security updates (see the sketch just after this list).
* Not all security fixes are backported to stable point releases in the first place, because the development culture encourages thinking of them as "normal bugs", and not all normal bugs are obviously worth backporting fixes for (otherwise why have a separate branch). It's entirely possible to send a security patch to the kernel, not say it's security-related (because you're asked not to), and have it fail to go to stable. It's entirely possible for a patch to be quietly dropped because it's too much effort to backport -- again, the lack of difference from "normal bugs" means that there's no guarantee that security bug fixes will get backported.
* Aggressive commitment to an unstable kernel API, let alone ABI, means that stable point releases require rebuilding third-party modules, and may break them, and that there's absolutely no way for a good portion of users to even be on the latest stable point release. RHEL has its massive awful backport series, where the kernel in current, supported RHEL releases does not resemble either the original branch point or the current upstream very well, because their customers are paying them precisely so that they can reliably upgrade to RHEL-released kernels.
* As a result of the last two bullet points, there's a lot of divergence between the upstream kernel and what people actually run, which is what Linus is getting at in these posts, when he says, correctly, that him identifying security bugs would be more helpful to attackers than legitimate users. But there's no interest in making that not be the case in the first place.
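To make the stable-release point concrete, here's a rough sketch of the best a downstream user can do to guess which patches in a point release are security fixes: grep the commit messages. This is purely an illustration, not anyone's real workflow; it assumes a local clone of the linux-stable tree, and the tag names are placeholders.

```python
# Hypothetical sketch: try to spot security fixes in a kernel stable point
# release by searching commit messages for CVE IDs.
# Assumes a local clone of the linux-stable tree; tag names are placeholders.
import subprocess

def commits(rev_range, *extra_args):
    """Return the --oneline commit list for rev_range, optionally filtered."""
    out = subprocess.run(
        ["git", "log", "--oneline", *extra_args, rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line.strip()]

rev_range = "v6.1.54..v6.1.55"   # placeholder point-release range
all_patches = commits(rev_range)
cve_patches = commits(rev_range, "-i", "--grep=CVE")

print(f"{len(all_patches)} patches in {rev_range}, "
      f"{len(cve_patches)} of them mention a CVE")
```

If the upstream policy described above holds, the second number will usually be near zero, which is exactly the point: the distinction isn't recorded anywhere a downstream user can query, so separating security fixes from the rest means reviewing every patch by hand.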
I have a bunch of more minor complaints (the number of security misfeatures in the kernel like /dev/random and Linux "capabilities", which indicate the lack of a healthy security development culture, and the aggressive / personality-based management style, which may be conducive to certain development goals but cannot be conducive to security).
To be fair, this is a completely valid philosophy for a project to have. There are good reasons to decide that security is just not that important, compared to other things they could be doing. Most of this is being very intentional about letting security be a low priority. For instance, the kernel folks have very good reasons for not having a stable kernel API or ABI, because they don't want people using out-of-tree code, in pursuit of certain engineering goals. They understand there's a tradeoff between that and other engineering goals, like getting more users to run a current release, and they're fine with that.
(Whether it's a valid philosophy to choose to use such a project for secure systems is a different question, but this is just about their internal development philosophy.)
Firefox, on the other hand, is a project that very much wants people to be on the latest release all the time, and favoring security is one of the inputs to them in making that tradeoff. Firefox is a project where they can go from announcing that they've found a security bug to the patch being deployed on most users' machines within hours, and they've optimized for making that be the case. At that point they can handle security bugs differently from normal bugs, and expect that to be a net positive for users. They do have problems with the stability of internal APIs (the ones used by add-ons), but they're much more cautious about it, and there's no viable business in "You'll probably get security fixes in a timely fashion, but your add-ons will definitely still work". Firefox cares about architecting things to be secure by design, and they run their own enterprise release channel with the intention that even conservative deployments will use the binaries from that channel directly.
The question, then, given that (as you pointed out) the kernels most people run diverge from upstream for various practical reasons, and that getting upgrades out quickly onto systems that are not easy to upgrade takes time, is why marking something as a security fix is a good idea, since it paints a target on the people who can't upgrade straight away.
It seems that the Linux team have taken a practical approach to solving the problem by not flagging stuff as security critical and allowing the vendors to handle the packaging.
I'm not saying that the way they do it is perfect, but given the uniquely 'fractured' way the kernel ends up all over the place, often in devices and systems that can't be upgraded, the current approach works.
I suspect those problems are not completely unsolvable. There were broadly similar problems with the WebKit version used by Android, since that was baked into the OS, and vendors are slow to update OSes. Then Google decided that there would be no more independent Android browser, and apps would just reuse the same rendering engine as Chrome -- which could be updated through the Play Store. (Google's also been slowly trying to solve other problems related to vendors not updating Android.)
But they require some design work. Conservatism with regards to new features is part of it -- while a "normal bug" in functionality only impacts a system where the functionality was intended to be used, a security bug impacts those where it's compiled in, even if it's unused. Making the kernel more friendly to third-party kernel drivers would allow some of these systems to be upgradeable: Chrome updates in the Play Store don't break other applications because there's a well-defined ABI to WebView, and that's never broken. Designing some functionality (like filesystems) to run in a more microkernel style would make it safer to upgrade targeted parts of the kernel codebase without risking hardware incompatibilities. And so forth.
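On the filesystem point, the kernel does already have one mechanism in roughly this spirit: FUSE, which lets a filesystem run as an ordinary user-space process behind a stable kernel interface, so the filesystem code can be upgraded or replaced without touching the kernel. Here is a minimal sketch, assuming the third-party fusepy package; the filesystem itself is made up for illustration.

```python
# Minimal read-only filesystem served from user space via FUSE.
# Assumes the third-party fusepy package: pip install fusepy
import errno
import stat
import sys
import time

from fuse import FUSE, FuseOSError, Operations

CONTENT = b"served from an ordinary user-space process\n"

class HelloFS(Operations):
    """A single read-only file, /hello.txt, exposed as a filesystem."""

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2,
                        st_atime=now, st_mtime=now, st_ctime=now)
        if path == "/hello.txt":
            return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                        st_size=len(CONTENT),
                        st_atime=now, st_mtime=now, st_ctime=now)
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return CONTENT[offset:offset + size]

if __name__ == "__main__":
    # Usage: python hellofs.py /some/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)
```

The kernel-side FUSE protocol stays stable while the filesystem logic lives in a process you can restart or update like any other program, which is the property the paragraph above is asking for more broadly.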
Again, it's an engineering tradeoff. Microkernels may be the most hackneyed example of "no, we don't want to do it that way". :)
Mozilla keeps critical security bugs closed to a small-ish group of developers and testers until they are patched. Soon after a patched version of Firefox has been released, the bugs are opened to the public. This has been our policy for about 15 years.
Linus's law applies (if it's even true, which there's insufficient empirical evidence for) to code, not bug trackers. The kernel's reporting process for security bugs involves sending an email to security@kernel.org, a closed list with secret membership. It may also involve sending an email to the linux-distros@openwall mailing list, which is also closed but has public membership (and is run by a third party). If either of these groups has a bug tracker at all, they haven't told anyone else.
And unlike Mozilla's policy, at no point is the history of discussions on either security@kernel.org or linux-distros@openwall ever opened up to the public. Fixes can, and are often intended to, show up in the kernel changelog without any mention of what security issue they fixed.
My, that sounded like a very boring title. I suppose saying something positive like "Improving security for bugzilla" is a better way to present a story than "someone has managed to attack Firefox users since August 6".
So this is about how someone hacked Bugzilla, extracted sensitive information about a vulnerability in Firefox's PDF engine, wrote a weaponized exploit for it, deployed it as malvertising and stole some of the most sensitive files you can get out of a user's computer...
...and then the FAQ document linked is a PDF file with no styling beyond something that could be done with HTML <h1>, <p> and <a>, hosted on a domain "ffp4g1ylyit3jdyti1hqcvtb-wpengine.netdna-ssl.com".
How valuable is a Firefox vulnerability? On one hand, it seems very high: It gives you access to hundreds of millions of computers. On the other, maybe the supply of vulnerabilities that provide such access is high enough, unfortunately, that they aren't worth much.
I ask because, if they are extremely valuable, I wonder if Bugzilla can be adequately secured against the attackers it would attract. Perhaps it would be best to store this information elsewhere until they are ready to make it public.