The website serves JS and does not serve it over HTTPS, and it discusses the Spectre bug and how to patch it. I know he is the second most important guy on Linux, but the irony.
And you probably know much better than the second most important guy on linux what security means on the Internet: securing some javascript script on a random blog.
It's not about that guy's blog. This causes desensitization to non-HTTPS traffic, and when people then actually visit a non-HTTPS malicious blog, they get infected. If all "trusted" websites were HTTPS, then whenever there was untrusted access, people would notice it and raise the alarm.
I get the irony, but if you could magically visualize the entire internet security threat matrix, this would fall so far down the list, and his other work is so high in terms of impact, that it would make absolutely no sense for him to take even one minute away from his other activities to address this.
Yes, hence my question. I have tried to set up an HTTPS service and it seems incredibly complicated if you have your own domain name. Even with Let's Encrypt, if you are using GitHub Pages for hosting but with your own custom domain, you are out of luck. The point is not that he is not serious about security; the point is that it is too hard to get normal security correct. And I am not talking about "cryptography is hard". I am talking about tools which should be easy and standardized. Those are hard.
Almost all Linux Mint users "piggy back" on top of the most recent Ubuntu Long Term Support (LTS) release. Currently the most recent Ubuntu LTS is 16.04 and by default it uses the 4.4 Linux kernel. There are patches for the 4.4 kernel from the kernel.org team, and the Ubuntu team are testing and integrating these patches. The most recent updates from the Ubuntu team about Meltdown and Spectre are available from this URI: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAn...
Right - for desktop use though, there are Firefox and Chrome updates with mitigation. JavaScript exploits were the most dangerous desktop scenario.
For servers running Ubuntu, what is the risk, as long as my services don't run arbitrary user uploaded executables? As far as I can tell it is that a different remote code execution exploit can now read the entire memory, possibly leaking secrets. Assuming we have a kernel update in the next few days, I would need to install it immediately and rotate passwords and keys. Should I revoke TLS certs? Is that paranoid?
I think it's naive to think you're completely protected just because code isn't supposed to ever run. It seems as though the simplest and safest route to peace of mind is to use some extra layers of protection, à la SELinux.
This won't stop the memory from being accessed, but it has a better chance of stopping things that can exploit the bug(s) in the first place.
Revoking TLS certs is probably a little bit on the side of paranoia.
I think you're on the right track -- just watch for the kernel update, and rotate passwords plus keys if it's not a hassle.
It’s naive to assume that your system is perfectly secure in preventing unauthorized code from running. But that’s just in general.
He is right, though. It would take two vulnerabilities to pwn him: one allowing remote code execution and then another (Spectre/Meltdown) to gain access to privileged data that shouldn’t be available in that context.
Too many machines wear too many hats. A single (physical) “secure” server should do as little as possible and run as small a codebase as possible. And never run - sandboxed or otherwise - code that isn’t authorized.
We seem to be forgetting that in all this. If you only run code you trust, you are safe. This can only happen if you run only trusted code on your machine. We’ve taken running untrustworthy code in a “sandboxed” environment to mean “not running untrusted code”, when it’s totally not the case.
Things are definitely getting lost in the panic here. It's going to take several weeks for everyone to get their head on straight, but yes, PTI is only going to be justified in certain situations, and if you don't allow untrusted code to run on your system (most servers), you will probably be fine with just your host CPU patched because no one will get the chance to run the exploit.
Of course, if an attacker uses a remote execution vulnerability to get into the box with user-constrained permissions, this can be used to read guest memory without concern for user limitation, so in that way it's a long-term pernicious threat that will make local exploitation significantly easier, and security-conscious organizations will still opt to use it despite the fact that they don't run untrusted code. Also, since this would allow them to read all memory in the guest, if you have sensitive stuff like database credentials coming into memory, they could be sniffed without requiring further exploitation.
Also, consider that at present, PTI is disabled by default for AMD chips, based on AMD's assurances that Meltdown does not affect them. If you're running in the cloud and your host is AMD-based, you don't need PTI in either the guest or the host.
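If you want to verify or control this on a given machine, the upstream patches expose a boot-time switch; a rough sketch below (parameter names as merged upstream, so check your distro's kernel documentation, since backported kernels may differ):

    # Disable KPTI explicitly, e.g. if your vendor confirms the CPU is not affected by Meltdown.
    # Add the parameter to the kernel command line in /etc/default/grub, then regenerate the config:
    #   GRUB_CMDLINE_LINUX="... pti=off"    # older backports may only understand "nopti"
    sudo update-grub    # on Debian/Ubuntu-style systems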
"Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible."
I always preferred Mint for desktop Linux. I knew they had a poor security record due to their HTTP downloads at one time, but no kernel security updates is news to me.
If you must use desktop Linux, I guess I'll be recommending Ubuntu from now on. In my testing Mint and Ubuntu both work pretty well out of the box on lots of different hardware configurations unlike other distros. But learning this really tips the scale, Ubuntu or bust.
A great advantage of Linux is allowing the end user a choice of distribution and desktop environment to fit their specific needs. Unfortunately, not all choices are good. I wish Mint would go away.
> However there are lots of systems out there that are not running “normal” Linux distributions for various reasons (rumor has it that it is way more than the “traditional” corporate distros). They rely on the LTS kernel updates, or the normal stable kernel updates, or they are in-house franken-kernels. For those people here’s the status of what is going on regarding all of this mess in the upstream kernels you can use.
Note that `dmesg` prints from a fixed-size ring buffer, so if your system has significant other output (e.g. if you create and destroy lots of devices or such), it's very possible the boot-time message about page table isolation will have fallen out of the ring buffer.
If you use journald, it by default saves the kernel's ring buffer to disk, so you can use it to check for that message:
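For example (journalctl flags as in current systemd; the exact wording of the message varies between kernel versions, so grep loosely):

    # persisted kernel ring buffer from the current boot:
    journalctl -k -b 0 | grep -i 'isolation'
    # or, if the message hasn't rotated out of the live buffer yet:
    dmesg | grep -i 'isolation'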
That command checks for the "bugs: cpu_insecure" entry in /proc/cpuinfo. However, that line only appears in some of the kernel versions. Recent kernels will have either "cpu_insecure" or "cpu_meltdown" (the name has been changed), while for instance the 3.10 kernel from CentOS 7, which has a backported version of these patches, doesn't even have the "bugs:" field.
And it's that 3.10 kernel which has all the workarounds (both for Spectre and Meltdown), while the more recent kernel has only what's been upstreamed, which so far is only the Meltdown workaround.
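Given those naming differences, a more forgiving check is to look at the whole "bugs" line rather than grepping for one exact token; a small sketch:

    # newer kernels list cpu_insecure or cpu_meltdown in the bugs field:
    grep -m1 '^bugs' /proc/cpuinfo
    # older backported kernels (e.g. the CentOS 7 3.10 kernel) may have no bugs field at all,
    # so the absence of this line proves nothing either way.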
Someone please correct me if I'm wrong, but both spectre and meltdown seem to me to be local root exploits, not remote vulns. They can be used to break out of (say) a VM into the host hypervisor, and thence into other VMs running on the same hardware, but cannot be used to break into a machine from outside its hardware perimeter. Is that right?
Essentially yes, the exploits require code running on the machine that’s attacked. However, for example JavaScript runs on the local machine and is a demonstrated attack vector.
It’s also, strictly speaking, not a privilege escalation; it’s “see things you’re not supposed to”, such as all sorts of secrets. The attacker does not gain any write or execution privileges, though.
JS as an attack vector can be effectively mitigated by denying access to high-precision timers (and a few features that can be used to construct high-precision timers). At least Chrome and Firefox are doing this, so just make sure that you are keeping your browser up to date.
If you feel paranoid, you might also want to disable JS by default and only enable by whitelist on any machines that hold particularly sensitive or valuable data.
Also: if you don’t have high precision timers, you probably could also go and increase the number of guesses to cancel out noise. After all, this is a statistical side channel attack. This will be fun for quite some time.
Doing that is harder than usual, because the time before rollback is very limited, which reduces the number of cache lines you can pull into the cache, and you can measure a given cache line only once.
The attack is already pretty damn slow (1kB/s) in C, if you have to collect a statistically significant sample in JS it might well slow down enough to not really work properly.
Yeah, it's possible to use to bypass same origin policy and read data across origins. Basic idea is that most browsers have multiple origins in the same process, meaning that the same speculative execution + optimistic caching can leak data to js from another origin in the same process.
Does that mean that we are soon going to be vulnerable to remote exploits via WASM as well? Do mitigation attempts such as reducing the JavaScript timing resolution also apply to WASM?
All that these mitigations did was disable all obvious precise timers. All you need for a new attack is a novel way to create a precise timer. And since it’s now known how valuable such a timer would be, I’m certain that minds more clever than mine will take a shot at this.
My understanding is that they require execution on the machine, and even systems like node are vulnerable (https://react-etc.net/page/javascript-spectre-meltdown-faq). The big issue is that this is really hard to detect because it blends in with what would be normal code patterns. I have read that AV will have trouble with this in heuristics but may pick up on specific strains of code that utilize it. Plus this is a whole new class of vulnerability that people have not thought about yet. It will be interesting: can I make your server do X to expose this (think Heartbleed)?
Yes, but arbitrary code execution exploits are some of the most commonly found. If your system interacts with the outside world, or is a multi user system, I would consider it prudent to protect against both Meltdown and other Spectre class exploits whenever possible.
1) the vulnerability is local, not directly tied to spreading as malware (but these days placing JavaScript in an ad is easier and possibly more effective than a virus...)
2) there is no such thing as "exposition carefully publicized so the flaw is not exploitable by malicious hackers". Just assume that black hats are as smart as white hats or smarter.
> 2) there is no such thing as "exposition carefully publicized so the flaw is not exploitable by malicious hackers". Just assume that black hats are as smart as white hats or smarter.
You're missing the point. Not doubting the smartness of black hats, but white hats likely took their time to discover the flaw. If you make the details public in a controlled manner and then announce the fixes shortly after, you essentially did not give black hats enough time to fill in the missing pieces of the public announcement.
For instance, in the extreme case, the statement "we have discovered a flaw at the hardware/CPU level in such and such chips, and we call it meltdown and spectre", it's pretty obvious the black hats would have no clue what it is. (They may have already discovered it on their own, and may have named the flaws something completely different. Even then they wouldn't know if white-hats discovered what they discovered.)
Unfortunately the fixes are in an open source project, so even if all they knew was that there was a CPU-level exploit and that specific code fixed it, black hats would still have a weaponised exploit before most people had patched their systems.
What I'd like to know is how effective are these OS updates (both Linux and Windows) without the associated firmware updates through microcode or BIOS/UEFI flashing. My system is a few years old and I don't expect the OEM to release BIOS/UEFI updates for this model.
Will the OS/microcode update still at least partially protect me or will I have to be super-paranoid about apps and javascript for the remainder of this machine's life?
BTW, my machine is just a normal desktop/laptop, so no server stuff running or expected.
I share your concern. Based on my admittedly loose understanding, although the Spectre vulnerability is "more difficult" to exploit, I suspect it's a matter of time before exploits become more common. And the Spectre vulnerability is the one that requires a BIOS/firmware update.
So that's a bit alarming for me since at least one of my several workstations is using a 5+ year old desktop board (an Intel board, ironically) that has been end-of-lifed according to its BIOS downloads support page. Meanwhile, some of the bigger server motherboard vendors (e.g., SuperMicro) are not yet prepared to supply firmware updates for recent motherboards.
Recognizing that the operating system vendors/maintainers are not responsible for our systems' BIOS and hardware, the flippant advice to "talk with your device's vendor" gives me a sinking feeling. (E.g., Microsoft: "Consult with the device manufacturer about the firmware version that has the appropriate update for your CPU.") As a technical person, I am going to find it difficult to bring all of my hardware—especially the older hardware—up to snuff for coping with these vulnerabilities. I fear for people who are less technically inclined.
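For auditing individual machines, there is also a community shell script that tries to report Meltdown/Spectre exposure and mitigation status for the running kernel and microcode; repository name and URL as of this writing, and treat its output as a hint rather than a guarantee:

    wget https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh
    sudo sh spectre-meltdown-checker.sh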
> So that's a bit alarming for me since at least one of my several workstations is using a 5+ year old desktop board (an Intel board, ironically) that has been end-of-lifed according to its BIOS downloads support page
Microcode updates can be applied on boot through the BIOS/EFI firmware, but also by the operating system. Linux (I don't know about Windows) has facilities to update the microcode on boot:
While the regular approach to getting this microcode update is via a BIOS update, Intel realizes that this can be an administrative hassle. The Linux operating system has a mechanism to update the microcode after booting. For example, this file will be used by the operating system mechanism if the file is placed in the /etc/firmware directory of the Linux system.
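In practice that mechanism is packaged by the distros; a rough sketch for Debian/Ubuntu (the usual package names, verify for your release):

    sudo apt install intel-microcode     # or amd64-microcode for AMD CPUs
    sudo update-initramfs -u             # the microcode is prepended to the initramfs and applied early at boot
    # some kernels also support a "late" reload of a newly installed microcode file at runtime:
    echo 1 | sudo tee /sys/devices/system/cpu/microcode/reload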
Only kinda effective. Retpoline on Skylake and later isn't completely reliable as a mitigation for one of the spectre variants, and IBRS is one of the processor features that will supposedly correct that.
You'll still be vulnerable, it'll just add another layer of difficulty to the exploit.
The microcode updates that people have been mentioning are not updates to your motherboard's firmware, EFI or otherwise. They are updates to the code that runs inside your central processor chip, the so-called microcode, that does the work of understanding and enacting processor instructions (in all programs, from the programs in your firmware to the programs that you download and run from the WWW).
Firmware updates are largely irrelevant to this issue, only being involved in the sense that one way to perform microcode updates is for your machine's firmware to upload the new microcode image file. But that is just one way for that to be done; your operating system can do it, too.
As I understand it, the microcode updates get installed by Linux distros and Windows via OS updates. The microcode is loaded into the CPU at every boot.
The question is, how can one check if the CPU already got the microcode update?
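The closest I've found, assuming a reasonably recent kernel, is to compare the revision the kernel reports against Intel's or your vendor's release notes:

    dmesg | grep -i microcode            # e.g. "microcode updated early to revision 0x..."
    grep -m1 microcode /proc/cpuinfo     # microcode revision currently reported by the kernel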
Can a OS running inside a VM (over Intel VT ring-1) patch the CPU's microcode? (e.g. Linux host, and Win guest patches CPU's microcode) Nothing seems impossible anymore.
AFAIK on Intel it's not possible to load a microcode update over the top of one already loaded by your firmware, so if the firmware shipped an older update, the firmware really needs to be updated. (Please someone correct me if this isn't the case -- I can't even remember where I read this)
Nope, the OS or hypervisor (but not a guest) can freely update whenever they like, as long as all the updates are to newer versions. For updates that modify visible CPU features, like these, it's helpful to apply them before boot. Linux has a mechanism to apply "early" updates from initramfs that effectively function before the kernel boots. I don't think other OSes can do that.
Windows, at least for right now, does not seem to be updating CPU microcode during boot. If you run the PowerShell Get-SpeculationControl script that gives you the status of the updates, it states that the CVE-2017-5715 fix requires hardware support. If you don't have it, you are told to get a BIOS update from your OEM. Without it, the status for CVE-2017-5715 is listed as not enabled due to missing hardware support.
Of all of my computers only one has been issued a BIOS update so far. I ran the PowerShell script before and after installing the BIOS. Installing the BIOS flipped the status of the fix for CVE-2017-5715 to on.
I also ran the free HwInfo utility that will show the microcode revision before and after installing the Windows update and before and after installing the BIOS. It only ever changed with the BIOS update.
I hope this changes and Windows does start updating CPU microcode. I also have a few older motherboards that aren't likely to ever see a BIOS update again.
This appears to be my experience too. I'm checking a few VMs and they are showing as missing hw support.
Do note that Microsoft appears to have updated their doc in the last day or so. They are now saying that three registry settings need to be set (instead of 2 previously).
VMware distributes a driver[1] one can install to patch the microcode on Windows at runtime. If these updates need to be recognized before OS boot, this may not mitigate the dangers. But in my test lab, HWiNFO64 shows the current microcode version.
The Intel[2] and AMD[3] microcode can be downloaded but I can't vouch for how many Meltdown/Spectre mitigations these versions contain. The Intel one is dated 2017-11-17.
Your distribution probably has a firmware package that has this bootloader stub. It applies the latest microcode updates just before your OS actually begins starting up (but for Linux after the Kernel has initialized things and is preparing to hand off to the initrd).
Yes, this is something that needs to happen at every boot; they don't get burned in to the CPU.
I downloaded the microcode updates from Intel, but hesitate to apply them because one of the expected directories does not exist on Manjaro, so I'm out of my element on how supported the update would be. Do you happen to know if in general microcode updates are distributed in packages? It seems that's probably the case, and if so, I'm fine waiting.
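From what I can tell, on Arch-based distros such as Manjaro they do come as a package; a sketch assuming the stock GRUB setup (package and file names per the Arch wiki, so I'd double-check on Manjaro before relying on it):

    sudo pacman -S intel-ucode                    # installs /boot/intel-ucode.img
    sudo grub-mkconfig -o /boot/grub/grub.cfg     # regenerate the config so the ucode image is loaded ahead of the initramfs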
This just reminded me: when will Ubuntu (and Debian?) fix apt’s broken kernel update process? I have never seen a kernel update - security or otherwise - installed via a normal “apt update; apt upgrade” on any of our machines, it’s always “the following updates have been held back” and then it’s time to manually use dpkg to install the relevant updates.
“apt full-upgrade” will upgrade everything. It may also remove packages to fix higher priority dependencies, though I’ve never seen this happen in real life.
Happened to me yesterday with my Proxmox host. It wanted to install firmware-linux-free, which conflicted with pve-firmware. It decided it wanted to remove pve-firmware which would also completely remove proxmox-ve... Had to upgrade with --no-recommends.
It's not supposed to install via "apt upgrade", as that command won't pull in new packages or remove packages. Unlike most packages, kernel packages can coexist, so each kernel version is its own separate package. Thus you need the extra "apt-get dist-upgrade" or "apt full-upgrade" variants, as the new kernels are new dependencies.
"apt upgrade" (unlike "apt-get upgrade") does pull in new packages, but it won't remove existing ones.
I don't see why "apt upgrade" wouldn't install the new kernel. It's hard to tell what went wrong without knowing any details.
Anyway, if apt decided to hold back a package for some reason, most of the time you should be able to install it with "apt(-get) install". Do not install packages with dpkg unless you know what you're doing.
You need to run dist-upgrade or some other variant that will install new packages to pull in kernel upgrades. The reason is that each new kernel is its own package, depended on by (in the case of Ubuntu, for example) the linux-generic package. "apt upgrade" won't install new packages; dist-upgrade will.
Other options include "apt upgrade --with-new-pkgs" or using unattended-upgrades to automatically install them.
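Putting that together, something like the following should pull in new kernel packages on Debian/Ubuntu (a sketch of the usual commands, not specific to any one release):

    sudo apt update
    sudo apt full-upgrade                 # or: sudo apt-get dist-upgrade
    # or keep "upgrade" semantics but allow new packages to be installed:
    sudo apt upgrade --with-new-pkgs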
apt's kernel update process is not broken, you've just somewhat borked your system either by removing the kernel meta package or messing with repository sources/priorities.
Make sure "linux-image-amd64" package is installed for Debian (and whatever equivalent package name for Ubuntu) and "apt upgrade" will install new kernel packages just fine.
(And this is actually not even needed for most kernel updates in Debian because most of the time the package name stays the same and the kernel is updated in place; the package name only changes if the ABI changes and DKMS modules need to be recompiled.)
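A quick way to confirm the metapackage situation before relying on plain "apt upgrade" (Debian naming shown; Ubuntu's usual equivalent is linux-generic):

    dpkg -l linux-image-amd64     # Debian metapackage that pulls in each new kernel ABI
    dpkg -l linux-generic         # Ubuntu equivalent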
If those 4 extra days of embargo would not have caused a total PR disaster on Intel's side, I will eat my socks. This wouldn't have become any better given another week of time.
The fact that Intel is rumored to have known about this since June lends credence to your belief. However, if that week would have let OS vendors have updates ready, and cloud providers already patched, then it would be less of a disaster. There's a certain amount of flailing about when details are released earlier than expected and vendors don't have answers or instructions prepared.
Reading between the lines would be figuring out who at or around Intel profits from this chaos. There is certainly someone, but since everybody is just focussed on the technical issues, whoever it is will probably throw a party this weekend.
No, one does not need to read in between the lines for that, either. One can simply read the lines. Far from everyone being "just focussed on the technical issues", the questions over the share dealings have made the news in the papers, from The Australian to The Irish Times.
Insider trading happens all the time. But usually you don't read about it in the news. So the fact that the Intel CEO's insider trading got him in trouble shows there's someone fighting against him. Thus, no, that's not the full story. I know it's hard to accept, but the official scapegoat is rarely the trouble maker. Or do you believe that the 2008 crisis was just the doing of that one banker who got jailed?
The 2008 crisis was an emergent systemic risk. (Similar to Spectre) But Meltdown is Intel's fuckup, and the insider trading is on Krzanich. And naturally anyone is happy to point that out, even if they sort of fucked up by implementing speculative execution and thus allowed Spectre on their platform.
Or it can be simply someone that noticed this unusual sale, and saw the circumstances ripe for a bit of naked shorting.
Right. Shorting is also an option. Haven't thought about it. It would also be insider trading, though, and it's not in the news that someone shorted Intel stock big time.
It's not the only operating system where the process is slightly more complex than a kernel update. Windows NT updates require the coöperation of other software on one's machine. (-:
I'm not pointing at Debian. I think it is even possible to run Jessie with a current kernel, (which is quite cool) but I haven't tried it. I just upgraded all machines to Stretch and called it a day.
I just did the apt-get update; apt-get upgrade dance in a hurry yesterday and thought I might be good. My post was more a reminder to everyone not be lulled into a false sense of security...
Well, you only need to cover yourself between now and whenever you can get your hands on an AMD chip. That cpu/board swap will be much more work anyway.
I use AWS instances (multi-tenant). I understand that by now AWS hypervisors have been patched.
Does that fully protect my unpatched AWS instances from these CPU-level issues?
If not, is there any way to protect my AWS instance from a rogue unpatched attacker instance running on the same hypervisor?
In other words, with the current CPUs deployed at AWS, will it be possible for an attacker to simply launch an unpatched instance to steal data from other instances (patched or not) running on the same AWS hypervisor?
I hope not, because the cloud multi-tenant model would be effectively dead until AWS hosts with unaffected CPUs are available.
In general, the multi-tenant security model does not rely on a malicious tenant being limited to any OS or patch (they can and always will be able to run whatever code they want including OS, and it's designed to still be secure). The cross-VM attacks we've seen actually go through the hypervisor, and since that is patched you should be fine if you're patched.
No. The "master" software fix to Meltdown keeps userland and the kernel from trivially sharing kernel memory pages. If your guest kernel doesn't have this fix applied, then your guest userland shares pages with your guest kernel, and guest processes can dump kernel memory.
In theory it should be possible for a hypervisor to retrofit a fix into a guest, but it's messy and I doubt anyone will ever do it. Could be fun, though.
Quick sketch: disable EPT and go back to shadow paging. Maintain a third page table with any kernel pages unmapped. Invisibly swap between them on syscalls.
We applied it to the bare metal DB servers for my.zerotier.com and there is definitely a detectable increase in I/O cost. Reducing syscall overhead is going to become even more important.
From my cursory reading I understand it is a cleverly orchestrated timing attack.
In other words, if something would need 500 picoseconds you have bit 1, if it is 250 picoseconds instead it is bit 0 (numbers pulled out of thin air).
This is made possible because processors execute the read speculatively even if it is actually forbidden. This read pulls the data into the cache. Of course the read is never brought into effect, because first it is forbidden and second it is in a branch which will never be executed. At the time of the speculative read the CPU seems not to have enough information to know not to execute that read. Even speculatively.
And that speculative read has a side effect on the cache which is measured by the exploit.
Of course the read is never made visible to the process because later the CPU knows not to bring it into effect. However, by then it is too late: there is a measurable difference in caching times.
Timing the L1 cache response for speculatively fetched data that one could not normally access is just one of the problems. It's the one that it is easiest to explain, and it's also the one that the few operating system writers who were told about this tackled first. Hence it is the one that is getting the focus in many discussions.
Yes, you understand it correctly. The bug is really just that cache fills caused by speculative execution do not get rolled back. This is technically a bit of a speed-up, but it leaves information behind about the speculative execution that took place.
It is also worth understanding the differences between Spectre and Meltdown, since they are distinct. Spectre refers to reading process memory by measuring cache timing after speculative execution. Meltdown, in addition, refers to the fact that Intel CPUs do not verify access rights to a virtual address until after the speculative execution (and thus the cache fill) takes place. Meltdown allows bypassing page permissions so you can read any page mapped in your virtual address space. Spectre is 'limited' to only reading addresses you already have access to.
No. That would be the ideal situation, but it is not possible to make the CPU do that just from a microcode update, it would require new hardware. So we're left with creating other ways to get around it via software. The perf hit comes from those software fixes.
Meltdown is the most expensive perf-wise, but more straightforward to fix. Processes can only access addresses mapped in their address space, so if you unmap the kernel while a user process is running, then it can't read it. This is expensive because every syscall now flushes the TLB due to changing the page table, so page accesses are in general slower.
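One mitigating detail worth knowing (not mentioned above, but documented in the upstream PTI work): on CPUs with PCID support, the 4.14+ PTI implementation tags TLB entries per address space instead of doing a full flush on every switch, which recovers a good part of the lost performance. A quick way to see whether your CPU has it:

    grep -o -w -E 'pcid|invpcid' /proc/cpuinfo | sort -u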
Spectre is more complicated to fix. One part of the fix is the retpoline hack that basically attempts to defeat the branch predictor through clever code. It looks like the CPUs are also getting microcode update to allow it to disable the branch predictor in some situations.
> As for how this was all handled by the companies involved, well this could be described as a textbook example of how NOT to interact with the Linux kernel community properly. The people and companies involved know what happened, and I’m sure it will all come out eventually
I'll eat my hat if it isn't some version of "here is a giant patch set we never worked through the chain nor asked about at all, please make it your problem to mainline it immediately", followed by, well, a good old-fashioned LKML "go fsck yourself".
FYI, Red Hat (and CentOS) already have updated kernels with mitigation.
And that made me ask how Red Hat (and the free derivative CentOS) could have already patched their current kernel (on Jan 4th) while the patch was still being discussed on the kernel ML that same day.
I don't see how they could come up with a new kernel with mitigation for the three variants, excellent KB articles on the CVEs, tunable flags, preliminary performance tests, etc. without having had patches implemented and tested for some time already.
Do they have internal kernel devs? And if so, does Red Hat have another implementation?
(This is my first comment on HN, I hope I'm following adequately the guidelines)
The kernel ML is public; embargoed/sensitive issues are typically disclosed on closed lists and discussed internally. Red Hat does have many devs who work on the kernel, as do other distros.
Typically, a sensitive issue will be disclosed privately, and devs for various groups will share progress privately. Then ideally, all distros will have patches and documentation ready on the agreed date of public disclosure.
The bug was disclosed to various organizations in early November, so work had been going on for some time. And yes, Red Hat makes significant contributions to the Linux kernel. Red Hat maintains their own version of the kernel, and integrating those changes back into the mainline version can take time to reach consensus.
There's such a thing as an 'embargo period', wherein CVEs are (in most cases) responsibly disclosed to various parties---including several Linux distributions---well ahead of the general public, to coordinate security errata.
So you can be assured that Red Hat coordinates with upstream. It also has a serious "Upstream First" policy (with sensible exceptions, of course).
As to whether Red Hat has kernel developers, of course it does. :-) Much of Red Hat's popularity began with its Linux distribution. If you're curious, LWN publishes statistics about who contributes to the Linux kernel, and the most recent one is for the 4.11 development cycle -- https://lwn.net/Articles/720336/
It's now called coordinated disclosure, for reasons.
And, correct me if I'm wrong, but apparently calling Linus Torvalds and Greg KH the general public (as far as Spectre and its (poor for now) mitigations are concerned) is the new norm. I'm not sure of what the community will think of that. And of the corporate actors involved in that curious choice.
Again, I am pretty sure that the details were disclosed to them. This however is very different from being able to commit patches on the day of the disclosure. For one, neither Linus nor Greg are actually writing the code.
The problem here was that discussion was confined within company barriers (with small occasional exceptions) until last Wednesday. Linux thrives because it is based on reaching consensus among all stakeholders, and not being able to discuss the approach to take with respect to Spectre directly resulted in chaos.
I am very happy with the work that we at Red Hat did on this project (though of course not perfect in any way!) and I am proud that our articles are published openly and are being widely shared among the Linux user community at large. However, I certainly would have preferred to work together with all other vendors on a common solution before. But unfortunately, even if (as is the case with Red Hat) upper management is extremely supportive of upstream collaboration, you do what you can.
I should have used the term 'coordinated disclosure', which probably reflects reality more. The links (with remarks from tptacek) pointed out by JdeBP (thanks) up-thread are a useful read.
I really dislike the term "responsible disclosure" in this context. I might consider it responsible to notify a software vendor about a vulnerability in their software so they have a chance to issue a fix to their customers before the rest of the world finds out. But this situation is different. These vulnerabilities affect everyone, and only a special few were allowed to prepare in advance. That's just preferential disclosure.
> These vulnerabilities affect everyone, and only a special few were allowed to prepare in advance. That's just preferential disclosure.
In this particular case, most of the fix had to be done in the operating system (the new microcode only enabled extra functionality needed by part of the operating system fixes), so it makes sense that operating system developers were allowed (and required) to prepare in advance. The three most relevant operating systems are Windows, OSX, and Linux; for Linux, one of the most important distributors is Red Hat. That gives two of the groups which were notified in advance: hardware (Intel, AMD, ARM) and operating systems (Microsoft, Apple, Red Hat, a few others).
It will be pretty unfortunate if it turns out that the projects that maintain a kernel (FreeBSD, various others) only received notification at Christmas, while various Linux distros (who have to deal with packaging, release, QA but not developing their own kernel patch since that comes from upstream) got a long warning period. It seems that way... Looking forward to reading about how this played out when the dust settles.
There is basically no independent upstream (other than a handful of people). Essentially all the kernel developers work for the companies that have distros plus organizations like Intel, Qualcomm, etc. that do a lot of device enablement.
I agree in general. In this particular case, though, a good fraction of the work was done by me and tglx. I'm independent. Tglx is sort of independent.
This is not to diminish the work done upstream by less independent people. Dave Hansen, in particular, is the one who actually got the code to function.
There was a post to the OpenBSD tech list saying that no BSDs were told anything. And a blog post from Canonical says that the patch will be available on January 9th (IIRC).
The lay public, aka everyone, isn't considered expert enough in this domain to address the issues. Hence, they disclosed to top-tier OS distributions and partners.
It's similar to the recalls of Takata airbags in this regard. The manufacturers are informed and they tell us, the consumers, to bring in our cars. I'm not qualified to replace an explosive airbag, even though I drive a car daily as a lay person.
There are other OSes and affected companies beyond the ones who were chosen to receive advance warning.
Imagine how it would work out in your airbag analogy if only Toyota and GM knew, and they withheld information for months while they worked on their own fixes, meanwhile the public remains in danger and other automakers have no chance to implement their own recall plans.
Can you mention which OS didn't get early enough warning? I'm sure there are some niche ones which may have missed out, but they are usually not the target of hacking either. And many of the derivative Linux distributions don't do active kernel development and just compile the stock kernel or derive from Red Hat, Ubuntu or others. The more people that are informed early, the more likely it is to leak out early.
Does it mean that a website with this kind of script can access all of my browser session? For instance, if I log in to my bank account, then I log out, and about 10 minutes later I visit an evil website. Could the evil website retrieve the URL of my bank account plus the credentials?
Also, is there a POC that can display all browser history?
As far as I can tell, "Linux" (and MacOS) are especially vulnerable because each process maps in the entire kernel. This isn't true for more modern and microkernel OSs (including Windows 10).
One thing I'll reiterate: as Greg mentioned, the backports to kernels prior to 4.14 are derived from a rather old KAISER version. They do not match what 4.14 and 4.15 do. This has several consequences.
1. They will have bugs. There's a reason PTI was heavily modified from the old KAISER code. They will also tend to diverge from upstream just because the code is so different. This means that the next time low-level x86 changes need to be backported, it'll be a huge mess.
2. There is only minimal upstream support. I, for example, am already largely ignoring two bugs in the backports that aren't in the upstream version. Why? Because I have no affiliation with a distro using an old kernel.
3. Contrary to its marketing, KAISER does not effectively mitigate the old kASLR leaks. PTI very nearly does, and I intend to improve it further once I find some time to do so. I doubt those improvements will get backported to pre-4.14 kernels.
4. At least some versions of "KAISER", on meltdown-affected hardware, expose the kernel stack to userspace. If that's not usable for rooting a box, I'll eat my hat. KPTI doesn't have this problem.
If you can put pressure on your organization or suppliers to update to 4.14 or better, please do so. Red Hat, especially, should seriously consider moving to 4.14 for RHEL 8.
It seems like a smallish consortium of poorly coordinated developers built the Spectre mitigation patches. Upstream wasn't involved until a couple days ago.
So if you're running a ARM64 chromebook, the answer appears to be: get fukt.
"For the 4.4 and 4.9 LTS kernels, odds are these patches will never get merged into them, due to the large number of prerequisite patches required. All of those prerequisite patches have been long merged and tested in the android-common kernels, so I think it is a better idea to just rely on those kernel branches instead of the LTS release for ARM systems at this point in time."
What bothers me is the lack of confirmation on the microcode updates (other than 'soon') – apparently they're out there[1], but Intel's own package has yet to be updated[2]. The 20171215 update carried by some distros only seems to cover HSX/BDX/SKX, so does that mean regular HSW/BDW/SKL users are screwed, are they covered by the 20171117 update, or are the new microcodes simply not ready for all models yet?
Of course, the microcode updates are meaningless without the corresponding kernel patches, but apparently only RHEL and SLES were deemed worthy of receiving those ahead of time, having already rolled them out while Linus and co are left scrambling to integrate the IBRS and retpoline code dumps after the fact.
Just to add to that, it looks like some manufacturers have already pushed out the new microcode alongside the recent ME fixes.
I flashed my Skylake laptop a few days ago, and now I'm on microcode rev 0xc2 (versus 0xba in the latest Intel tarball). CPUID output suggests IBRS support (according to [1]).
I've been thinking about some of this. A possible design to prevent some of these issues would be to read all protected memory, in speculative execution paths, as a dummy all-zero value, and not put anything into the cache. I.e. no actual access takes place to any protected area and no breadcrumbs are left in any CPU storage such as a cache. (Then, if that path is actually taken, generate the exception.)
The whole problem is that the kernel/user protection is just a sham that is not being taken seriously by the CPU implementation. It allows access to take place without a proper mode change. Basically the whole fetch-decode-execute machinery (all of its pipelines, caches, registers and other areas) should be regarded as untrusted and prevented from accessing or storing anything unauthorized according to the current security state of the CPU.
The kernel/user enforcement by the CPU is a sham (kinda) in Intel CPUs when the data you want to access is in L1 (people are guessing). This is only for the meltdown issue.
Spectre doesn't care about kernel/user at all. So even other CPUs are susceptible. Every CPU that speculates is vulnerable in theory.