Intel Publishes Microcode Patches, No Benchmarking or Comparison Allowed (perens.com)
1318 points by jeswin | 2018-08-23 03:05:16+00:00 | 499 comments




I'm really curious how Intel could even imagine this is enforceable. For instance, if I have a server with shell access for many users, am I supposed to forbid my users from publishing benchmarks? If they do, am I liable since I "agreed" to the license? Or are they, even though they never "agreed" to the license? It just doesn't make sense.

Surely it's just designed to scare the big media websites from publishing numbers.

This is censorship, plain and simple. They cannot command everyone to abridge their speech, even if their license terms say so. It is unenforceable and illegal.

It's also crazy to my mind. The reality is these various issues are all having potentially significant impacts on performance -- and it's not just the microcode related changes, but also the kernel changes, etc. It is kind of ridiculous to require everyone to do their own testing and not allow basic numbers to be published.

I also find it fascinating in that, in theory, your BIOS update can include these changes... does this anti-benchmark license apply if you reflash a new BIOS, or just buy a new motherboard with the new BIOS, and then use the same type of CPU to compare?

Makes me want to look at some BIOS and motherboard EULAs now...


This is a very good point. What about replacing a motherboard (but not CPU) due to motherboard failure or just to get some extra feature or other? If the new BIOS includes the microcode updates are you legally forbidden from benchmarking your old CPU?

Very good point. I'd like to add that this is not just kind of ridiculous, it is just plain ridiculous. Also, we have the right to publish any benchmark numbers, not just basic numbers. This isn't a time for weak wording.

I mean, they can probably say whatever they want, but this is not a legal thing - all they can do is refuse to sell you more hardware for violating their contract.

NDAs are legal, right? This is like a permanent NDA. I wonder if people can refuse to download the microcode because of the license, and then sue Intel for the vulnerability. I'm thinking probably not.

Might want to get your next chips from AMD.


Given that the dissenting justice for "TransAm Trucking v. Administrative Review Board" is now on the supreme court, I'm thinking contracts will indeed supersede protections (unless every possible permutation is explicitly enumerated in legislation as if forever battling the "literal genie").

They also basically sold a defective part, and to fix it you have to agree to something additional post-sale. That is pretty damn questionable.

It wouldn't work. Even online media gets pretty strong first amendment protections that mean Intel wouldn't have a complete open and shut case, and we tech journalists are smart enough to be able to get the same microcode updates through other channels that don't have the same strings attached. If it's meant to deter anybody, it's the big corporate customers and competitors.

Unfortunately, the First Amendment does not protect against private action, only government restraints on speech. Other mechanisms like anti-SLAPP laws might help with that, but either way that's a lot of legal effort to publish some benchmarks. Intel also operates all over the world, so they could eg. sue a Britain-based branch of some media outlet that also publishes the numbers if the laws there are more in their favor.

The First Amendment itself doesn't directly apply to Intel, but there are a lot of relevant legal protections pertaining to the general idea of protecting the press. In particular, this seems to be fundamentally a copyright license that's at issue. News reporting, scholarship and research are all explicitly listed as purposes that can qualify as fair use. Using copyrighted material to research and report on the nature of defective goods being sold to the public strikes me as pretty likely to be ruled as fair use, especially when using the copyrighted microcode in a manner contrary to Intel's license is necessary to properly fact-check a news story about their processor flaws.

Online media often relies on hardware directly from the manufacturer when they get to test it before official release.

Nothing stops Intel from not sending them anything anymore, and then they have to buy it from the stores like everybody else.


I'm well aware of the theoretical possibility of Intel blacklisting publications. In practice, it only works against smaller publications and would backfire spectacularly if they tried it against the larger publications. Intel has more to lose than any one tech publication.

> Intel has more to lose than any one tech publication.

I agree completely, but I wouldn't be surprised if they wave threats of lawsuits around. And even though the media should be protected, it might still be relatively expensive for them.


The license can't restrict third parties from sharing benchmarks, which is why it puts the onus on the user not to allow third parties to share them. If some news site was to publish benchmarks without disclosing the source, Intel would first have to take them to court to force them to disclose who provided said benchmarks. That's as far as I can see it directly impacting sites that don't run their own benchmarks. That said, the sites that do run their own benchmarks would be on the hook. Sadly, even if this is unenforceable, the potential legal battle to have it declared so would be scary enough to quash some criticism.

This really makes me doubt if I should buy Intel products in the future (to the extent that I have a choice). If I can't get performance information because Intel has something to hide, I'll have to look elsewhere. Really, this is sufficiently distasteful behavior to make me avoid Intel even if the products work just fine.


> Sadly, even if this is unenforceable, the potential legal battle to have it declared so would be scary enough to quash some criticism.

I have seen this happen in so many cases all over the world. Supreme Court orders can be ignored by government agencies and even private parties as long as no one drags them to court. Spending 100 million dollars is nothing for the government or for influential private parties as big as Intel. The small guy, however, will be bankrupted.

Be it the government or a big corporation, it is effectively public money being used against the public. How absurd.

This shows that money is necessary for justice. This is dangerous.

Why can we not have systems that detect such frauds and automatically discipline such entities? It is not like such violations are happening behind closed doors of a small house in an inaccessible jungle. These violations are public.


This becomes doubly interesting once those users are in a different jurisdiction. I highly doubt a EU court would be willing to uphold such a bullshit term for a US company. Especially since many EU countries have quite strong laws against unexpected/unreasonable licensing terms.

I'd rather assume that it is in place to keep kernel and distro devs at bay.

Or you buy a CPU that was previously patched, even by Intel themselves.

Or you outright ignore the stipulation and do whatever the hell you want. Quite frankly I hope to see some benchmark articles that literally start with profanity telling Intel where to stick it.

This, definitely. This whole thing is bullshit. What are they going to do, sue the hell out of absolutely everyone who publishes any sort of benchmark about their hardware? What do they want to become, a law firm like Oracle?

The CPU isn't patched, not permanently. This update must be loaded after every processor reset.
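
Because of that, the only way to know which microcode you're actually running right now is to check at runtime. A minimal sketch, assuming Linux on x86, where /proc/cpuinfo exposes a per-core "microcode" field (nothing Intel-specific is assumed beyond that):

    # check_microcode.py - print the microcode revision(s) currently loaded
    def loaded_microcode_revisions(path="/proc/cpuinfo"):
        revisions = set()
        with open(path) as f:
            for line in f:
                # Each logical CPU reports a line like "microcode : 0xb4"
                if line.startswith("microcode"):
                    revisions.add(line.split(":", 1)[1].strip())
        return revisions

    if __name__ == "__main__":
        print("Loaded microcode revision(s):", loaded_microcode_revisions())

Run it before and after loading the update (or after a reboot without it) to confirm whether the new revision is actually in effect.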

Well the license specifically says you will not permit a third party to either.

I don't really speak legalese, but does "permit" include having to then make all of your own users agree not to, in order to avoid a penalty?

As mentioned in the other comment thread though, I imagine the reality of this clause is to prevent media outlets (such as Phoronix who would traditionally do exactly this kind of benchmarking) from downloading the microcode and publishing numbers directly.


> Well the license specifically says you will not permit a third party to either.

That might as well read as "you can't provide cloud computing" since you can't know what someone is going to execute on their server before they execute it!


And nobody releases production code to a cloud environment without performance testing of significant changes.

You could say they don't have to publish the results... although that only defers the problem. If I build my SaaS on top of a cloud service that runs on Intel, and my customer complains about perf results of my software before and after, and makes the result available somehow (knowing that it's on updated microcode or not), who is responsible? The cloud provider, myself, or my customer?

Etc etc. This is a legal mess and a strong attack on freedom.


Not only that but standard development processes involve testing code against different environments.

Comparing arbitrary code execution times on this platform versus another could easily be called 'benchmarking'.

It's just so strange to sell a general purpose processor but then prohibit certain types of code depending on the state of mind of the developer. The same block of code could be permitted or prohibited based on the intent. It's nuts and a legal morass that is hard to imagine any lawyer proposing as a good idea.
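
To illustrate how low that bar is, here's a minimal, hypothetical sketch of the kind of timing comparison any test harness might already contain; run it before and after the microcode update and, in the license's own terms, you have arguably produced a "benchmark". The workload() here is just a stand-in for whatever code you happen to be testing:

    import time

    def workload():
        # Stand-in for any ordinary piece of application code under test.
        return sum(i * i for i in range(1_000_000))

    def best_of(fn, runs=5):
        # Best wall-clock time over several runs, as any test harness might report.
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)

    if __name__ == "__main__":
        print(f"best of 5 runs: {best_of(workload):.4f} s")

Nothing about that is exotic; the only thing that turns it into a prohibited act is attaching a "before/after microcode" label to the numbers and sharing them.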


> I don't really speak legalese, but does "permit" include having to then make all of your own users agree not to, in order to avoid a penalty?

As a service provider, you will need to inform your existing users about this restriction and put the restriction in your user agreement for new users. After that you can relax, if any of your users publish benchmarks, you'll have to warn the user and then take the benchmarks out. You don't have to actively search for violations, but if you notice one on your own, or you get notified (for example via email), you'll need to take it down.

If we're not allowed to share the results of benchmarks and comparisons, the only action that comes to my mind is:

1) Never buy Intel again, if presented with a viable choice!

2) Prepare and share ready-made benchmarking live USBs/utilities, so people can see the horrors Intel has caused them without violating the license (a rough sketch of such a utility follows below this list).

3) Dump benchmarking results online from countries where the Delaware courts mentioned in the article have no jurisdiction.

4) Get every version of this microcode license prepared for different countries, challenge the license in each of them, and have Intel struggle with it.
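
As a rough, hypothetical sketch of the kind of utility point 2 has in mind (the workload choices are purely illustrative): time one compute-bound and one syscall-heavy task, the latter being the kind of load the mitigations are generally reported to hurt most, and only print the results locally, so each user generates their own numbers instead of anyone publishing them:

    import os
    import time

    def compute_bound(n=2_000_000):
        # Mostly user-space arithmetic; largely unaffected by the mitigations.
        return sum(i * i for i in range(n))

    def syscall_bound(n=20_000):
        # Kernel-crossing-heavy work, the kind of load reportedly hit hardest.
        for _ in range(n):
            os.stat(".")

    def timed(fn):
        start = time.perf_counter()
        fn()
        return time.perf_counter() - start

    if __name__ == "__main__":
        # Results stay on the local terminal; nothing is uploaded or published.
        print(f"compute-bound workload: {timed(compute_bound):.3f} s")
        print(f"syscall-bound workload: {timed(syscall_bound):.3f} s")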


5) Publish the benchmarks, dare Intel to sue. Make fun of them for publicity. May be a reasonable and cheap way to get some publicity for a startup. Obviously requires a deeper assessment by anyone looking into it...

NDAs are stupid for the same reason. Your rich uncle who never signed it can go use the idea, while you remain poor, let's say. Who are they gonna go after?

Has anything even remotely similar to this ever been enforced anywhere? (Apart from the US)?

Once I have the application (or in this case, the microcode) it would seem the data I produce with it is mine to do as I please with? Otherwise it would be like microsoft saying that I couldn't publish any .docx files online that I produced with their software?


I would guess this is designed to target tech news orgs who report benchmarks. Can't prevent every forum post; media is a bigger target. However, someone's going to go to a benchmarks database and look at new results compared with old ones and figure it out.

It would seem that paired with https://news.ycombinator.com/item?id=17820248 that Intel is reeling back like a wounded animal. I'm intrigued as to what comes next - real innovation or dirty tactics to stay on top?

Why not both?!

Worked wonders for Nvidia.

They better be wary of the Streisand Effect here. Banning benchmarking just makes it seem like the performance hit is going to be serious, which makes everyone even more curious.

I didn't know anything about this... now I'm really curious what the benchmarks are.

Now that it is news they're pretty much begging for someone to do the benchmarks.

This reminds me of the oracle license that was so broad it prevented users from talking to other users about their experience with the product... any experience.


And I don't think Intel has any way to prevent some blog in China or Russia from running some standardised benchmark tools on their box. Intel won't even succeed in preventing these benchmarks from popping up; it will just draw attention to them.

Perhaps they're not complete imbeciles and this is a marketing stunt, turns out the benchmarks are fantastic??

I would very much enjoy that.

Could be a "reverse" Streisand Effect.

Any slowdown would have been pretty public anyway; the microcode update is still a big deal, and since expectations weren't clear, I'm pretty sure the results were worse than the average expectation. Now though, expectations are much higher: people expect the hit to be pretty big, and maybe the results will actually come in lower than those expectations.

So yeah maybe more publicity but maybe now people will say "not as bad as I thought", instead of saying "much worse than I thought".


That's brilliant, but probably a little too clever to be the reality.

Aside from my vic20 and c64 I’ve only ever owned intel CPUs, and those two may have been intel as well, I wouldn’t know. I’ve never made a decision to chose Intel based on benchmark, I’ve bought them because they’ve always been great for me.

So it’ll be ironic when I buy an AMD processor when I upgrade for cyberpunk 2077, because of benchmarks. Not because AMD is faster, they may be but I wouldn’t know, no, it’ll be because intel are douches.

I didn’t like how they handled their vulnerabilities, or how they still released chips with the errors long after it was discovered because they had production planned, and now they are pulling stuff like this?

Heh.


> Aside from my vic20 and c64 I’ve only ever owned intel CPUs, and those two may have been intel as well, I wouldn’t know.

At that time, few home computers were based on Intel processors because they were too expensive. Commodore's early computers, like Atari's, were based on the MOS Technology 6502 (made by ex-Motorola engineers as a cheaper and easier-to-integrate alternative to the Motorola 6800; the corresponding Intel-killer was the Zilog Z80 notably used in various CP/M and MSX computers). Commodore eventually acquired the company, rebranding it Commodore Semiconductor Group. A CMOS version of the 6502 is still sold by the Western Design Center [1].

[1] http://www.westerndesigncenter.com/wdc/w65c02s-chip.cfm


One click-chain of curiosity later, and I learned that they sell a little educational SBC based on the 6502.

http://wdc65xx.com/boards/w65c02sxb-engineering-development-...

Pro: it looks like an AT motherboard from 1985

Con: it's $189 :s


They do have some cheaper ones ($29 & $99); problem is I can't quite figure out why I need one.

https://www.tindie.com/stores/dcwdc/


TIL.

I understand what you mean though. Architecture itself isn't _that_ critical; peripherals and batteries-included capabilities are.

In this case, when I read "for developing educational and industrial strength microcontrollers." in the product description I immediately concluded that the "educational" is quite possibly part of a product pitch to companies making 6502-based control equipment, along of course with educational institutions still using the 6502 as a teaching aid.

As for devkits I do want, the Forth GA144 is definitely up there near the top, and the KISS-68030 (https://www.retrobrewcomputers.org/doku.php?id=boards:ecb:ki...) is floating around in there as a very unsure "maybe".


Zilog was of course famously founded by ex-Intel people to provide a cheaper alternative to the 8080.

The UK's Sinclair ZX Spectrum and Amstrad CPC were famously Z80-based, as well as the MSX spec (as you've mentioned) which was huge in Japan. It was also used in a lot of arcade machines either as a main CPU or as a sound chip.


The Sinclair ZX80 (predecessor of the Spectrum) was Z80 based. I know, because it was my very first computer.

Not really ideal for word processing and somewhat limited with 1KB (yes as in one kilobyte) of RAM.

You could get a rather expensive 16KB memory extension. Alas, this had the problem of disconnecting every 30 minutes or so, due to thermal issues. So you had to be really sure to save your work to datasette (don't ask!) at very regular intervals.

Fun times!


Of course, this license is a virtual benchmark: Intel spares you the work of running tests with their implicit claim that their performance is worse than anyone's lowest expectations.

How can Intel tell people what to do with their CPU? I own it, so I think I can do with it what I want, no? Only because the microcode update is considered software? I wouldn't be scared to benchmark it and publish the results in the EU. I wonder if this is binding in the US.

It's kind of funny because people don't even know what the microcode does. They're demanding that we don't benchmark any software running on their processors. Well, they insisted performance was important for decades but now they don't want people to measure it. What that means should be obvious. They suck.

I've been on both Intel and AMD for my self-built desktops, originally a K6-II, then a Pentium III, then a Pentium 4, and since 2011 a Phenom II (first a dual-core 3.2GHz, upgraded to a 6-core 3.3GHz a couple of years ago).

Both have generally been good to me (the P4 was a bit crap), but I'm very likely to choose AMD again next time I do a full upgrade. Now that their GPUs are also working great in Linux on open source drivers, they're a much better choice than Nvidia for me, I just replaced my GTX 460 with an RX 560.

It's a shame Intel has the high-end laptop market locked up so tight. Good luck getting a modern Thinkpad with an AMD processor.


Similar to my story. Apart from an Athlon 650 around 2000-era we've always been an Intel household. I recently upgraded my gaming PC and went with the Ryzen 7 2700X, partly due to the way Intel have been treating their customers like idiots lately. I'm seriously considering AMD for the graphics card too (currently running a GTX 970) as I may well be moving my main gaming PC over to Linux in the near future.

I replaced my GTX 460 with an RX 560, no regrets at all. It uses the AMDGPU open source kernel driver and MESA on the userspace side, so it's supported outright on rolling release distros with no additional steps, and trivially easy to install on any recent normal release distro.

For Linux Mint 19 (based on Ubuntu 18.04 LTS), I had to add a PPA with the newest MESA drivers, other than that no issues at all.


> I'm seriously considering AMD for the graphics card too (currently running a GTX 970)

Me too. I was running Intel+Nvidia rigs for about ten years, up until last year, when I got a Ryzen 1800X and (this year) a Radeon 7970. Nvidia hasn't been the most ethically behaved as of late (whether it's more or less than Intel, I haven't figured out). Intel will be releasing a discrete GPU in a year or two; who knows how fast it will be, or if they will allow people to benchmark it.


Same, I had bad experiences with the early Athlons and bought Intels after that, never had any stability problems. But now I'm seeing douchey behaviour from both Intel and nVidia. I've been a big fanboy because of the performance, but the business practises are becoming polarising, and AMD seems to have caught up or surpassed in performance terms. Plus, they handled the vulnerabilities disclosure much better (even a smug 'AMD processors are not affected' on more than one).

I always used to root for Intel because AMD made their entire business model off copying (later licensing) Intel's x86 designs (including the model numbers), but that's becoming a lot less relevant now.

I upgraded my desktop's Geforce 550Ti with a Radeon HD 7850, and though it's gotten off to a slightly rocky start (Windows logins are noticeably slower, seems to be a known issue), performance is great, benchmarks showing it on par with the 970m in my laptop. When the Haswell i7 needs to be upgraded, I may start looking at a Ryzen. Never thought I'd see the day.


> AMD made their entire business model off copying (later licensing)

Intel contracted AMD to second source their processors as a requirement for being in the IBM PC (requirement dictated by IBM).

AMD existed long before their version of the x86, producing the venerable http://www.cpu-world.com/CPUs/2901/ which was widely used in minicomputers of the time.

The amazing thing about the 2901 and family was that the engineer could design a board-level CPU with full freedom over register sizes, numbers, ALUs, etc.

Don't forget that the 64 bit x86 ISA was created by AMD and licensed to Intel.


Huh, I didn't know that. I thought AMD started competing with Intel by cloning the 286 and beyond. Had no idea it was for the IBM PC contract.

I knew about 64-bit (Athlon 64, anyone?), but I've always heard x86-64 described as a kludge and an awful hack, not a well-designed architecture. The greater RAM accessibility and native execution of 32-bit code are advantages, but shortly thereafter Intel went multi-core, which seems to have done drastically more for system performance than x64 did.


I'm running an RX 580 4GB for the GPU on my gaming rig, and it handles anything I throw at it really well. They do, however, tend to run hotter than the NVidia chips.

Has Intel just outlawed any review website (gaming, enthusiast, etc) from ever posting benchmarks of their CPUs again? I feel like they didn't think this through.

And even if they did think it through. It should be aggressively ignored.

... or just the reverse: clearly omitted in all benchmarks, stating you buy at your own risk because of Intel's unwillingness to subject its products to examination.

[Sure it's not going to fly given Intel's market position, but it's a tempting thought anyway...]


Right, technically speaking, a review site now only has the choice of either ignoring Intel's terms or no longer considering Intel processors in their benchmarking at all. As a reviewer, I would not benchmark unpatched products as a general rule.

If we tech journalists were actually seriously concerned that Intel would be dumb enough to try to enforce those provisions against us, then we'd just ask the motherboard vendors for updated pre-release firmware that incorporates the new microcode but doesn't come with Intel's license agreement attached. We often go that route anyways because it's easier to update the motherboard than to ensure that every OS you're testing against (especially Windows) has loaded the newer microcode.

Also if you get the update via windows update you get the patch without having ever agreed to the no-benchmarking clause, so you're free to publish anyway.

Windows Updates used to pop up an EULA to be agreed upon before installing certain anti-malware related software, not sure if that's the case anymore with Windows 10 auto-install-and-reboot procedure. I wonder if Intel could/would try to have Microsoft insert another click-to-agree EULA for this upcoming patch?

I got the Win10 update yesterday and didn't have to accept any sort of license first.

It's easier, you can post bar graphs where Intel processors are listed as "presumably very bad". Just to remind the public that they should expect the worst because Intel has something to hide.

Most (all?) gaming websites also recommend the minimum/best system for each game. With AMD's recent bumps in performance and after this bully move, Intel is at serious risk of disappearing from the list of endorsed solutions.

I can think of two theories:

1. It's a mistake. Someone in legal got carried away.

2. The performance of the L1TF mitigation is so awful that someone at Intel thought it would be a good idea to try to keep the performance secret.

(Which leads to option 2b. The performance of the L1TF mitigation is so awful that someone at Intel is afraid that Intel could be sued as a result, and they want to mitigate that risk.)

I would guess it's #1.


Normally I'd give people the benefit of the doubt, but in this case I think Intel has already shown that they have no credibility in anything anymore. They only ever spin everything, the last time anyone said something truthful (former CEO mentioned they needed to try to limit AMD to 20% server share, up from less than 1%), he was not only fired but publicly humiliated.

I think it's #1, but this legal change has been out for more than 2 weeks and there's been no response from Intel on the problem. I don't understand why Intel is screwing everything up lately.

The wording sounds like it bans any and all benchmarks. How would groups like Linus or Dave2D even operate? It honestly doesn't seem remotely enforceable.

It works for Oracle (it is famously illegal to publish benchmarks of DB2 vs other engines), I'm sure intel can make it work for them thanks to Oracle's court case(s).

I'm pretty sure DB2 is an IBM thing.

You're right, it is, I had the wrong DB name.

DB2 is IBM. The court case you were on about was Oracle's RDBMS called Oracle RDBMS. However I believe DB2 might have been used as one of the comparisons against Oracle.

Anyhow, specifics aside, you do make a good point.


oops, yeah it would help if I got the names right.

I believe you are referring to the DeWitt clause.

So is the below technically illegal?

http://phpdao.com/mysql_postgres_oracle_mssql/


I think OP is using the wrong term - breaching terms of contract is not illegal in itself. It just means Oracle will not do any more business with you.

> Oracle will not do any more business with you

Isn't that a blessing?


makes sense. so unless they dedicated the resources to figuring out who did it, etc. it's reasonably moot.

So what you're saying is that sysadmins should all benchmark Oracle products to prevent their employers from being trapped in Oracle land?

Breaching terms of contract is most definitely illegal and Oracle can sue any time they want. It's not like trademark law, where you _have_ to sue even if you don't want to; Oracle can ignore anything that comes out not looking bad for them, while still being 100% able to sue the pants off of anyone that goes "hey look this product is worse across the board in real world comparisons to MySQL and MariaDB".

They can sue but it's not illegal, and they very well may not be able to get any money from you for truthful statements.

Technically yes, but Oracle won't sue as the results are favorable for them in this case.

Yeah that was my first thought, too. They are not even losing in this ;)

But you can't buy an oracle license in a shop around the corner. It is going to be hard for intel to enforce it.

Perhaps if Intel were to play honestly then you might have a point. However there's certainly a few ways they could attempt to enforce it through slightly underhanded, yet pretty typical practices for how many multi-nationals like to operate in this day and age.

* cease and desist orders. They could probably argue improper use of trademark or something. And by "could" I don't mean "they have a legitimate legal case" but rather "a flimsy one but one that is still scary enough that few people might want to take the risk / expense testing the argument in court".

* many benchmarks are run by reviewers who might have access to components before they hit the shelves. It would be trivial for Intel to end that relationship. If it's suppliers further down the chain who are providing samples to journalists and reviewers, then Intel might put pressure on those suppliers to end their relationships with said journalists. This might break a few anti-competition laws in the EU but it's not like that's ever stopped businesses in the past.

On a tangential rant: I think the real issue isn't so much whether it is enforceable but rather the simple fact that companies are even allowed to muddy the waters about what basic journalistic and/or consumer rights we have. I'm getting rather fed up of some multi-nationals behaving like they're above the law.


I literally got one with a book on databases as a student, so you kind of can? But the power of the argument is that if it's possible to put this kind of clause in place and enforce it by law for something that's harder to get, then Intel putting that clause in place for the general public has legal precedent when they do want to take someone to court.

It actually makes it easier for Intel to argue that chips are such specialized bits of equipment that even though your average joe can _get_ one, they won't understand how it works and so only highly trained professionals who have been certified by Intel would be able to reliably benchmark their products. "As the average user would interpret the results incorrectly, their publication would hurt Intel's bottom line". And suddenly you're 95% of the way to having won the case already.


You don't need to buy anything, just download from their website.

A hotshot lawyer could sue them for this anticompetitive clause for big $$$.

(it disallows comparing the product against the competitors)


Sue Oracle.

Yeah, my guess would be it's #1 too; I can't believe Intel could be so naive as to think no one on the internet will benchmark the performance, given how all these nasty CVEs are raising such a stink. Serious customers will demand performance numbers; you can't simply answer them with "blocked due to legal".

I am not a lawyer, but I question whether this is enforceable in the USA, and I don't just question, rather state, that this is not enforceable in the EU. All click-through / shrink-wrap licensing the end user is forced to or has to accept automatically is invalid.

Regardless of the details of enforceability, this just sends the wrong message to the rest of the industry and community, and doesn't inspire confidence in Intel.

So, Intel indeed redacted the "no benchmarks" thing: https://news.ycombinator.com/item?id=17833777

If you consider how fundamentally broken speculative execution is from a security standpoint, then a fully "fixed" processor will be significantly slower than an unfixed one. The benchmarking clause rather clearly shows this.

A corporation the size of Intel doesn't do this sort of thing accidentally, especially as there was no good reason for them to be modifying the terms in the first place.

I'd say that it is related to L1TF, but not to keep it secret. It's additional ammunition to use in court when they get sued for performance loss.

> Cloud Company: Your honor, the security flaws in the hardware and microcode provided by the defendant necessitated the installation of updates, also provided by defendant, which resulted in a 30% loss of overall performance. Since our business model is predicated on selling the processing power of computers that have CPUs manufactured by defendant installed, they are liable for this loss in productivity.

> Intel: Your honor, plaintiff could not possibly prove any loss in productivity. If you'll examine Exhibit A, the Intel microcode EULA, you will see that it expressly prohibits benchmarking. Whether plaintiff is claiming they did these benchmarks themselves or a third party did them is immaterial because our license expressly forbids doing so. Plaintiffs need to show a loss of productivity without relying on performance benchmarks and therefore need to show that the workloads prior to and after the microcode has been installed are equivalent and that the results are detrimental.

Now, I don't expect most judges would go for it since EULAs are notoriously weak, but it does give them ammunition to impugn the evidence. It's always possible a judge or jury would listen to that.


"could not possibly prove" [without violating the license]. That doesn't invalidate the Cloud Company's claim, it only offers Intel an opportunity to countersue. In that case, they can likely only sue for copyright infringement damages.

What are the teeth on an admitted-in-court EULA violation?

Looking at the license, the only thing it grants the end-user and which Intel could revoke is the license to use the software:

> Subject to the terms and conditions of this Agreement, Intel hereby grants to you a non-exclusive, non- transferable right to use the Software (for the purpose of this Agreement, to use the Software includes to download, install, and access the Software) listed in the Grant Letter solely for your own internal business operations. You are not granted rights to Updates and Upgrades unless you have purchased Support (or a service subscription granting rights to Updates and Upgrades).

So yeah, they could revoke your license to the update and leave you with a lot of insecure silicon...but that's what you had before the update (and, in all likelihood, what you still have after the update, just with a known flaw patched and the system lots slower). I don't even think they could sue you for copyright infringement, because you're not violating copyright, you're violating the EULA.


I didn't say it was particularly strong. The point is that it's something. Intel could claim that it's there because benchmark software impacts performance itself, and so they're explicitly making no claim of warranty when it's present. The point isn't that it's going to stick, just that it's more stuff to muck up the legal process with to delay judgement.

Weighed against the normal risks of someone actually reading the EULA, it seems like a minor thing that may help in some way.


After it blows up I'm sure they'll claim it's the former, regardless of the latter.

In 2014 I would certainly assume (1) as well but Intel has been getting less and less open and also more and more willing to creatively stretch the truth in their marketing over the last few years in a worrying way. Currently I think (2) is about equally as likely as (1).

The author updated the post. I'm guessing #1 (I work at Intel). Text and link below.

UPDATE: Intel has resolved their microcode licensing issue which I complained about in this blog post. The new license text is here (https://01.org/mcu-path-license-2018).


I'm going with option #3. Intel sold me a warranted part without these terms. Now they're trying to alter an existing sale after the fact because they need to fix a defect under that warranty. I will do as I please with my property and they can talk to my attorneys.

As a side note: Some of the license changes also block Debian from updating their intel-microcode package[1].

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906158#14


This seems like it would be a big deal considering this whole thing is related to servers, and I have to imagine some server operators are running Debian? Maybe at the bare metal level they're all running RHEL, which I presume doesn't care about the license restrictions.

From the article linked in TFA[1], Debian appears to be the only distribution that is refusing to release it. Gentoo has made it so that you have to agree to the new license when upgrading your intel-ucode package, while other distribution vendors (Arch, Red Hat, and SUSE) all appear to be shipping it without issue.

The argument from Intel is that the new changes don't actually affect distributions, as distributions are given the right to redistribute the microcode in the license (and this is separate to allowing third parties to publish benchmarks). Either that, or the lawyers at Red Hat and SUSE missed this somehow (though this is unlikely -- at SUSE all package updates go through a semi-automated legal review internally, similar to how openSUSE review works).

I do understand Debian's ethical issue with this though, and I applaud them for standing up for their users (though unfortunately it does leave their users vulnerable -- I imagine it was a difficult decision to make.)

[I work at SUSE, though I don't have any hands-on knowledge about how this update was being handled internally. I mostly work on container runtimes.]

[1]: https://www.theregister.co.uk/2018/08/21/intel_cpu_patch_lic...


> Gentoo has made it so that you have to agree to the new license when upgrading your intel-ucode package

As well as not redistributing it to mirrors, as they are unable to ask mirrors to accept a new license. [0]

[0] https://bugs.gentoo.org/664134#c2


My guess is Intel will revert the license change soon. It's just too absurd to stay. But if not, I wonder if distros could have two packages, named with appropriate and well-deserved passive-aggressiveness, e.g.: intel-microcode-insecure and intel-microcode-legally-restricted.

And maybe they'll mark benchmarking packages as conflicting with intel-microcode-legally-restricted?

Installing the benchmarking package, or running it, isn't against the license. Providing or publishing (comparative?) numbers while the microcode package is applied would be.

There is nothing passive-aggressive in intel-microcode-legally-restricted, it's just a statement of fact.

It's not quite that clear cut IMO.

E.g. "intel-microcode-legacy" and "intel-microcode" would be more diplomatic (disgustingly so IMO).

And one could argue that every package in non-free would deserve a "-legally-restricted" suffix.



Debian has a dedicated archive, named "non-free", for everything that does not comply with the DFSG.

The DFSG is a set of guidelines that define what can be considered true FLOSS.

The reason is to protect users from legal risks.

https://www.debian.org/social_contract#guidelines

https://www.debian.org/doc/debian-policy/ch-archive.html


Would be cool if this was blown out by the courts.

This is beyond idiotic. Any company that's about to release this in their production server will want to benchmark the effect of the fix. And some of them will have to provide the results to their customers.

Oh intel. What has happened to you? Do they really think this could even remotely be enforceable?

Oracle and MS SQL Server have had no benchmark disclosures in their licenses for a long time, so Intel probably thinks so.


I don't think it's illegal for the author to violate the terms of the license that they agreed to in order to be able to run Oracle RDBMS legally.

Feels like it makes very little sense to even attempt to suppress benchmarking. Like, comparing products is at the heart of capitalist / free market trade / society, surely?

> (iii) use or make the Software available for the use or benefit of third parties

i.e. you cannot allow anyone else to use your CPU?

I get that the intent is probably about distribution, but the software runs on the CPU so is being used by whoever is using the CPU.


Forget benchmarks, how is (iii) (“You will not [...] use or make the Software available for the use or benefit of third parties”) compatible with shared hosting and rented virtual machines, where the provider has to apply the microcode for the benefit of the guests?

Forget shared hosts, how about when I write something for my client on my employers machine? I'm using the software for the benefit of a third party...

I read that license line as that you can't make it available for the benefit of others, not that you can't use it for the benefit of others. But of course that's the logical interpretation and the only one that matters is the worst possible interpretation. I'm not a native English speaker nor lawyer, would be interesting to see another opinion on this.

This TOS is driven by lawyers, not by business decision makers

Are lawyers running their business, or are business decision makers making business decisions?

Lawyers, I might guess, if this TOS is anything to go by.

So business decision makers made the decision to let their lawyers be business decision makers?

It certainly looks like that. Not sure it was the wisest move though...

Bring on the lawsuits. Ignore the patches and sue Intel for the underlying security flaws. When they point to the patches, clearly state that because of the new license, they do not solve the problem and will not be applied. No one signed up for this when they bought an Intel CPU and that's saying a lot considering all the bullshit we do sign up for when buying one. This is outrageous. Intel should be sued in a class action. Whether they could have known about the exploits or not is irrelevant at this point. They refuse to provide proper patches without this license which is equivalent to providing no patches at all while knowing about the exploit. They have provided faulty hardware without a way to fix it. A lawsuit or a few thousand is the only way to get resolution on this.

How many mandatory binding arbitration clauses and/or class action waivers have Intel hidden in their license agreements over the years? If they did it right, they never have to worry about a class action lawsuit (and the horrible press that comes with it) anytime soon.

Which, of course, makes the kind of systematic deception Intel is trying to pull off here much easier. It's a feature!


I could be wrong but I think binding arbitration is almost exclusively an American thing. The rest of the world can sue.

I wasn't aware that CPUs have licenses. It's certainly not something I've ever agreed to on purpose or otherwise. There is no box or manual to imply that I have ever agreed to such a license. I'm not doubting you, I'm just wondering where this license lies and when/how did I agree to it?

I agree, but is it possible this is aimed at avoiding class action suits about falsely advertised performance, by attempting to make the new benchmarks inadmissible?

Is it even legal to do so? Tomorrow Intel can ask us not to benchmark the clock speed.

Sorry if I stress this even one more time, but we badly need 100% open iron, I mean something beefier than SiFive. If there is any effort in this direction, then, say for a year, most donations should be diverted over there. Closed hardware is becoming the unavoidable medium used to push closed firmware into everyone's system, that's a lot more important than benchmarks.

I doubt RISC V is going to solve the problem you're complaining about, even if it becomes "beefier." RISC V is going to be a launchpad for proprietary custom accelerators, which is great, but will still involve pushing closed firmware.

Have you seen https://www.raptorcs.com/TALOSII/? Typing this on my own right now.

Fails the 'reasonable pricing' check. Who's gonna pay 6x of what you pay for a comparable intel system? This is only for the super-enthusiast.

Technology often starts out expensive and enthusiast only, but if Intel keeps throwing these wild punches in every direction, why wouldn't we see a steady migration to alternatives? AMD is definitely benefiting from Intel's recent decisions, I am sure Raptor is also starting to get more business. A few more years of shady Intel decisions combined with sufficient negative press and I am sure the market will gladly decide. There are surely plenty of large corporations who can afford the costs and would prefer being able to audit everything themselves. This seems like the perfect storm for competition to finally have a chance.

You cannot deliver competitive quality, performance and pricing right away, completely agree. But there's generally a threshold to how much more expensive it can be in the beginning, and more importantly, how much you benefit from it. The problem here is that only a few people even understand the problems with what Intel does, and even fewer people care. My games still run fine so why should I care? HN is a real echo chamber in cases like this and easily gives a false impression. As you also mentioned, AMD is the only realistic winner from all this, since it's also x86 so you can switch over and everything stays the same. But in general, remember that people are very good at forgetting, and at just accepting that "everything's fucked up" is the new normal. Intel only needs to keep delivering competitive performance and people will keep buying.

I'd really love to see a big shift to a new architecture, RISC V, OpenPOWER etc. get me really excited because as a nerd new technology in general is always interesting, but the above simply is my prediction of the future based on history. I mean, how long ago have the first serious Intel ME vulnerabilities been disclosed? What happened apart from a short shitstorm on tech websites and some small companies offering laptops with crippled down IME that only complete neck beards care about?


But RISC-V processors will always be closed source, so you cannot crowdsource vulnerability finding...

Why would they always be closed source?

RISC-V cores are not "open" iron. The only thing open is the instruction set, a PDF you download from the web. You have to pay $ to be in the governing body that adds new instructions. Not really an open source kind of thing...

> Another issue is whether the customer should install the fix at all. Many computer users don’t allow outside or unprivileged users to run on their CPUs the way a cloud or hosting company does. For them, these side-channel and timing attacks are mostly irrelevant, and the slowdown incurred by installing the fix is unnecessary.

lol, javascript


but the sandboxes, think of the inescapable sandboxes!

"Many customers" meaning people and orgs running server software on direct hardware. Does your caching or database server run user provided code? Is it accessible to the outside in any way? If not, then maybe it doesn't need the patch.

The article says "Many computer users" not "Many customers". Furthermore when the article mentions "customers" elsewhere we can probably assume it means "Intel customers" which is a much greater subset than the group you're talking about, and which would be probably less than 1% as numerous as the "Many computer users" that run javascript.

lol, javascript


> which would be probably less than 1% as numerous as the "Many computer users" that run javascript.

So? Many does not mean most. There's a choice to be made: apply the security patch and accept the performance loss, or don't. Some people may not need to in order to remain close to as safe as they were previously, based on their configuration, for some of their systems, and I would guess the number of such systems easily runs into the millions. This is a benefit for those people, and worth mentioning, even if it's not nearly as large a number as the total number of CPUs or customers.


In this case, many does mean most.


Are you trying to imply that because somehow ASLR can't protect you and they show an example in the context of JavaScript, that somehow means your Postgres server/service is immediately at risk?

At least, that's all I can think you are trying to imply, because you didn't actually say anything, you just dropped a link. It's hard to have a useful conversation when that happens.


Have timing attacks been done successfully in JS? I imagine it's much harder since you have much less low-level control and the engine might impose too much noise. However, wasm is a different story.

web search "spy in the sandbox"

Yes, someone did an ASLR bypass in JavaScript, and a key component was being able to measure time accurately.

https://www.cs.vu.nl/~herbertb/download/papers/anc_ndss17.pd...


I think browsers currently have to limit the precision of their built-in timer APIs and they had to kill shared mutable state support for their thread APIs so attackers couldn't implement their own.

>Another issue is whether the customer should install the fix at all

Microsoft will surely decide for me on my Windows 10 gaming PC. Better save my work (which I sometimes do even on a gaming machine) frequently lest the masters deem it fit to restart while I'm away having lunch if they decide I can live with the performance hit.


It sounds a lot like you are blaming someone else for not having control of your computer.

Just mark every 2nd Tuesday of a month as patch day and you won't have surprises!

• Updates are not served to me at regular, known intervals. Perhaps it's due to a progressive rollout policy of some kind.

• Certain days come up that I, the paying user, do not want to patch on. Microsoft wins this disagreement and I lose. This occurs in a glib fashion with a message like "Hey, just a heads up, we are going to restart your computer" (whether I like it or not). It is my computer, there is no "we!"

It will absolutely close programs with unsaved work if I am not there.

• Maybe I don't want the performance hit on my gaming PC. This is another element of surprise: who knows how bad it'll be? Certainly not the users if Intel and Microsoft have their way.

• Yet another element of surprise: I've had hardware stop working after updates.

I cannot wait until it becomes viable to escape the toxicity of companies such as Intel and Microsoft.


You are best placed to make that assessment on your own needs. Know that many of us have done it and it turned out to be easier than we thought it would be. Good luck with the analysis!

> I cannot wait until it becomes viable to escape the toxicity of companies such as Intel and Microsoft.

This has been "viable" for a long time now. Ignore the FUD and just try it for yourself. You might be pleasantly surprised.


Especially now with Valve's Proton.

I've been watching this very closely. I think this project will finally tip the scales and make it viable for me (I've had problems with many games when I attempted to switch in the past).

You probably saw this, but this is a google doc of what games people have tried in proton.

https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmF...

i'm pumped to try monster hunter. that game was the only reason I was going to use windows, so hope it runs well


It has actually been perfectly viable to escape Intel and (especially) Microsoft for a very long time. Lots of people on HN have done so.

Buy an AMD machine, install Ubuntu, and you're done.


>gaming PC

>Ubuntu

Hope he likes Mahjong and Tux Racer


Have you not heard of Valve's Proton? It's the next best thing!

yeah.... OpenGL is so far from Dx in terms of performance that it is laughable.

It has potential, but when I want to unwind and fire up a game, the last thing I want to do is deal with the rigmarole of alpha software

WINE + DXVK can pretty much run any modern game (with varying performance).

Microsoft's stance on the issue:

Win10 is free* (so it does this) - Professionals who need to override these settings should get Win10 Pro.


I have Win 10 Pro. AFAIK it handles updates the exact same way. How do I disable automatic reboots on my Win 10 Pro machine?

Since when is Win 10 free? The Microsoft store seems to indicate $139.

> and you won't have surprises

They seem to be getting less aggressive, but I've had my machine reboot with less than half an hour of notice. No notice before leaving the machine, return 30 minutes later to find it rebooted. So "just mark every 2nd Tuesday of a month as patch day and never leave your machine(s) unattended on that day and you won't have surprises!".

I've also had (in a recent month) them reboot overnight, aborting long running processes that I had to clean up and restart, after explicitly checking for updates before starting those processes (none being found), and making sure my "active hours" were set such that it shouldn't restart.

Earlier this year I had a patch mid-cycle (not close to the normal patch Tuesday) that caused a reboot too. IIRC it was a flash-for-edge/ie update. An update for a feature I don't use/want and can't remove caused a "random" reboot at an unpredictable time. Ta muchly...

I understand MS wanting to get away from their desktop OS's insecure reputation which is in part caused by people never installing updates meaning that worms and other malware were able to run rampant relatively unchecked, but they seem rather wrong headed about how pushy to be in fixing that.


There have been many times I've started in the morning, expecting to work on the results of whatever process I left to run, only to be greeted with an empty desktop.

When I review the update settings, they're as negative as I can make them.

I shouldn't complain too much, sometimes it kindly starts Visual Studio for me, saving me 20 seconds.

It could have asked, or even warned that I'd be wasting my time setting up the nightly job, only for it to be cancelled midway through for little reason.

Basically, it's just a consumer OS, and shouldn't be used for actual work.


My favorite story about that: I worked in furnace optimization, giant electric arc furnaces that use half the electrical capacity of a small town generation plant. These things use 'pencil leads' the width of a child, meters long, that strike an electric spark an inch or two in diameter and 12 inches long, from the probe to the pile of recycled steel in the crucible. Melt it for new steel.

Anyway we had a PC running an optimization on the arc length, probe distance etc to get the best instantaneous coupling, trying to save electricity by putting the energy into the steel (to melt it) instead of burning up the probe, creating heat etc.

One morning the plant called, had run overnight but sometime in the late night the thing stopped adjusting, ran 20 minutes before they noticed without moving the probe or adjusting the current. Just sat there burning up carbon and making heat.

You can see where this was going. Investigated, and the PC had gone into 'power save' mode, suspended the app, gone dark. Leaving the probe at whatever setting it was at.

I estimated at the time, that the power wasted during those 20 minutes was more than all the electricity ever saved worldwide by Windows 'power save' feature. Yes, it would have been better had Microsoft Never Invented Power Save, than to have had that incident.

You can say it was our fault, guy installing didn't remember to disable power save. But still, people do things like that. I prefer to blame the computer.


I like those stories.

Although in this case, it's possible to disable power saving.

(or have they stopped that, too?)


I love this story and it underscores something I've come to believe very strongly as an engineer:

Complexity is evil.

The benefits of adding complexity rarely outweigh the downsides, especially when the "embodied energy" of both creating and maintaining that complexity and the follow-on complexity it adds to other systems is considered. Making things a lot more complex to save energy is usually a wash (or worse) because of the energy you spend creating and maintaining complexity.

Complexity usually creeps into systems as a result of piecemeal solutions to problems, marketing driven feature-itis, or attempts by engineers to be overly clever and show off. The latter is incredibly common in IT and programming.

http://www.ariel.com.au/jokes/The_Evolution_of_a_Programmer....

Complexity is evil for efficiency, but it's even more evil for security. The number of states a system can enter is an exponential function of the number of variables and linkages in a system. More complex systems are just exponentially more likely to have vulnerabilities for unavoidable combinatorial reasons. The motive for adding complexity doesn't matter, meaning that complexity added to improve security can easily backfire. Furthermore, as systems become more complex they become too big to be analyzed, making it much more likely that major security issues are hiding in plain sight and waiting to be discovered. Black hats always have the advantage here because when you attack a system you get instant feedback about whether your attack worked, while security auditing offers no feedback as to whether or not you've closed all the holes.

Complexity tends to accumulate until systems collapse. Getting rid of it is very hard, since users/customers start depending on every feature and nothing can be removed without breaking something.

The x64 architecture seems to be teetering on the brink. AMD's architecture seems better but we'll see.


This doesn't work. They can hit on other days too- with no warning.

Maybe best to disconnect from the internet when not using- putting to sleep doesn't do it.


My opinion of Bruce Perens just decreased markedly.

Hopefully he will edit this blog post with better advice to the "casual" computer user.


Before Zen, we all kind of assumed Intel was so far ahead that AMD was more likely to be out of business before they would ever be a credible threat again.

I actually thought Intel must have had some tricks up their sleeves in terms of performance gains that we hadn't seen yet, simply because there was no market need to roll them out and they had so many years of coasting on marginal gains.

Seeing them taking this stance looks a lot like the microcode hit is that bad, and that the emperor has no clothes.

Clearly, they don't have an answer to AMD at all. If this is true, their shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.


>they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

Big companies rarely innovate without competition around.


This is certainly a myth. Everyone knows, including big companies, that as you grow in size you have a harder time innovating. The classic "lean startup" book mentions this issue and gives the example of Intuit's in-house accelerator/incubator. No way Intel doesn't know about it too.

https://www.forbes.com/sites/forbestechcouncil/2018/04/19/in...


What myth?

As you say, bigger companies have a harder time innovating.


Yeah, but it's not necessarily related to competition; size is the primary factor. At least I believe that was his point.

I think it is related to competition. Lack of competition brings complacency of executives, who think they can get free money for shareholders without having to reinvest into innovation. (And I think that was carlmr's point.)

Of course, the bigger you are, the more likely it is that there is no competition at certain times.


Yeah, my point was that a big company can coast without innovating. They usually have a lot of stuff to sell that they have done in the past. The pressure to rejuvenate comes from outside, rarely from the inside (surely there are exceptions as with every good rule).

If you can survive without investing, a lot of companies choose not to.


Apple / Google / Amazon don’t seem to struggle?

Microsoft / IBM and now Intel definitely did.

I think it has more to do with the leadership of the company than the size of the company.


Does Google innovate?

-ss


Have you seen how many messaging apps they've made in the last few years?

And it's still a mess that nobody wants to use.

It took over 500 lightbulbs before there was a usable one. Just don't give up.

I wish they would fix up one of the existing ones rather than keep shoving out new half finished ones. I poked around in a few of them from a web client and phone perspective for a while and still don't feel like they beat Slack.

This is sarcasm right? Messaging apps are the height of "innovation?"

:-)

Wasn't Bell Labs part of a very big corporation when they invented basically everything about the modern world?

Bell Labs invented Google, Facebook, etc? Wow, did not know that.

But as you say, the labs were a very small, independent part of a very big corporation, which is likely why they did interesting things. The other thing is that they were forced by the government to license transistor patents to others, who actually created almost everything else.

By your theory the military invented the internet, so they invented the modern world. A huge and wasteful military is the key!


Google and Facebook would look a lot different without transistors.

And lasers. And Unix. And switched networking. And binary digital computers. And long-haul undersea cables. And the first successful communications satellite. And data networking.

http://blog.tmcnet.com/next-generation-communications/2011/0...


Yes, but apart from that, what have the Romans, er, Bell Labs ever given us?

They didn't invent Google and Facebook, but they're responsible for transistors, lasers, Unix, C and C++, CCD digital image sensors, long distance microwave radio relays, the first transatlantic phone cable, MOSFETs, communication satellites, and cell networks.

Not modern internet services, no, but all of the infrastructure they're built on is grounded in Bell Labs' work.


All of which they were required to license on FRAND terms. Consider that the basic research done by Bell Labs was paid for by a royalty paid to it by Western Electric and the various Bell operating companies; it was a blanket royalty, not a per-product one, too.

Google and Facebook are ad companies. Their “innovation” isn’t even in the same ballpark as what came out of Bell Labs

TBH, your examples are not things people should be particularly proud of. If you had mentioned e.g. Stack Exchange and similar human-grade tech, the argument would look somewhat better. Invasive tracking and manipulation through ads is actually much closer to a military style of thinking.

>Bell Labs invented Google, Facebook, etc? Wow, did not know that.

Compared to what Bell Labs invented, Google and Facebook are insignificant.

We could go back to Altavista and no-Facebook and we'd be more or less fine.

Giving back the Bell Labs technology would be a much harder hit...


No-Facebook would arguably be a better world the older I get...

AT&T's early dominance was the direct result of the invention of the transistor that came from their own labs. From then on, the company was committed to spending massive amounts of money on developing new technology. Even if AT&T didn't directly invent a new technology, they were still one of the few companies in the world that could actually afford to buy new toys and put them into the hands of researchers who would find novel uses for them.

The Computerphile YouTube channel has some interviews with computer scientists who worked at Bell Labs during their heyday. It's incredibly interesting to hear how they screwed around with early Linotype printers (which cost $100k+ in the '70s) and did things like reverse-engineer the font formats to create custom chess fonts.


That's not accurate. AT&T dominated telecommunications for decades before the transistor. Many of Bell Labs great scientific inventions predate the transistor, such as information theory.

Sure, but the transistor ushered in the long-distance era and helped AT&T establish their monopoly (and their eventual breakup). Regional telephone services were already beginning to drive down costs in local markets.

Maybe you're thinking of the triode, which AT&T did have a monopoly on around 1915, when they deployed the first transcontinental long-distance service using vacuum tubes to boost the signal. They bought deForest's patents and filed many more of their own on amplification circuits.

They had direct-dial long-distance by 1951, using relays and tubes.

The transistor started making a difference in telephony with the release of the 1ESS switch in 1965. But transistors were a commodity by then.


I think it is also that with a huge monopoly in an area, you can capture most of the benefits from basic research. With lots of competitors basic research gains go to all the competitors too and at some level you don't capture enough of the benefit to justify it.

Add to that, in return for their monopoly status, they released much of their IP to the public domain. Imagine how electronic innovation would have been stifled if they had held or sold their IP instead.

No, Bell Labs did not invent basically everything about the modern world.

yeah, part of it was invented at PARC

you down voters are not giving Al Gore enough credit.

This is also the topic of the book "The Innovator's Dilemma" by Clayton Christensen.

I don't think this is necessarily true for big hardware companies.

If you're selling a subscription to a best-in-market service, there's not a lot of motive to innovate, agreed. Maybe you'd try a new product or a premium variant, but there's no reason to sink effort into advances that won't let you expand userbase or raise prices.

But for Intel? Before AMD got going, Intel's biggest competition was itself 9/18 months previously. Innovation wasn't just for new devices and new buyers, it was how they sold updated processors for machines that were already Intel Inside. They're not waiting on processors to die, they're actively pushing for them to be replaced for performance reasons.

That might create an incentive to release 'good enough' updates and dole out big improvements gradually, but in practice any of that which happened was already ending. Intel appears to be up against the wall regarding 10nm even with a competitor, and has been attempting major innovations to handle 7nm and below for years. With a revenue stream that relies on annual improvements to their product, they seem to have been leaning hard into innovation and struggling, rather than waiting for a competitor.


That is a really nice point about competing with themselves. Intel's volume is so big that they have to sell to the same customer every 2-3 years, even though CPUs can in principle last 10 years or more. So they must increase performance substantially to justify the sales.

This is a great argument to be against the recent trends of companies moving to subscription models just for the heck of it.

Definitely.

The common complaint about subscription models is obviously good: if the company folds you have nothing, instead of an unsupported product. But it neglects the other issue, which is that companies intentionally cut off the possibility of "good enough" to guarantee revenue.

I don't think it's an accident that products like Microsoft Office went to subscriptions around the time it became very hard to imagine a new feature actually worth paying to upgrade for.


The only problem with Intel is the price point. Given that the R&D is sunk and the COGS are about 8x, their profitability is entirely a function of market demand.

> Clearly, they don't have an answer to AMD at all

They still have a huge opportunity in CPU+FPGA; they bought Altera for that purpose.


But CPU+FPGA will forever be a tiny niche, right? I don't see any path where mainstream programs get a boost from FPGA.

That depends what you mean by mainstream. These days the cloud data centre is a mainstream market for hardware vendors. In that environment FPGAs can make sense. They might have advantages in throughput, latency, power consumption, security (no spectre), reliability (no software updates). For example, Microsoft use them for Azure virtual networking, machine learning and something inside Bing. You can imagine a world where every blade server has an FPGA. In fact, you can imagine a world where many blade servers have an FPGA and only a small support CPU.

You can spin up an FPGA-accelerated EC2 instance right now. There are a few highly specialised applications where every scrap of performance matters, but for the most part the software development costs are prohibitive compared to just spinning up more CPU or GPU instances.

https://aws.amazon.com/ec2/instance-types/f1/



The software needed to drive them needs to become much better (documentation, ergonomics, everything) before that will be a realistic option.

As far as I know, there are zero applications on the client or desktop side that can take advantage of an FPGA.


I can see there being a market for that. The reason nobody uses FPGAs at the moment is because nobody has them outside of specialized applications.

If Intel can release a CPU with a built-in FPGA and everyone has one, software developers will take advantage of them. I can see stuff like video editing programs, compression algorithms, etc taking advantage of that.


The other thing is that they're a nightmare to programme and they're expensive!

Yes, but the speedup can be huge. For example, for NFA processing, using (a couple of) 2W FPGAs against a 200W GPU: "GPUs underperform FPGA by a factor ~80-1000x across datasets" http://people.cs.vt.edu/~xdyu/html/ics17.pdf

The state transition table is encoded in logic for FPGA and as global memory table for GPU. Why didn't they try to encode state transitions as code on GPU?

Also, did they try to use the enhanced locality introduced by processing several streams on the GPU? E.g., if you keep states sorted by the tuple (state id, stream id) across all your streams, you may get a more memory-controller-friendly access pattern. I haven't seen mention of that technique (which really must be considered after Big Hero 6 [1]; they used it to practically never miss caches in the whole movie rendering process). Big Hero 6 is from 2014, the paper is from 2017.

[1] https://en.wikipedia.org/wiki/Big_Hero_6_(film)

I really do not like papers like the one you linked. One system gets all of the treatment while the other ones get... whatever is left.

I guess that had they tried these techniques on GPUs, they would have gotten a performance gap that is much smaller than reported.


Today they are. Programming can be made easier with large frameworks. Cost can be reduced with higher volume.

Looking forward to that, without 'you need to buy Skylake CPU that is just as fast as the one you had before to get H265 decoding'. Good riddance.

That's nonsense. PCI/PCIe FPGA accelerator boards have been around for ages, mostly used by finance traders and for machine learning/math-intensive computations, before cheap GPUs with much simpler programming models (programmable shaders, CUDA and OpenCL; no need to write e.g. matrix multiplication in VHDL/Verilog anymore) pretty much relegated them to obscurity.

FPGA is great if you need to talk to some hardware very fast/on many pins. E.g. something like a network router where you are shuffling packets between many high speed interfaces. Or doing a lot of measurements/interfacing a bunch of high speed sensors.

But not for general-purpose software: GPUs are faster, easier to develop for (with good tooling), and much cheaper for doing that today.


Not at all. Specialized coprocessors are actually mainstream, from network card offloading to the small chips in every iPhone, and for a reason.

The obvious way forward is universal specialized coprocessors, reprogrammable for the task(s). Better if tightly integrated with the memory, buses and CPUs.

The weak side of FPGAs historically is programmability and especially the tools. But since interest in FPGAs has been growing exponentially in the open-source community in recent years, things may change.

And by the way, 10 years ago you would have said exactly the same 'niche' things about GPUs.


How is every consumer device with a display "niche"?

Dedicated GPUs were not required historically to drive displays, and this was done by the CPU instead.

Similarly, we’re finding more functions we can take away from the CPU and migrate to dedicated circuitry (FPGAs) that can handle those tasks more efficiently than the CPU can.


dedicated circuitry != FPGAs.

GPU's avoid the overhead of FPGAs while still retaining a lot of flexibility.


Because they were all 2D. And the idea that GPUs would be used for mobile computing was not obvious.

Maybe 25 years ago. 10 years ago they were commonplace. Intel shipped integrated 3d graphics by no later than 1999:

https://en.wikipedia.org/wiki/Intel_810


Okay, I should have clarified that the 'niche' thing 10 years ago was not the GPU per se, but of course GPGPU: the GPU as a specialized coprocessor for general-purpose computation, not restricted to graphics processing and output.

Lol I don't know who bought that. Anyone who wanted 3D was buying Voodoo, RIVA TNT or ATI RAGE. Everyone else was happily 2D and running Word 95.

But, to clarify, I was speaking of consumer/mobile. The original iPhone was quite revolutionary for having a decent PowerVR graphics chip. High end symbian phones just had a CPU. See for example https://en.wikipedia.org/wiki/Nokia_6110_Navigator or https://en.wikipedia.org/wiki/Motorola_Razr2

Even though GPGPU was already big in 2008, people still thought of it as a difficult to use coprocessor for big compute jobs. Much as people consider FPGAs now.


I didn't highlight it as a desirable 3d processor, I pointed it out because 3d was essentially becoming default at that point.

And the first iPhone shipping with a powerful graphics chip is a counter to your argument that the future of mobile wasn't clear. The people with the ideas wanted a graphics processor.


The fact that the entire industry apart from Apple (a computer company, not a phone/device company) were completely ignoring 3D exactly shows that it was NOT obvious or clear. After Apple demonstrated the potential it became clear.

Specialized coprocessors are super common but they're basically all ASICs with nary an FPGA to be found. In theory you could have reconfigurable co-processors on a mainstream chip but nobody does that - partially because the latency involved in reconfiguring the FPGA makes it a losing proposition in most cases.

There are uses for FPGAs where there's enough money at stake for the hardware development but the number of units is small - stuff like high frequency trading or many defense roles. Or in the development of new hardware. But it's pretty niche.


At SC and CUG this year the main focus was that in less than 10 years IPC improvement is going to go away (I am not sure where IPC improvement was in the last three years anyway). The next step is to make CPUs as heterogeneous as possible, like an SoC. Both Intel and AMD are going there, but we need to sit and see which direction is going to get the momentum, like what Nvidia has done.

The new ARM AI processor looks like it'll play well in this space for some workloads:

https://www.nextplatform.com/2018/08/22/arm-stands-on-should...

Not a thing for everyday desktops, but looks like compute competition is er... heating up. :)


It's remarkable how little we've seen from that acquisition. It's perfectly possible that Intel has butchered the acquisition the same way they have with many others.

A merger of two huge companies is more often considered a failure than not.

> a 2004 study by Bain & Company found that 70 percent of mergers failed to increase shareholder value. More recently, a 2007 study by Hay Group and the Sorbonne found that more than 90 percent of mergers in Europe fail to reach financial goals.

http://edition.cnn.com/2009/BUSINESS/05/21/merger.marriage/

Especially when the merger has to be deep and requires engineering teams with different cultures to join and work together on the product. So I'd consider the release of the first Xeon+FPGA three years after the acquisition a moderate success.


Billion Dollar Lessons by Mui & Carroll goes through a lot of these grand strategies and demonstrates how much of a bonfire they turned out to be.

The Xeon + FPGA was actually underway before the acquisition and is based on pre-acquisition technology (Arria 10).

Computer hardware has a very long lead time between product concept and metal-in-your-hand. Combined with the pains and huge initial slowdown of a megacorp purchasing a medium-corp, I think the real fruits of that acquisition are yet to be seen. I bet it took at least a year just for management to get their bearings on straight.

I would have guessed additional lead time for Altera to move their designs from TSMC to Intel process, but it looks like Altera has been planning to fab on Intels 14nm since 2013[0].

[0]http://chipdesignmag.com/display.php?articleId=5215


Equally troubling for Intel is the fact that they're losing their advantage in fabrication. The performance of their architecture is going backwards with every microcode patch, while their move to 10nm is hugely delayed.

https://www.tomshardware.co.uk/intel-cpu-10nm-earnings-amd,n...


I'm kinda curious to see how Zhaoxin, Hygon and other manufacturers in the x86_64 space are going to play out. It would be nice to see some real competition, not just Intel vs AMD, in this space.

There are other services like Packet that offer bare-metal hosting on small Atom and ARM processors. It'd be nice to see some alt x86 processors in this space.


Hygon is just AMD's EPYC CPU [0]. I am uncertain if they will differentiate themselves more as time goes on, but at the moment almost the only difference in the kernel is that it has a different name.

I think you're more likely to see some ARM CPUs which have comparable performance to low-end and mid-range x86 before you see a new x86 competitor [1] - the overhead to making x86 perform well is just so high that I can't imagine anyone new bothering to get into the space. VIA has been in the market forever as the third seller of x86, and despite the theoretical benefits of entering into the datacenter CPU market, they've never made that leap (though I don't know enough about their history to know if they tried).

I'm hoping that ARM becoming competitive in the client CPU space ends up getting the cross-compile overhead of enough drivers/kernel stuff sorted out that we can start to see some more diversity in the CPU market overall. I'm excited about RISC-V, especially now that they have shipping hardware you can play with today [2]. The Mill CPU sounds super cool in a bunch of ways, but the architecture has made so many decisions that I'm unsure will play out in practice that I'm holding my excitement until I see real silicon somewhere [3].

[0] https://arstechnica.com/information-technology/2018/07/china...

[1] https://www.engadget.com/2018/08/16/arm-says-chips-will-outp...

[2] https://www.sifive.com/products/hifive-unleashed/

[3] https://en.wikipedia.org/wiki/Mill_architecture


> despite the theoretical benefits of entering into the datacenter CPU market, they've never made that leap (though I don't know enough about their history to know if they tried).

They haven't, they instead entered the niche of low-cost kiosk hardware. Intel and AMD completely abandoned it due to lower profit margins, but plenty of raw sales numbers to help Via float by.


My own speculation on this is that it's a complicated legal dance to allow a native to and manufactured in China "trusted" processor that is tolerable for use in higher security government (Chinese) computers and systems.

They might have also baked in slight tweaks or customized whatever back-doors could be included if such things exist...

It's better to think of it as a Chinese subsidiary in a franchise system.


>Before Zen, we all kind of assumed they were so far ahead that AMD were more likely to be out of business before they would ever be a credible threat again.

It depends whereabouts on the timeline. When AMD hired Lisa Su and Jim Keller in 2012, we all thought it was too little, too late. Look back at the roadmap Intel was giving at the time; I used to joke that Tick Tock was like the sound of AMD's death clock. In 2012 we were looking at 10nm in 2016, 7nm in 2018, and 5nm in 2020. We had just had Sandy Bridge, but that was the last big IPC improvement we have had.

Fast forward to 2018/2019: no 10nm, and I would have been happy if they were selling me a 14nm++++++ quad-core Sandy Bridge. Broadwell and Skylake bring nothing much substantial. Intel were supposed to break into the ARM mobile market with a tour de force, and that didn't happen.

We all assumed Intel had many other tricks up its sleeve, a new uArch or 10nm waiting in the wings for when they were needed. Turns out they have nothing. Why did they buy McAfee? (Which has been sold off already.) And Infineon? Nearly eight years after the acquisition they are just about to ship their first mobile baseband made in their own fabs. Eight years! What an achievement! And nearly three years after their acquisition of Altera, which itself had been working with Intel's custom foundry before that, what do they have to show?

During that time, the scale of the smartphone revolution has helped pure-play fabs like TSMC make enough profit to fund R&D rivalling Intel's. And in a few more weeks we will have a TSMC/Apple 7nm node shipping in millions of units per week, in terms of HVM on a leading node putting TSMC ahead of Intel for the first time in history. AMD has been executing well on their roadmap, and Lisa Su did not disappoint. Nothing on those slides was the marketing speak or tricks that Intel used; no overhyped performance improvement, just the promise of incremental progress. She reminds me of Pat Gelsinger from Intel: down to earth, and telling the truth.

Judging from the results though, AMD aren't making enough of a dent in OEM and enterprise. Well, I guess if you are not buying those CPUs with your own money, why would you not buy Intel? The consumer market and the small web hosting market, where the owners are paying, seem to be doing better. I hope Zen 2 will bring enough improvement to change those people's minds: better IPC, better memory controller.

If you loathe Intel after all the lies they have been telling and marketing speaks, you should buy AMD.

If you love Intel still after all, you should still buy AMD, teach them a painful lesson to wake them up.


Don't forget the Puma series of broadband modem chipsets bought from Texas Instruments in 2010. All defective; three generations on and the hardware is still not fixed, and just this month Intel released some half-assed software patches.

https://www.theregister.co.uk/2017/08/09/intel_puma_modem_wo...


Broadwell and Skylake bring nothing much substantial.

Ironically, Broadwell's 128MB L4 cache did bring a substantial performance boost to a whole range of real-world applications, but it seems it's so expensive to manufacture that they've subsequently dropped the feature except for Apple's iMacs and expensive laptops.


Haswell massively improved the branch predictor, which gave a significant IPC boost to many real-world workloads (especially emulation and interpreters).

How much of that boost remains after patches to Spectre and meltdown?

I have an inexpensive laptop with i3-6157u, with 64MB eDRAM. And yes, performance boost is substantial, for e.g. C++ compiler.

> If you loathe Intel after all the lies they have been telling and marketing speaks, you should buy AMD.

> If you love Intel still after all, you should still buy AMD, teach them a painful lesson to wake them up.

But how do I choose which AMD CPU I need? Back in my youth the P4 and Athlon were easy to compare (freq., IPS and a modifier because AMD), but now I can't even tell the differences between any i3/5/7, and when I look at AMD names it's just as confusing but with a different lingo. I feel the same regarding GPUs, so maybe I am too old for this now.


On the AMD side, there's only 2 generations of Ryzen.

Ryzen 1 - slow, medium, fast, elite

Ryzen 2 - slow, medium, fast, elite

pick the one you need based on pricing/discounts if any. It's not that hard really.


Thanks, but... bear with me: https://www.tomshardware.fr/articles/comparatif-cpu-amd-inte... what are those Ryzen 3/5/7/TR?

13xx, 15xx, 17xx, ThreadRipper.

For gen 2, that'd be:

23xx, 25xx, 27xx, ThreadRipper.

I think they picked the names/numbers to show some kind of equivalence with i3/5/7, but that's not quite it.


Don't forget the 1700X and 1800X.

ThreadRipper is interesting. It requires a different socket than the other desktop processors and is targeted more at workstation class machines.

In servers there are Epyc and Epyc 2.


ThreadRipper seems more like a Xeon competitor to me. Something for the server room crowd.

Epyc is the server Xeon competitor, Threadripper is a workstation cpu.

That'd be EPYC, their server offering.

Threadripper is their workstation CPU.


That's exactly it

slow (3), medium (5), fast (7), elite (Threadripper)

If you don't want to spend more than USD$800 on the CPU alone, don't look at the elite/threadripper line.

What is your budget? That is the first question you need to answer. Instead of trying to understand the entire line of chips, look at how much you have to spend and then find the fastest one within that budget.

It's useless to try to think about the entire line of CPUs when you will only buy one chip (unless you are representing Dell and need to buy thousands of chips).


Compare the benchmarks for those processors?

Ah, but now we're not allowed to benchmark Intel anymore, so nobody can prove AMD is faster now.

AMD was faster depending on the workload. With these patches, I think AMD being faster is just a given regardless of what you're doing.

> but now I can't even tell the differences between any i5/3/7 and when I look at AMD names it's as confusing but with a different lingo.

The way I see it, I can look at AMD's core count, threads and frequency, as they are clearly labelled, and that is it. On Intel's side you have features turned on and off for different i3/5/7/9 models, AVX speed differences, etc.; I don't even want to bother looking it up.


> On Intel's side you have features turned on and off for different i3/5/7/9 models, AVX speed differences, etc.; I don't even want to bother looking it up.

Seriously, Intel has always been way too confusing with their processor lineup. AMD has always been straightforward: leave in the kitchen sink on nearly every CPU and performance scales with price. Not linearly of course, but it's much simpler to choose an AMD CPU.


I don’t get it. You just look at popular apps&games benchmarks and decide your hw stack. You don’t need to synth-bench AVX differences unless you’re writing bleeding edge specialized software that depends on it.

I write all kinds of different types of software. The feature matrix of Intel CPUs has always been remarkably inconsistent, like you get AVX, but don't get virtualization extensions, or vice versa, etc.

Basically, if you don't want the headache you just buy one of the highest end/most expensive CPUs on offer and you're probably fine. With AMD the feature set is pretty consistent, so you have considerably more choice to find a good price point.


I just tried to recycle my old i7-4770k from my gaming CPU to a home server. Only guess what? The 4770k doesn't have VT-d. The cheaper 4770 does have VT-d, so why did the more expensive, "upgraded" 4770k not have VT-d? Because Intel's product line is a disaster.

We're not talking minor performance differences in features, we're talking features randomly added and removed for no logical reason from the same generation & tier of CPU model.


Perfect example of what I was talking about. Their lineup has always seemed customized to maximize the need to buy more CPUs.

"Gamers need these features, but gamers often also setup servers. Let's remove server features from gaming CPUs so they can't reuse them when they upgrade, and so they have to buy new server CPUs!"


Intel no longer just has i3/5/7-- celeron, pentium, i3/5/7/9 and plus variants. Within those are dozens of variants differentiated on things like PCIe bandwidth, SMT, instruction sets, accessible RAM, and so on.

In comparison, AMD's offerings are surprisingly easy. There's a handful of SKUs differentiated on core count and frequency. Generally they all have the same PCIe lanes, RAM access, SMT, instruction sets, and so on.


The right answer to this is to look into benchmarks. It now works again to compare Intel and AMD clocks, but only of the current generation, and then there is core count and motherboard prices to consider, so on.

A project of mine is a hardware recommender, it also includes a meta-benchmark. I collect published benchmarks and build a globally sorted order of processors out of it. https://www.pc-kombo.com/benchmark/games/cpu for games, https://www.pc-kombo.com/benchmark/apps/cpu for application workloads (that one still misses a bit of work, the gaming benchmark is better). Legacy processors are greyed out, so this might be a good starting point for you. There is also a benchmark for gpus.

For most people this processor choice is also very easy, it is "Get a Ryzen 5 2600 or an Intel Core i5-8400."

Feel free to ask if you want some custom recommendations, email is in profile :)


Pretty cool project. Thanks :).

Can you add the ability to sort based on price/perf?

Also, the existing bar graph is unclear to me. What does 10/10 mean?


10 is just the fastest. Because it is a meta-benchmark, this rating is not necessarily relative performance, it is based on the position in the ordering. The achieved average FPS is just a factor in that, used to make the distance bigger to indicate performance jumps.

Example, fictional values: The 8700K is the fastest, because it was most often seen as the benchmark leader. It gets a 10. The 8600K has almost the same FPS, but it was always a bit slower, it gets a 9.9. The i5-8500 comes next, but its average FPS scaled to the 0-10 scale is lower, it gets a 8.7. Then the i5-8400, always seen as slower than the 8500 in benchmarks, would at most be able to get a 8.6, no matter what the average FPS says (with enough benchmarks average FPS become an almost meaningless metric, it's the position in the benchmark that counts).

That's why it is not possible to calculate price/performance with this. I could only highlight good deals, processors that have a high position despite being cheaper than the processors below. Which is of course already baked into the logic of the recommender.
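A rough sketch of that positional idea (a toy illustration, not the actual pc-kombo implementation; the CPU names and orderings below are made up): each published benchmark contributes an ordering, and a CPU's rank comes from where it tends to place rather than from its raw FPS.

    #include <algorithm>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Each published benchmark is an ordering from fastest to slowest (made-up data).
        std::vector<std::vector<std::string>> benchmarks = {
            {"8700K", "8600K", "i5-8500", "i5-8400"},
            {"8700K", "8600K", "i5-8400", "i5-8500"},
            {"8600K", "8700K", "i5-8500", "i5-8400"},
        };

        // Sum each CPU's position across benchmarks (0 = fastest in that benchmark).
        std::map<std::string, double> position_sum;
        std::map<std::string, int> appearances;
        for (const auto& ordering : benchmarks) {
            for (std::size_t pos = 0; pos < ordering.size(); ++pos) {
                position_sum[ordering[pos]] += static_cast<double>(pos);
                appearances[ordering[pos]] += 1;
            }
        }

        // Rank by average position: lower means it placed higher more often.
        std::vector<std::pair<std::string, double>> ranking;
        for (const auto& entry : position_sum)
            ranking.push_back({entry.first, entry.second / appearances[entry.first]});
        std::sort(ranking.begin(), ranking.end(),
                  [](const auto& a, const auto& b) { return a.second < b.second; });

        for (const auto& r : ranking)
            std::printf("%-8s average position %.2f\n", r.first.c_str(), r.second);
    }

The average FPS would then only be used to stretch the gaps between neighbours, as described above.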


Thought a bit more about this. What I can do is filter out all processors/gpus that are slower than cheaper processors. Proof of concept implementation: https://www.pc-kombo.com/benchmark/games/cpu?optimize=true. Those are essentially the best price/performance choices, with some restrictions as explained in the other comment.

My next CPU in a few years will probably still be Intel. I need the fastest single-core performance in the world, and Intel can give me 5+ GHz and the best IPC. I want an honest multi-core CPU, not NUMA, and Intel will give me 8 honest cores. Meltdown fixes should be in hardware by then, so performance won't be affected, and if it is, I'll disable those fixes; I don't need them anyway. But the CPU after that might very well be AMD.

> Clearly, they don't have an answer to AMD at all. If this is true, their shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

But they have lots to show for that money!

Brand change implementations such as "Gold" and "Platinum", which gouge the customers more than ever before.


Intel are still ahead in low power mobile (laptop) CPU race, and the clear leader in the laptop market. Who knows for how long, though?

>Intel are still ahead in low power mobile CPU race,

I have never met a person with a smartphone that uses an Intel chip. They probably exist, but I know none.

Apple aren't using them. Samsung aren't.. HTC nope.. Google pixel nope...

Intel basically sold off/scuttled their mobile division right before the iphone took off.


I think “mobile” was supposed to refer to laptops, not phones or tablets.

I meant "mobile" as in laptop CPUs, I've edited my post to clarify.

My apologies, you're right about laptops.

I think it's referring to laptops. That said I had the ASUS Zenfone 2 and the performance seemed alright, but certain apps didn't work because of the architecture difference, most notably Pokemon Go for the whole time it was popular (although it's probably for the better that I missed that whole craze).

oh yeah i forgot laptops.

I had a Zenfone 2 for a while too, performance and battery life were both pretty good. Thing was super unreliable though, not sure for what reason.

FWIW, I owned an ASUS Zen Phone 2 which contained an Atom x86 processor. It worked pretty well. I sold it to a friend of a friend who I think still uses it today. That said, that processor line was discontinued.

New Snapdragon 845-based Chromebook is on the way.

> It comes as Microsoft continues its work with Qualcomm to optimize Windows for devices powered by Qualcomm's Snapdragon chips, including the forthcoming Snapdragon 850, which Samsung used for its first Arm-based Windows 10 laptop. So it appears there is some momentum behind the concept.

https://www.zdnet.com/article/arm-on-windows-10-chromebooks-...


I think ARM could be a viable contender in the low-power laptop segment, kinda maybe possibly.

However, I used to own a Tegra K1-based Chromebook, and that thing was sl-o-o-o-o-o-o-w, and it only got worse with successive updates. I'm not really optimistic when it comes to performance, absent highly-optimized apps. The state of Firefox and Chrome doesn't really fill me with confidence.


The state of Firefox and Chrome shows more than anything else that many cores are a good thing and, especially, that RAM size is king. Modern smartphones with 6 and 8 GB of RAM make a difference.

I think Apple laptop CPUs will be like USB-C. First it's 'what's the point, nothing is compatible'; a year later it's 'everyone is doing it and it's actually pretty cool to attach everything including power with a single cheap dongle'.

I wasn't aware we were there with USB-C. I still need a dongle every time I use anything. But more importantly, I was in the camp of "I barely connect anything to my computer other than power, and MagSafe worked so well I forgot it was ever an issue, but then "USB-C charging" regressed things to the point where I thought I was charging but nope, turns out the connector was slightly out and my battery is at 5%".

I think it would take far longer than a year for a critical mass of OS X software to become ARM-compatible, but it could be viable in the long run.

I'd like for ARM laptops to become more popular though, as a Debian user nearly all the software I use is already there.


Apple already got a huge number of developers to transition their apps to 64-bit ARM with the iPhone, I feel like if they made it simple enough they could get a critical mass on the desktop just as quickly.

Not at all, they bungled low power bigly with the atom trainwreck and are so far behind ARM now that they have given up competing.

I don't see very many competitive AMD- or ARM-based laptops around, though.

After reading a few articles, here's what I got:

It seems that Intel couldn't jump into EUV manufacturing when they were in full domination because it was too expensive and new, so they started improving multipatterning to improve resolution, which proved too hard to ship (hence the delays). Meanwhile smaller players went their own way until recently, and now EUV is accessible, so they can jump in swiftly while Intel is still caught inside its intermediate strategy, lazy market behavior and unforeseen failures. Intel also has EUV planned, but not until the next generation. Note that even at a larger pitch their process is nearly competitive with smaller ones today, but it sounds terrible.


They did hire Keller, no? I wonder if AMD has an answer to Intel in the coming years, though. Not only on CPUs, but GPUs as well. Especially GPUs. They missed the boat on AI, and now Nvidia is pushing RTRT left and right. Those are seemingly two areas they still don't have an answer to. It's a big battlefront, and Intel is only a step or two away from possibly bringing an answer to AMD. Not to mention the whole laptop space dominated by Intel, and mobile by ARM.

> Not to mention whole laptop domination by intel, and mobile by ARM

Traditionally yes, Intel has absolutely dominated the laptop market; however, I have been seeing a lot more laptops lately with a Ryzen processor and Vega graphics.

AMD is making gains in a very lucrative market.


Here's a Stratechery article, nothing more to add. https://stratechery.com/2018/intel-and-the-danger-of-integra...

Most of the emperors out there have no clothes.

I just want to throw it out there that it might be impossible for AMD to actually go out of business. They're Intel's only major x86 competitor, IIRC their x86 license is non-transferrable, and no way Intel would be permitted in the USA to have an actual monopoly on x86 production.

I kick myself for not buying AMD at $2 (or buying a LOT more at $5).


> I just want to throw it out there that it might be impossible for AMD to actually go out of business.

Even if you buy this, (I don't), there's no point for them to stay in business with absolutely non-competitive products. The remarkable thing is that thanks to Zen that did not happen and Intel actually feels some heat for the first time in years.


> They're Intel's only major x86 competitor, IIRC their x86 license is non-transferrable, and no way Intel would be permitted in the USA to have an actual monopoly on x86 production.

I just looked this up, and it seems to boil down to the patent cross-licensing agreement between Intel, which developed the x86 architecture, and AMD, which developed the 64-bit instruction set. I don't think there's a unilateral "non-transferable license" per se — and they're free to enter into a new agreement if either party does get acquired.

This Reddit thread seems pretty good at explaining it in much more detail: https://www.reddit.com/r/hardware/comments/3b0ytk/discussion...


> shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

This might be the answer. No competition and a good cash flow is a comfortable position. You defend this model with ads, policy, etc. and technical innovation can languish. I am not saying this is necessarily the case, but it is possible that Intel just got comfortable, slow and fat. Having a scrappy, smart competitor can be a good thing.


Pretty stupid that they didn't continue their Itanium architecture. It does have In-Order-Branch-Prediction and we are at a point again where a significant number of users don't really care what kind of CPU they are using.

ARM and RISC-V have become a serious threat and are on the way to getting standardized ecosystems...


> The security fixes are known to significantly slow down Intel processors, which won’t just disappoint customers and reduce the public regard of Intel, it will probably lead to lawsuits (if it hasn’t already). Suddenly having processors that are perhaps 5% to 10% slower, if they are to be secure, is a significant damage to many companies that run server farms or provide cloud services.

Maybe I'm missing something here, but I was under the impression that the Spectre/Meltdown mitigations have a big performance penalty, but the more recent L1TF mitigations should have little or no impact, and that the new license only showed up recently on the new L1TF mitigation patches.

Is the L1TF mitigation actually a lot worse than I thought, or does this license apply to the earlier Spectre/Meltdown patches, or is Bruce Perens just being sloppy and conflating the two?

Either way, I agree with him that draconian license terms shouldn't be attached to bug fixes.


> Is the L1TF mitigation actually a lot worse than I thought, or does this license apply to the earlier Spectre/Meltdown patches, or is Bruce Perens just being sloppy and conflating the two?

He isn't being sloppy. According to the Debian package maintainer[1], the new license only applies to the new patches (ones after 2018-08-07) which were released long after the Spectre/Meltdown ones -- because the license was only changed in the 20180807 microcode update (and because Debian didn't block the previous Spectre/Meltdown releases over license concerns).

In theory, nobody can actually tell you how bad the L1TF mitigation is because of the new license terms (a comparison before-and-after L1TF mitigation would be providing you with comparison test results).

[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906158#14


Simple solution in the comments: [ i7-8750H ] User1: run the benchmark on an unpatched OS, post in the thread. User2: run the benchmark on a patched OS, post in the thread.

It's not a comparison, only performance graphs from two users' computers; readers just remember which user is patched.


You would have to demonstrate that the config from User1 is identical to the configuration of User2, which means you would have to isolate the patch down to the microcode update.

Even if everyone plays by the rules, can't someone in another jurisdiction publish comparison results? How do they expect this to work?

I'm curious to know how valid this is in EU countries. Anyone know?

Against consumers, almost certainly unenforceable under the Unfair Terms in Consumer Contracts Directive alone, never mind that some countries have even stricter laws regarding unfair contract terms.

[1] https://en.wikipedia.org/wiki/Unfair_Terms_in_Consumer_Contr...


I'm done with Intel

Time to disable automatic Windows Updates.

Windows automatic updates are one of the easier ways to get the new microcode without agreeing to the new license terms. But there are plenty of other reasons to disable Windows Update (not that Windows will respect your decision).

How do you keep the system secure without Windows Update enabled?

All of the Windows machines that I use for work are firewalled and have no internet access, primarily so that they won't interrupt my work with fucking updates, but security is a nice side effect. My personal Windows machine has updates enabled because I don't mind if it reboots when my back is turned, and my Linux and OS X machines can be trusted to ask before rebooting to apply updates.

Microsoft has provided no real alternatives for those of us who need to be able to keep a Windows machine running overnight.


2018 has been an abysmal year for Intel so far. Multiple serious vulnerabilities that affect multiple areas of their products: Spectre, Meltdown, Management Engine, etc. The only thing they can control is how they respond, and they've done a terrible job of that too. At this rate I'm expecting a consumer protection agency to eventually get involved.

They also got rid of their CEO this year.

This borders on unbelievable...

I checked with Intel directly just to make sure this is true: https://downloadcenter.intel.com/download/28039/Linux-Proces... The file https://downloadmirror.intel.com/28039/eng/microcode-2018080... contains the license file with that laughable clause included.

Now hand me the popcorn...


I would like to see the New York Times co-publish benchmarks with Phoronix (or whoever has the relevant expertise and credibility), with a box detailing the ridiculous license and an editorial suggesting Intel investors may have cause for concern about managerial competence.

Someone with a publishing arm at their disposal is surely shorting Intel as we speak, ready to publish just such a story?

> Since some similar exploits have been discovered for AMD and ARM CPUs, the answer is probably “no”. But certainly customers are upset.

What's there to be upset about? Don't update if you are upset. Choose between perf and security. What are the options, anyway? You can be upset that things are the way they are, but you can't blame Intel/AMD/ARM, etc. You should have been upset if these vulnerabilities were known and not fixed, though.


People are upset because Intel is not allowing people to run benchmarks on their CPUs (the language is so vague that you could argue that running non-CPU benchmarks, or benchmarks for other software unrelated to Intel would violate this license). So you can't really make a "choice between performance/security", because nobody is allowed to publish data that would let you make an informed choice.

Quoted text talks about being upset because of perf, not because of Intel not allowing benchmarks.

Not allowing benchmarks - yeah, I agree, that is a reason to be disappointed or upset.

But people throw words around: lawsuits, upset, etc.


All the recent microcode patches to fix CPU security flaws have caused performance degradation, so it's fair to mention that people have been upset about this in the past (anecdotally, I know first-hand that people have disabled the patches because it's started to break software that has significant timeouts -- and they were obviously not happy that this was necessary).

It's also (somewhat) fair to assume that this patch would also affect performance until proven otherwise, and Intel changing their license to disallow sharing comparative performance tests doesn't give me much faith that I'm wrong in that assumption.


I expect them to fix the security issues, and I expect them to allow me to both know and publish the performance effect of the fix. If the security issues are not fixed, I'm upset. If they are fixed but I'm not allowed to talk about the effects, then I'm more upset.

Here is one thing we can do about it: make public service announcement to our users that we no longer recommend Intel CPUs because of security holes, censorship and crippled performance.

I am going to do that today. While we only have several thousand users, they do CPU-intensive work, buy a lot of new CPUs and rent a lot of servers. My small contribution will likely amount to low-to-mid 6 figures out of Intel's pocket in the coming 2-3 years.

Please consider such announcement if you could do some damage as well.


That's one of the best ways to actually make a change.

This amounts to little more than making a statement at the expense of your users.

It would've carried that much more weight if it were _your_ low-mid 6 figures that were redirected away from Intel.


It's Intel's policy and silliness that is at the expense of this person's users. Making those users aware of it is absolutely defensible on the grounds of it being the right thing to do. Intel is relying on customers' ignorance; in fact, they're trying to contractually ensure it! The Streisand effect is exactly what they deserve.

Intel isn't really far enough ahead to say that users are going to be negatively affected.

Intel is actually behind in performance/price ratio by a wide margin, at least on workloads which can make use of many cores. The margin is likely getting bigger after the latest patches.

It will likely benefit their users. AMD's EPYC processors are cheaper for the same performance as intel's, as well as allowing more memory per processor.

As they allow more cores per socket this can also often massively reduce per-socket licensing cost, if you have the misfortune of using software which requires that.


Well, AMD currently has better CPUs for high-performance computing, so OP's users would benefit from this public service announcement, if anything.

I thought it was the opposite of that? That is, AMD chips are currently great for most workloads but Intel still has a definite edge in working with big vectors of floating point numbers of the sort you usually encounter in HPC. Mostly because their core's vector units are 512 instead of 256 bits wide.

Skylake-X and certain Xeons have avx-512 (which includes two 512 bit fma units). The rest only have 256 bit wide vectors, like Ryzen. But they still have the advantage of two 256 bit fma units, while Ryzen instead relies on two 128 bit fma units, meaning the fma instructions critical to matrix operations are faster on Intel.

I think the idea for HPC though is that you want to offload these highly vectorizable operations to a GPU. Or maybe you're doing a lot of Monte Carlo that it is hard to vectorize.

I really do like avx-512 though. If you're writing a lot of your own code, and that code involves many small-scale operations and has control flow (like in many Monte Carlo simulations), it's a lot easier than mucking with a GPU. If you're using gcc though, be sure to add `-mprefer-vector-width=512`, otherwise it will default to 256 bit vectors (clang and Julia use 512 by default).
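To make the vector-width point concrete, here's a minimal sketch (my own toy, not anyone's production code) of the kind of FMA-bound loop being discussed; the compile line is illustrative and shows where `-mprefer-vector-width=512` fits:

    // Toy axpy-style kernel: y[i] = a*x[i] + y[i], which compilers turn into FMAs.
    // Example build (illustrative):
    //   g++ -O3 -march=skylake-avx512 -mprefer-vector-width=512 axpy.cpp
    // On Ryzen, -march=znver1 targets the 2x128-bit FMA units instead.
    #include <cstdio>
    #include <vector>

    void axpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];  // one fused multiply-add per element
    }

    int main() {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
        axpy(3.0f, x, y);
        std::printf("%f\n", y[0]);  // expect 5.000000
    }

Inspecting the generated assembly (e.g. with -S) shows whether you got zmm (512-bit) or ymm (256-bit) registers for the loop body.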


Honest question: is AMD any better? Do they somehow manage to avoid Spectre / Meltdown without a slowdown?

AMD is better; they are still affected by side channel attacks, but they did not skip on security checks and are not affected by speculative execution AFAIK.

https://www.amd.com/en/corporate/security-updates


> ...and are not affected by speculative execution

AMD's chips definitely speculatively execute instructions. It's a common performance trick.

AMD's chips also definitely throw an exception at retirement (of course) for instructions that attempt to load a privileged address, just like Intel's chips do.

The difference is that when AMD's chips see a load instruction, the load isn't executed until it knows that the address isn't privileged. Intel's chips do execute the load (but then throw away the result when it realises the address was privileged).

The speculative part is for instructions that depend on the result of the load.


>The difference is that when AMD's chips see a load instruction, the load isn't executed until it knows that the address isn't privileged. Intel's chips do execute the load (but then throw away the result when it realises the address was privileged)

Thank you for the detailed explanation. Can't we conclude that AMD's engineering is more defensive than Intel's? That is what I concluded.


Based on this datapoint? I don't think so. They're vulnerable to other side-channel attacks.

Both AMD and Intel hire really smart people, but this stuff is really, really hard.


The general reaction from CPU designers I interact with online after this was that Intel engineers should have known the optimizations that enabled Meltdown were dangerous, but that Spectre was totally unforeseeable.

Yes. Their arch is just different; they were not affected by Meltdown, and the chances of a Spectre exploit actually working are very small. I don't know why that is the case, but basically AMD is 99% free from this story.

They aren't. Spectre works fine on AMD. They avoided Meltdown only because their cores are less optimised. Meltdown isn't Intel specific - Apple cores were also affected.

It's clear that AMD weren't doing anything special w.r.t. side channel attacks. They were just further behind in the optimisation race and as a consequence, were less hit.


Can you really call "fast but wrong/insecure" optimized?

Yes you can, if it makes your benchmarks look better. That's why Intel is trying to suppress the benchmarks.

It doesn't calculate anything wrong, so sure.

For this particular optimization, it looks better not to have it.

But in general, knowing how something is flawed lets you mitigate the flaws. We use floating point despite it being mathematically wrong, because it's fast and we can mitigate the wrongness. I could imagine a chip where speculation could be toggled between different modes, if there was enough demand.

I will say that "just further behind" is probably wrong. AMD has a lot of optimizations in their chips. They have safer ones, which might be luck or might be engineering talent, but it's not a mere artifact of being behind the curve.


That is a horrifically biased & wrong summary.

AMD enforce privilege checks at access time rather than at retirement time. Whether this is due to "lack of optimization" or "good security engineering" nobody knows. But your claim that this was purely the result of "less optimised [sic]" cores is nonsense. You have zero evidence whatsoever that that was the case vs. AMD just having superior engineering on this particular aspect and not adding bugs to their architecture.

All we know is that Intel & ARM CPUs have an entire category of security bugs that AMD's don't, and that upon close analysis AMD's CPUs are operating on a more secure foundation.


Minor tidbit, but "optimised" is correct British spelling, so the disdainful "[sic]" is not needed.

The summary is correct and optimised is the correct spelling where I'm from.

If AMD were making conscious efforts to avoid side channel attacks they'd already have various features to show for it, like IBRS. But AMD's chip documentation says nothing about side channel attacks. Their manuals do not discuss Spectre attacks. And there is no evidence anywhere that they knew about Meltdown-type attacks and chose to avoid them.

I get that suddenly hating on Intel is cool and popular, but the facts remain. There is no reason to believe AMD has any advantage here.


For meltdown AMD followed what the x86 spec said. It's flabbergasting you are trying to contort this into making AMD look bad. Your "summary" was basically that AMD was too incompetent to have multiple security issues.

The facts are that AMD does not have a significant per-core IPC deficit vs. Intel (as supported by every Ryzen review at this point) and that AMD has, on multiple occasions now, had objectively superior security.

You're trying to twist this into a negative against AMD. It's nonsense FUD. Intel fucked up, AMD didn't. Why are you trying to run PR damage control for Intel?


Please show me where in the x86 spec side channel attacks are discussed at all? They aren't.

I am really unsure where you're getting this from. Your reference to the spec makes me wonder if you really understand what Meltdown and Spectre are. They aren't bugs in the chips even though some such issues may be fixable with chip changes, because no CPU has ever claimed to be resistant to side channel attacks of any form. Meltdown doesn't work by exploiting spec violations or actual failures of any built in security logic, which is why - like Spectre - it surfaces in Apple chips too. Like Spectre it's a side channel attack.

I'm not trying to twist this into a negative for AMD: it's the other way around, you are trying to twist this into a positive for them, although no CPU company has done any work on mitigation of micro-architectural side channel attacks.

I'm simply trying to ensure readers of this discussion understand what's truly happening and do not draw erroneous conclusions about AMDs competence or understanding of side channel attacks. What you're attempting to do here is read far more into a lucky escape than is really warranted.


Meltdown was enabled by a clear-cut violation of the memory access restrictions of x86. Intel simply did the permission check at the wrong point in the pipeline. It's not anything more obscure or clever than that, and it wasn't even an optimization, as the end-to-end latency remains the same. It still did all the work, it just did it in the wrong order: the permission check was done after the read access anyway.

Memory was accessed that the spec says was not accessible. This has nothing to do with side-channels. The side channel part of the attack was how the spec violation was exploited.

For Meltdown specifically, Intel fucked up, AMD didn't. This is not at all vague. Whether this was due to luck is irrelevant; it was clearly NOT due to incompetence as you were pushing. You pushed a narrative that AMD was too incompetent ("missing optimization") to have a severe security bug. That has nothing to do with reality whatsoever.

Side note, side channel attacks are not exactly obscure. Guaranteed AMD & Intel have security experts who were well aware of how side channel attacks work, long before Meltdown and Spectre came to light. They have been around for ages. Practical exploits of L1/L2 cache via mechanisms like Prime+Probe date back at least as far as 2005: https://eprint.iacr.org/2005/271.pdf
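
As a toy illustration (not a real exploit - every name below is invented, and a real attack recovers the value by timing cache lines rather than inspecting a Python set), this is roughly why doing the permission check at retirement instead of at access time leaks data:

  # Toy model of the Meltdown-style ordering bug. The "cache" is just a set;
  # a real attack measures access latency per probe line instead.
  SECRET = 42                      # a "kernel" byte unprivileged code must never read
  cache_lines_touched = set()      # stands in for which cache lines got warmed

  def speculative_load(addr_is_privileged, value, probe_array_base=0):
      # 1. The read happens immediately (permission not checked yet).
      loaded = value
      # 2. A dependent access warms a cache line chosen by the loaded value.
      cache_lines_touched.add(probe_array_base + loaded)
      # 3. Only now, at "retirement", is the permission checked; the fault
      #    squashes the architectural result, but the cache side effect survives.
      if addr_is_privileged:
          raise PermissionError("fault raised at retirement - too late")

  try:
      speculative_load(addr_is_privileged=True, value=SECRET)
  except PermissionError:
      pass

  recovered = next(iter(cache_lines_touched))
  print("recovered byte:", recovered)   # prints 42 despite the fault

Checking the permission before step 2 (access-time checking) would make step 2 unreachable for privileged addresses, which is the behavioural difference being discussed here.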


AMD are not employing these anti-competitive practices (at least at the moment).

I think GP was asking not about the anticompetitive practices but about the actual exploit that Intel is responding to. But you make a good point.

Intel has two faults: one, they made a significant mistake in their chip design, and two, they responded to criticism poorly. AMD did not make the mistake and has responded well to criticism.


I am in the market for a new rig- looks like I'll be going with AMD.

Same. I _was_ looking at getting an HP Spectre x360 with the i7 8550U. Now I'm going to look explicitly at laptops with Ryzen chips. I think the Spectre might have a Ryzen model, but I don't know.

AMD is severely behind in the laptop space.

In the Desktop space, I can definitely recommend AMD. But Laptops are totally Intel right now.

There are AMD Laptops, but they are hard to recommend. Most are low-end offerings at best (laptop manufacturers don't put the high-end stuff with AMD). As long as high-end HP / Dell / Lenovo / Asus / Acer / Apple / Microsoft Surface are all Intel-based, you're basically forced to use an Intel Laptop.

Desktops... you can build your own. And even if you couldn't, there are a ton of custom computer builders out there making great AMD Desktop rigs.

-------------

With that being said, AMD laptops are competitive in the $500 to $750 space. Low-end to mid-tier laptops... the ones with poor mouse controls, driver issues, bad keyboards, low-end screens and such.

But hey, they're cheap.

It's not really AMD's fault. But in any case, it's hard for me to find a good AMD laptop. So... that's just how it is.


the new HP Envy x360 13z doesn't look bad.

Kind of waiting to see what the Lenovo ThinkPad A285 looks like, as well.


The HP Envy x360 has a noticeably worse screen though. It's definitely not the same class or caliber as the Spectre x360.

That's what I'm talking about: most laptop manufacturers offer a "premium" Intel laptop. But then they have a lower-quality AMD one on the side.

Nothing AMD has done wrong per se, just laptop manufacturers refuse to sell AMD on the high-end.


Check out the Huawei Matebook D with Ryzen if you want an amazingly built laptop for a great price. Might change your mind, it's the best 600 dollar laptop out there.

I actually thought about that one too. If they had a 16GB version, I'd totally go for it. That's entirely the reason I'm leaning towards an HP Envy x360 13z instead.

Is Intel not still leading in per-core performance? That's what I'll decide my next CPU on, since I care more about use cases like emulation/games (many games don't use multiple threads, or don't use them effectively).

The problem is now it will depend on which patch(es) your CPU may have. Many online benchmarks and reviews of Intel CPUs vs AMD may be old, or run without patches applied / security modes enabled. If I order an Intel CPU now, I am not sure what microcode version I'll be getting to be honest. As a result, those benchmarks that show Intel edge out AMD a bit on some games may actually be inverted depending on these factors.

Going with AMD at this point is simpler, better for security, and if you care about this sort of thing -- rewards more honest and less consumer-predatory behavior. I've always gone with Intel my whole life, but given these many incidents with Intel, combined with their really poor public responses to it, I will now be switching to an AMD user for all future PC builds.


They're neck and neck at this point, within a few percentage points. Zen 2 should close that gap to negligible hopefully.

Good thing we still have a somewhat anonymous internet. I'd be surprised if there wasn't a benchmark or two on the HN front page by tomorrow.

Might even come from a media organization if their lawyers deem this sufficiently unlikely to be enforceable in their country.


It’s as though Intel is being sabotaged from inside. Delays with 10nm, suspicious exit of Krzanich, harebrained unenforceable licensing schemes. My next machine will use Threadripper and this kind of shit just seals the deal.

It's a clear invite to benchmark.

Amusingly, I don't think the summary in the article or the article title correctly summarizes the legalese. What is forbidden seems to be to "publish or provide" benchmark results. So let's say I boot my machine with the new microcode, run a benchmark, then reboot it with the old microcode and upload the results to my website (hosted on a machine that doesn't run the new microcode). I don't see how this violates the terms of the license.

In general, these terms don't seem to be very carefully phrased. Point (i), and independently point (iii), apparently prevent you from running the software altogether.


Can I publish benchmarks of 7zip, The Witcher 3 and OpenGPG from yesterday and tomorrow (after the Intel patch, which happened to be applied coincidentally)?

> (v) publish or provide any Software benchmark or comparison test results.

This seems to exclude any benchmark that may be affected by the Software's performance... Which means any CPU benchmark.

There is some play in the wording... And I'm taking the least favourable interpretation. But yeah... Seems like Intel have said no benchmarking, full stop.


  > Seems like Intel have said no benchmarking, full stop.
That doesn't mean it's enforceable. Companies say silly things all the time.

Absolutely.

If they tried to uphold that least-favourable interpretation, then I would be reaching for anti-competition laws and other consumer protections, and perhaps even contract law, since they claim that downloading the software (which you have to do just to see the license) is binding.

It seems difficult to have a legal argument that it should be binding at all.


This is why it just sounded stupid to me when I read the headline. The CPU IS the computer. Sure, I won't run Cinebench, but I will want to run my web browser, and there are things called browser benchmarks because I want to know if Chrome is better than Firefox, coincidentally comparing against older versions of the software on older machines.

Will they sue me?


>Will they sue me?

Are you in the US? If the answer is yes, I'm happy to publish the results for you on my blog, where they can't reach me.


Yeah... super fishy. If you don't want to show off your product, something is wrong. If they were releasing speculative execution as a new feature (in some alternate universe), they'd be touting the huge performance benefits and showing off a ton.

A couple of months ago I got an AMD Ryzen for my home server. I am pretty happy with it. Next time I'll probably get an AMD for the desktop as well. But I am talking about 6 years from now, at least.

>Since the microcode is running for every instruction, this seems to be a use restriction on the entire processor

Does Intel's legal team even have a basic understanding of how computers work? In essence, the word 'benchmark' ceases to exist after this microcode update. No more research in any domain. Back to hunting and gathering.


Intel are the experts in their own products. If they don’t have confidence in them, why should we?

Isn't this the license used for OEMs that prevents Debian from uploading the update? https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906158

My thought, based on the HN headline "Intel Publishes Microcode Patches, No Benchmarking or Comparison Allowed" (after having read the article, of course):

Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website, where scientists can anonymously upload research (results, papers, ...) that they are not legally allowed to publish because of various such "research-restricting" laws?

---

UPDATE: Before people ask the obvious question of how the researchers are supposed to get proper credit - my consideration is the following: Each of the researchers signs the paper with their own "public" key of a public-private key pair. This signature is uploaded as part of the paper upload. The "public" key is nevertheless kept secret by the researchers.

When the legal risk is over and the researchers want to disclose their contribution, they simply make their "public" key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.


Sleazy marketers would be all over that. :/

If they show reproducible results, then I don't see a problem. If they don't, well then you know it is either dodgy or not rigorous enough to take seriously.

They'd definitely show reproducible results. e.g. either get their mates to submit matching fake ones, or just submit an extra set themselves after slight tweaking.

That being said, as soon as someone with a clue comes along + tries them out and finds it's bogus... that would lead into potentially weird territory too. eg the dodgy submitters likely attempting to discredit the er... whistleblower(?).

Seems like a re-run of an old story. :/


That's not how you do signing at all. You just use your private key to sign something, publish the public key for verification, and, when you want to reveal yourself, you just sign "I am X" with that key.

OK, thanks for the correction. Much better, indeed. :-)

No problem! It's a very good strategy, it's what someone who wanted to remain anonymous would use.
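
For concreteness, a minimal sketch of that sign-now, reveal-later flow (assuming the Python "cryptography" package; key storage and distribution are glossed over, and the byte strings are placeholders):

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  paper = b"anonymously submitted benchmark results ..."

  # At submission time: sign with a private key that never leaves the author.
  private_key = Ed25519PrivateKey.generate()
  signature = private_key.sign(paper)        # published alongside the paper

  # Later, to claim credit: reveal the public key plus a freshly signed claim.
  public_key = private_key.public_key()
  claim = b"I am the author of the paper carrying signature " + signature.hex().encode()
  claim_signature = private_key.sign(claim)

  # Anyone can verify both signatures against the revealed public key;
  # verify() raises InvalidSignature on mismatch.
  public_key.verify(signature, paper)
  public_key.verify(claim_signature, claim)
  print("author key:", public_key.public_bytes(Encoding.Raw, PublicFormat.Raw).hex())

Note the caveat raised further down this thread: with some commonly used signature schemes a third party may be able to construct a different key pair that also validates the published signature, so the scheme needs care beyond this sketch.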

Or... just have people outside the US do the benchmarking.

These kinds of clauses are likely effectively null, void or unenforceable in any country that has decent consumer protection laws or laws concerning anti-competitive practices.

You also can't just go and write whatever into a document that already has weak legal footing in many places - and especially not after I purchased your defective product. This shit won't hold for a minute in court.


Why would the legal risk ever be over?

> Why would the legal risk ever be over?

I can imagine some scenarios:

- A company is bought by another one which has a different legal policy and voids some legal restrictions even retroactively

- By some other route, the "illegal" knowledge in the paper has become "public knowledge", so company lawyers will have a hard(er) time convincing a judge that the respective paper poses any danger and that the author should therefore be prosecuted. For example: after these microcode patches, people will of course privately benchmark the performance differences. So some years later, the rough order of the performance differences is kind of public knowledge.

- With time, companies have much less legal interest in suing people for disclosure. As long as there is a high commercial interest, companies can be very "sue-happy". On the other hand, each of these lawsuits is a PR risk for the company. If the respective product becomes outdated, the economic advantage of a lawsuit is much smaller than the probable PR loss.

- The researcher now (later on) lives in a different country that has a different legal system where there is much less legal risk. For example, in Germany, what can be included in general business terms is very restricted (in contrast to the USA).

In all of these cases, it can become much less legally dangerous to disclose the real identity of the author. On the other hand, there exists an incentive (academic credit) to do so.


> Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website, where scientists can anonymously upload research (results, papers, ...) that they are not legally allowed to publish because of various such "research-restricting" laws?

I haven't been there since 2008, but why not just WikiLeaks?


Wikileaks is a Russian propaganda machine.

They are only interested in Anti-America leaks, not all leaks.


> Wikileaks is a Russian propaganda machine.

> They are only interested in Anti-America leaks, not all leaks.

Everybody should make their own judgement on the political bias of WikiLeaks (which is, in my personal opinion, a very good reason why monopolies are typically bad; in other words: there is a demand for multiple independent WikiLeaks-like websites), but the statement that WikiLeaks is only interested in Anti-America leaks does not hold in my opinion. See for example the following leak about Russia's mass surveillance apparatus:

> https://techcrunch.com/2017/09/19/wikileaks-releases-documen...

Or let us quote

> https://www.reddit.com/r/WikiLeaks/comments/5mv07m/has_wikil...

"They released Syrian/Russian email exchanges, I don't know if it led to anything extremely controversial. In an interview, Assange said the main reason they don't receive leaks from Russian whistle-blowers is that the whistle-blowers prefer to hand over documents to Russian-speaking leaking organization.

And Wikileaks doesn't have Russian-speakers in their organization. You can tell your friend that if he or she wants Wikileaks to release damaging Russian documents, then he or she should hack the Russian government and give what they find to WL."

Or have a look at

> https://www.reddit.com/r/WikiLeaks/comments/5mv07m/has_wikil...

On

> https://www.reddit.com/r/IAmA/comments/5c8u9l/we_are_the_wik...

you can find a list of various counter-arguments to common criticisms of WikiLeaks.


Edit: Seems the parent comment has now reached net positive votes

It's a really sad thing you're being downvoted, as I made a similar comment about a year ago [1] in defense of WikiLeaks to over 30 upvotes. HN seems to be getting more and more active with their downvotes towards information that doesn't fit their current perspective.

[1] = https://news.ycombinator.com/item?id=13816762


> It's a really sad thing you're being downvoted, as I made a similar comment about a year ago [1] in defense of WikiLeaks to over 30 upvotes. HN seems to be getting more and more active with their downvotes towards information that doesn't fit their current perspective.

> [1] = https://news.ycombinator.com/item?id=13816762

I don't see myself as a defender of WikiLeaks. It is, for example, hard not to admit that the Democratic National Committee email leak and the exchange between Donald Trump Jr. and WikiLeaks (at least to me) has some kind of "smell" [2].

My argument rather is:

a) the position of WikiLeaks (if one exists) is far more confusing and sometimes self-contradictory than it looks on the surface (in this sense, I am somewhat sceptical of both the "WikiLeaks defenders" and the "WikiLeaks prosecutors").

b) do not trust any side as "authoritative". Consider multiple different sources - in particular the ones that do not agree with your personal opinion - and conceive an opinion on the whole story.

[2] https://www.theatlantic.com/politics/archive/2017/11/the-sec...


Completely agree with that.

I just threw you into that category for the purpose of my comment as I believed it was a perceived defense of Wikileaks that you were being downvoted for, rather than something else regarding the content of your comment.


>Wikileaks is a Russian propaganda machine.

>They are only interested in Anti-America leaks, not all leaks.

Any evidence to prove this? If you go to Wikileaks there are leaks from all around the world. I personally am very glad Wikileaks exists, doing the work they do, and still has never been proven incorrect or fraudulent.

By the way, Anti-American leaks are also Pro-American leaks, even if they may not be favorable to your own political beliefs.


Ya, there's a huge niche in the market for pro-American leaks.

Nobody is asking for "pro-American" leaks. WikiLeaks tried to present itself as an unbiased source of leaked news; the moment they refuse to leak negative news about a nation or individual, they are showing bias.

If you go on Wikileaks right now and search "Russia" for 2012-2018, you get mostly news about how western plans are going to fail and how powerful Russia's military is. If you search for Russia in the "Syria Files" section, you get no results. How is that even possible?


  > Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website
Good idea, but I doubt it's necessary here. I'm fairly sure this sort of EULA clause is unenforceable in many jurisdictions. Run the benchmarks in a country where they are legal.

If that's the case, would it be OK to publish these results on US-based websites such as Phoronix, even if the benchmarks are run by another country's citizens in that other country?

If Phoronix never agreed to the license, then I can't see how that wouldn't be legal. And otherwise it's going to be a good year for Europe-based benchmarkers.

"Doesn't this show that it is time for someone to set up some kind of ..."

I don't think this is necessary ...

I think "circumventing" this benchmarking restriction is as simple as having one person purchase an Intel CPU and just drop it on the ground somewhere ... and have another person pick it up off of the street (no purchase, no EULA, no agreement entered into) and decide to benchmark the found item against other items.

"I found a chip on the ground that had these markings and numbers on it and here is how it performed against an AMD model."


I think that one has to apply the microcode patches after startup. For this, you have to obtain the microcode patch file from somewhere. So I am not sure whether this "legal hack" will work.

why can't you drop a computer with the microcode patches already applied?

That's not the way. The tech press isn't powerless here.

The media that have been performing such benchmarks for decades and have thus earned a large and faithful audience can organize and simultaneously publish the relevant benchmarks in the US, complete with an unapologetic disclosure right at the top as to why this is happening. Put Intel into the position of suing every significant member of the US tech media, creating a 1st amendment case, or perhaps trying to single out some member and creating a living martyr to whom we can offer our generous gofundme legal defense contributions over the decade it takes to progress through the courts. Either way Intel creates a PR disaster for itself.

Sack up and call their bluff. There are MILLIONS of people that will stand behind that courage. At some point the shareholders will feel it and this nasty crap will end.


The press can just give Intel a "Don't buy" rating with a comment that Intel has something to hide; we don't know what, but you would be a fool to even consider them when you can't know if they are any good. Technically I can't even show the latest Intel is faster than an old i386 that I'm finally ready to replace, while I can show the AMD is better.

How is the 1st Amendment relevant when private parties deal with each other? (As far as I know, it's not.)

It's contract law and tort law that's relevant. Intel's defective product, Intel's ToS, and maybe fair use. (As the tester could get the patch without an explicit license and try to claim fair use.)


> UPDATE: Before people ask the obvious question of how the researchers are supposed to get proper credit - my consideration is the following: Each of the researchers signs the paper with their own "public" key of a public-private key pair. This signature is uploaded as part of the paper upload. The "public" key is nevertheless kept secret by the researchers.

> When the legal risk is over and the researchers want to disclose their contribution, they simply make their "public" key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.

Commonly-used public key signature systems (RSA, ECDSA) do not provide the security properties necessary for this. Someone else who wants to claim credit for the research could cook up their own key pair that successfully validates the message and signature. This is called a duplicate signature key selection attack (https://www.agwa.name/blog/post/duplicate_signature_key_sele...)


After chmod775's comment (https://news.ycombinator.com/item?id=17825777), I already suspected something into that direction. Thanks for the independent confirmation.

How is this enforceable in law - I mean, surely free speech laws apply here? You can't be compelled by contracts like this not to talk; it's absurd.

It doesn't even seem like a particularly well written clause as it could be interpreted to mean benchmarking the microcode update process not the hardware.


I'm pretty sure they can expect future article titles to be "Massive performance penalty with latest firmware patches" or "Intel CPUs hit so hard by latest patches that Intel bans benchmarks".

The articles don't even need to have any content.


This is why we need more regulation, so companies can't pull this shit...

The benchmarking clause is unenforceable. It's like an auto manufacturer adding a clause saying you can't track gas mileage.

What, are you supposed to just ignore how much gas you put in the tank? It's no different when your servers slow down and run up your electric bill.


"Freedom of speech is a principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or sanction." https://en.wikipedia.org/wiki/Freedom_of_speech

Phoronix already tested this: https://www.phoronix.com/scan.php?page=article&item=l1tf-ear... and https://www.phoronix.com/scan.php?page=article&item=l1tf-for....

The performance loss isn't that bad in most cases.


You might be right in "most" cases, but looking at the benchmarks, there are quite a few cases which are used a lot in everyday computing where performance literally hits rock bottom. Video rendering went from 600 frames per second down to 200 frames per second. That is a performance loss of more than 60%.

Look at the default mitigation; the full one disables HT. Unless you're a cloud provider, the default is probably enough.

These seem to be Linux's own mitigations against L1TF. There's no mention of the microcode being updated here, so I assume it wasn't.

I assume the microcode update could be either equal or slightly worse in terms of performance, as the CPU might need to flush more frequently.

Which is pretty sad, as the status of kernel+microcode updates is quite confusing already. Some mitigations can take advantage of new microcode updates, if the kernel is recent enough. How does the pure software workaround compare in terms of performance to the microcode-assisted one?

Note that combining all the workarounds for Meltdown, Spectre v1/v2 and L1TF can cause a significant performance hit for some workloads which are not purely CPU-bound. On top of that, HT is now looking like a bad idea to start with.

I'm pretty sure that for a server where there's a lot of I/O and virtualization going on, enabling all the patches and workarounds plus disabling HT can cause a massive cut in overall throughput.


I assumed the tests were run on the new microcode version, as it was released a week before the tests, but it seems that Ubuntu hadn't shipped that update yet.

Their security advisory says that

> Optimized L1 data cache flushing is available via intel-microcode updates. The updated kernels implement a software fallback cache flushing mechanism for processors that have not received microcode updates.

It looks like the kernel will say "VMX: conditional cache flushes" on the new microcode so according to the Phoronix screenshots, they were running the older version.

Maybe we'll see a new set of benchmarks.
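
Until then, a rough way to record exactly which microcode revision and L1TF mitigation a given run used is to read the kernel's own reporting (Linux only; the l1tf sysfs entry exists only on kernels carrying the L1TF patches, and this snippet is just an illustrative sketch):

  from pathlib import Path

  def microcode_revision():
      # /proc/cpuinfo carries a "microcode" field per logical CPU.
      for line in Path("/proc/cpuinfo").read_text().splitlines():
          if line.startswith("microcode"):
              return line.split(":", 1)[1].strip()
      return "unknown"

  def l1tf_status():
      # Present on kernels with the L1TF patches; describes the active mitigation.
      p = Path("/sys/devices/system/cpu/vulnerabilities/l1tf")
      return p.read_text().strip() if p.exists() else "not reported by this kernel"

  print("microcode revision:", microcode_revision())
  print("l1tf mitigation:", l1tf_status())

Attaching that output to published numbers would at least make it unambiguous which combination of kernel mitigation and microcode a benchmark actually measured.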


The linked articles state that the testing was done with the mitigations provided in the latest kernel enabled. They do not say that the microcode updates were applied during the testing.

Also, there was significantly reduced performance when hyperthreading was disabled, as required to fully mitigate the vulnerabilities. If the microcode update changes the behavior of hyperthreading in order to fully mitigate the vulnerabilities without having to disable it completely, then there is a chance these changes affect the performance benefit of hyperthreading.

Given Intel's anti-benchmark license clause, and absent evidence otherwise, I assume it is not just a chance but a strong likelihood that the performance benefits are significantly impacted.


It is sad to see Intel moving this way, but the Internet will find a way around this. As it so often does.

So what will these guys do?

https://www.cpubenchmark.net


Under EU legislation these rules are null and void. In addition, you can't impose rules to stop scraping of databases. The only rules that do apply are patents and copyright, if you want to redistribute/sell.

You are free to change, modify, or reverse-engineer any product you can view or get access to.



The no-publishing terms are pretty much null and void outside the USA. Many non-US countries have unfair-contract laws that make certain contracts (EULAs) illegal and unenforceable. So if you and your site are outside the USA and your country has the appropriate consumer protection laws, benchmark and publish away.

I would be certainly interested in the level of degraded performance.


Intel release a faulty product, and to get the fix you are required to sign up to additional terms and conditions. Is that even legal? If it is, it shouldn't be.

Does Intel seriously think there won't be hundreds of people publishing benchmarks?

We live in a time where thepiratebay still thrives, Snowden did his thing and WikiLeaks is a thing, and Intel thinks they'll stop someone publishing benchmarks... of a CPU?


I'm mildly surprised to do a quick search on this HN thread and not see the word "adhesion" appear. These microcode patches are for a significant design defect in a product that Intel already sold under both explicit and implicit promises and expectations of general functionality. The microcode patches aren't some luxury, fully optional deal to make pretty patterns with case LEDs or something; they're a matter of critical safety, and failing to install them might even expose downstream users to liability issues were the security problems to ever subsequently be exploited and then cause damage to a 3rd party (or even themselves; cybersecurity insurance is still a pretty new area, but one of the few policies I've seen had print I understood to be along the lines of "insured is expected to take reasonable precautions staying up to date with known patches"). Furthermore, Intel obviously presents no other options for users, using cryptographic signing to ensure that (even ignoring practical issues) there are zero other places to get microcode from.

At least from my layman's perspective this certainly smells like an absolutely 100% textbook contract of adhesion. It's a "take it or leave it" offer, there is no room for bargaining, the term is far outside of reasonable expectations for the situation, it's simply an entirely one-sided item for the pure benefit of Intel leveraged coercively. In fact I think it may go far enough to hit the doctrine of unconscionability even. All this without even touching on any public interest issues.

I think they should be challenged on this, that benchmarks should be published and Intel told to pound sand. Companies can stick whatever they want in a contract but that doesn't make it enforceable. And while admittedly that can often be quite gray territory, and people toss out "not enforceable!!!" on the Internet far, far more often than is justified, this particular instance really does look egregious. Yes, normally you can contract away your speech rights, but that's in the context of a real contract, with reasonably equal bargaining positions, proper consideration, etc. I think this goes too far.

Perhaps Intel's real aim though is actually slimier, lots of major review sites depend pretty heavily on access that Intel (and other vendors) offer as well as free/cheap kit and the like which is optional and much easier to yank away at a whim. For those publications this may be a shot across the bow, of the sort "sure, you can challenge this if you want but it's a warning that if you do we can still punish you for it regardless of you winning the case."


I'm not yet convinced this was not simply user error in the legal department, or in the department that deployed the microcode to the website. The longer Intel is silent on the issue, the more likely it is that this was their intent.

No benchmarking?

Just benchmark and publish results anonymously. Intel can't take them down since the results are not illegal, only the "act" is prohibited by the EULA, which is not enforceable anyways.

Intel should be seriously punished for trying to play like this.


Intel has already made an important effort toward being open source friendly. In my opinion they should keep playing the nice-guy role and avoid PR fiascos.

A total disaster. From all I know, any thorough fix of the speculative execution exploits will slow down the processors significantly. Unless the terms are just a terrible blunder (which would be bad enough on its own), this points to a rather big marketing nightmare coming up. First of all the bad PR for those terms, and of course, sooner or later there will be thorough benchmarks on the net. I would even guess one of the larger hardware review sites will openly ignore the terms, run them, and wait for Intel to sue. I could not imagine such a lawsuit achieving anything but directing even more attention to that publication and the obtained results.

So, has anyone done any comparison or benchmarks?

Back in 2000 or maybe 2001, Intel had software one could use to create an image (something analogous to Paint or Photoshop). If I recall correctly, the novelty was that it was online.

The license for that software was that Intel owned all rights to everything produced with it - your art was not your own.

My Google-fu is weak today. Does anybody remember the name/have a reference?


What's the over/under for seeing published benchmarks? Monday, 2018-Aug-27 1200 UTC seem reasonable?

These pernicious attempts to chip away at free speech through contractual riders are becoming more common and blatant.

The best way to combat this sort of thing is for journalists, and anyone else discussing the issues of this vulnerability, to make a point of bringing it up, repeatedly. Whenever performance is an issue, point out that Intel is restricting the impact from being evaluated, which suggests that it is bad. Whenever security is an issue, point out that Intel is restricting the dissemination of the mitigation by imposing self-serving conditions on its use.


> You will [..] not allow any third party to [..] publish or provide any Software benchmark or comparison test results

I read this as: everyone who distributes this has to change THEIR ToS to explicitly forbid THEIR users from providing benchmarks. I'm surprised any distro distributes this at all.


I wonder what the folks over at top500.org will think of that?

So... anyone want to link to some benchmarks hosted somewhere Intel can't mess with? I have Tor Browser.

I'm sick of this kind of restriction from anybody. They are obviously blind to the reputation damage they inflict on themselves.

Regardless, what would be a compelling reason for which I should buy from Intel again? What kind of credibility do they build for themselves?


Intel's poor handling of the Meltdown fiasco and now this tomfoolery, means they have lost me as a customer forever.

I'm on the AMD train now, never buying Intel again.


> I'm not blaming Intel for this, I don't know if Intel could have foreseen the problem.

The potential security issues with out of order processing were noted publicly at least as far back as 2007. The problem is that, on the whole, the entire industry doesn't really give a shit about security until it's too late, which is exactly the wrong time to start.

I don't foresee this changing anytime soon. It will be interesting to see the downside of AI. Or maybe terrifying is a better word for it, since it could foreseeably include personal physical security. But before that time comes, people who think like me are still just pointy-headed paranoid security losers, I reckon.


Nobody is thinking about security and AI. I wonder how long it will be before someone images a neural net embedded in some product and then figures out how to trick it into behaving in some way that attacks some internal part of the program hosting it. I'm imagining things like images that, when shown to image classifiers, cause the classifier neural network to buffer overflow the conventional program hosting it and inject shellcode.

Many types of neural networks are Turing-complete and are written in C by academics with no security experience. Fun times ahead.


I wish lawmakers would tackle these kinds of ridiculous provisions in EULAs. When you purchase something, you own it, and you should be able to do what you want with it.

You shouldn't have to jump through absurd legal hurdles to use and talk about something you legally purchased. These aren't national security secrets for god's sake.


They do. The same EULA will have different binding force in the US and in Europe, for example. In many EU countries companies cannot draw up arbitrary EULAs. If some sections infringe on consumer rights, then those sections simply don't apply and cannot be enforced. In essence, Intel can write whatever they wish in the EULA, but the consumer can do whatever they want as well. Depending on the consumer's jurisdiction, it can create problems or not. The US judicial system is particularly unpleasant for the regular Joe, but the rest of the world is not. Go figure.

Why not make it optional? I don't care about that level of security on most of my systems; I, and I hope the rest of you, have ways to mitigate most security issues like this anyhow. It's ridiculous.

Gag benchmarkers ? That is the Oracle way.

I'm willing to bet the license agreement in OEM firmware updates that also include the microcode patches, has this language.

Let the benchmarks roll in...

https://access.redhat.com/security/vulnerabilities/L1TF-perf

An estimate of the man-hours spent on this issue outside Intel:

https://www.servethehome.com/intel-publishes-l1tf-and-foresh...

Though frequently mentioned, Phoronix did not run a benchmark comparing before and after application of the microcode update. Excerpt: "To note, no microcode changes/updates were made to the systems under test for this article, just testing/comparing the kernel patches" (https://www.phoronix.com/scan.php?page=article&item=l1tf-for...)


> For companies like Google and Microsoft with the ability to get custom chips, and with custom schedulers that can ensure that VMs to not cross hyper-threading boundaries, this is something that can be relatively easily mitigated. For enterprise virtualization clouds, this may increase utilization of underutilized servers, and cause more server purchases in the future.

Is this net-positive for VMware since customers will be required to buy more licenses for the same workload?

Or is it net-negative because it makes public clouds more competitive?


What if I install the microcode then sell the CPU, then the buyer does a benchmark? Am I banned from selling the CPU after installing the microcode?

Earth is flat and the theory of evolution is bogus. You will not, and will not allow any third party to, publish or provide any benchmark or comparison test results. Done with Science.

I hope the Streisand Effect[1] makes them regret this decision.

[1] https://en.wikipedia.org/wiki/Streisand_effect


Would car manufacturers get away with this? "No emission tests allowed!"

This raises the question of how you sort null with respect to processor performance.

Given four benchmarks (7, null, 8, 6), where do you place the null?


Intel themselves have published before and after benchmarks, though.

https://www.intel.com/content/www/us/en/architecture-and-tec...


>No benchmarking allowed

If this isn't a smoking gun about performance loss due to vulns, I don't know what is. Intel is in hot water.


Response from Intel that includes a new version of the license is here:

https://twitter.com/imadsousou/status/1032680311753072640

Disclaimer: I work at Intel


For discussion...

Question: Are these license restrictions on right to disclose benchmarks enforceable?

Question: If they are enforceable, do licensors ever try to enforce them? If not, why?

A little background here: https://danluu.com/anon-benchmark/

For example, this has been posted to HN at least twice recently:

https://clemenswinter.com/2018/07/09/how-to-analyze-billions...

Question: Was the author subject to any restrictions on publication? If yes, did the author seek "permission" from the licensor to publish these findings?

Excerpts from some of the licenses:

2.2. 32 Bit Kdb+ Software Use Restrictions

(c) 32 Bit Kdb+ Software Evaluations. User shall not distribute or otherwise make available to any third party any report regarding the performance of the 32 Bit Kdb+ Software, 32 Bit Kdb+ Software benchmarks or any information from such a report unless User receives the express prior written consent of Kx to disseminate such report or information.

kdb+ on demand - Personal Edition [64-bit]

1.3 Kdb+ On Demand Software Performance. End User shall not distribute or otherwise make available to any third party any report regarding the performance of the Kdb+ On Demand Software, Kdb+ On Demand Software benchmarks or any information from such a report unless End User receives the express, prior written consent of Kx to disseminate such report or information.

This Kdb+ Software Academic Use License Agreement ("Agreement") is made between Kx Systems, Inc. ("Kx") and you, the University, or employee of the University ("End User") with respect to Kx's 64 bit Kdb+ Software and any related documentation that is made available to you in (jointly, the "Kdb+ Software"). You agree to use the Kdb+ Software under the terms and conditions set forth below. This Agreement is effective upon you clicking the "I agree" button below.

1. LISCENSE GRANTS [sic]

1.4 Kdb+ Software Evaluations. End User shall not distribute or otherwise make available to any third party any report regarding the performance of the Kdb+ Software, Kdb+ Software benchmarks or any information from such a report unless End User receives the express, prior written consent of Kx to disseminate such report or information.

Kdb+ software end-user agreement:

By accessing the Kdb+ Software via the Google platform, you are agreeing to be bound by these terms and conditions (which may be updated from time to time) and to the extent you are acting on behalf of a permitted organization that you have authority to act on their behalf and bind them to these terms and conditions.

You may not access the Kdb+ Software if you are a direct competitor of Kx.

4. Benchmark Test Results. User agrees not to disclose benchmark, test or performance information regarding the Kdb+ Software to any third party except as explicitly authorized by Kx in writing.


So, as expected, and as a colleague of mine pointed out, Imad Sousou from Intel's Open Source Technology Center has clarified:

We have simplified the Intel license to make it easier to distribute CPU microcode updates and posted the new version here: http://bit.ly/2w9RjtM . As an active member of the open source community, we continue to welcome all feedback and thank the community


> I don't know if Intel could have foreseen the problem. Since some similar exploits have been discovered for AMD and ARM CPUs, the answer is probably "no".

the answer is they probably both should have.

speculatively executing code that is time-sensitive to privileged data should have been caught. timing attacks on this level have been known for at least a decade. for that reason I don't quite believe that nobody at Intel (or AMD) was aware of the possibility of these attacks before anyone in the security industry published about it. they should have been more responsible instead of just waiting until it broke.

everything about this saga adds up to economics, business and production reasons why 1) not enough people were paid to look for these kind of problems, 2) the microcode developers that might have become aware of potential issues didn't have a good avenue to raise them, 3) there seems to have been NO proper roadmap whatsoever at either of these (rather large) companies for responsibly addressing, fixing and patching mistakes of this level. the whole response seems to be completely ad hoc, like it was some kind of one-in-a-million act-of-god thing that nobody could have foreseen.

it's not a super obscure bug. it's a side channel cache timing attack, the likes of which have been well-known for over a decade.

if Intel and AMD both thought, during the past decade, "well we know about side channel cache timing attacks now, and this is probably the worst they'll ever get", they don't know quite an important rule about security: exploits only get worse, never better. that inaction definitely doesn't fall under "could not have foreseen".

