Have you ever watched a nature documentary where birds trying to attract mates put on exquisite displays of amazingly useless feathers? (What's the feather equivalent of foliage, hmm...)
No it wouldn’t, but the equivalent here would be that the rig with LEDs and multi color heat sinks (real and fake) gets purchased and propagates certain decorative meme choices with the product design team.
Was looking into building a new computer today, after 5 years with my current one... Saw that even the RAM sticks come with LEDs. I don't want a single light anywhere, but that's not going to happen ¯\_(ツ)_/¯
I mean, it's not wasting that much of any of those. It's probably less wasteful to just put LEDs on all those motherboards than to create separate product lines with and without the flair. If some decent majority of the kind of people who build their own PCs (gamers) want them, then it's cheaper to skip the tooling and design costs of a separate "pro" line of motherboards for the people who don't want LEDs, since they can just use an opaque case.
It wastes a lot! Unintuitively, for the things that do all the heavy lifting, chips are incredibly power-efficient. The stuff on the side, anything that glows or moves (LEDs and fans) takes up a disproportionate ton of power.
I'm talking about the overhead involved in making the separate lines. That's an interesting point, though counterintuitive, since I don't need to build in coolers for the heat generated by LEDs. Do you have numbers on power usage?
> It wastes a lot! Unintuitively, for the things that do all the heavy lifting, chips are incredibly power-efficient. The stuff on the side, anything that glows or moves (LEDs and fans) takes up a disproportionate ton of power.
Not at all true. It's around 0.1-0.2 watts per LED. A RAM stick is going to have maybe a dozen at most, so that's an extra 1-2 watts. Similar for a motherboard. Maybe a fully blinged-out build is burning 10-20 watts on RGB, give or take.
That's comparable to the power usage of just the VRMs or the X570 chipset (which is 10-15W), to say nothing of the CPU or GPU itself (both of which commonly blow past 100W under load).
LEDs are very, very efficient. They do not take a "disproportionate ton of power."
2W is the power budget for an entire single-board computer I can run a desktop on. So it's quite a lot in absolute terms for an LED inside the case that I'll never see.
10-20W is 10-40% of idle power consumption, perhaps, for a workstation. So it wastes a lot even in relative terms.
You missed the "fully blinged out" part. If you just get RGB on the motherboard and RAM, the harder ones to avoid, it's going to be well under 5W. More like 2W. This is a power cost that basically doesn't exist in the context of the entire system.
There's no reason not to have a simple on/off mechanism (hw or sw). My 6-7 year old MB has some LED traces on the PCB for some reason. Luckily they are dim, static, and ASUS included an option to turn them off in the BIOS. Some other components on the market now don't have these "luxuries". They are unreasonably bright, blinking, and can only be disabled with a soldering iron.
If you were the kind of person who gets annoyed by low levels of light (e.g., when trying to sleep) then you would've noticed by now that putting a light inside something with as many holes as a computer case does not block all the light.
(Also computer hardware vendors seem particularly fond of blue LEDs, which is the most annoying color to some people.)
Because it limits what I can get. Some light might be leaking out of whatever case I end up with. Why is it on everything? It's a waste. I didn't want it when I was 16, I don't want it when I'm 30. I can't be alone.
You don't have to turn them on? I get the feeling people don't realize they have control over not just the color and action, but whether they're on at all.
And yes, I mean RAM, GPU, motherboard, fans, etc. advertised as "RGB". If it's the same price and the LEDs can stay off, then for all intents and purposes it doesn't have LEDs.
> I get the feeling people don't realize they have control over not just the color and action, but whether they're on at all.
Not a single vendor supports Linux. Luckily some things have been reverse engineered, but it's very far from "just turn it off". Even on windows it sucks if you're mixing brands. MSI Mobo, Asus GPU, Corsair RAM, gonna need 3 separate programs running at startup to control those LEDs.
The control software sets the state and doesn't need to stay installed beyond that. I downloaded the trial Win10 image for a VFIO VM, disabled all the lighting, and never thought about it again. I may choose not to use Windows, but I'm not going to overlook a tool because I'm "Linux-only".
I get that some object to non-open standards on principle, but RGB is not a serious impediment to availability or use. Looking, there are non-RGB versions of most current-gen hardware available from popular resellers, and there are plenty of OEM options that are RGB-free. What I don't have racked somewhere else is in solid 'silent' cases with as few moving parts as required, but I bought parts with minimal/no lighting anyway.
I certainly hope that everyone can find something amenable to their situation, but I also feel that objecting doesn't help when an hour's work or research beforehand can alleviate the cause.
That was pretty much my attitude until I wanted a high-airflow case and switched to a Cooler Master H500 (2x 200mm, 3x 120mm Noctua: two in the top, one on the back) with a glass side... a tinted glass side, at that.
The RGB stuff sorta grew on me and it's kind of interesting on Linux to be able to see the CPU fan stop spinning completely (never seems to happen on W10 though...).
People have been mentioning very practical reasons but I just wanted to add that for me, even if none of these practical reasons were to apply (and I also don’t think the practical reasons are very strong), it’s simply about aesthetics.
It’s extremely hard to build a PC that would fit my personal aesthetics and even if I can turn those off the mere thought of having to configure that and the mere thought of those LEDs existing at all offends me on a deep level.
You have limitless choices when it comes to building a PC but the aesthetics of the things you can get are awful and tiring and just … ugly.
"game cache" branding is stupid but it is L3 not just "more memory" and the amount that Ryzen 3rd gen has is actually significant. So much so that it hugely impacts some workloads like GCC compilation times, which are way, way faster on Ryzen 3rd gen than anything else thanks entirely to that huge amount of game cache, I mean L3 cache.
By far the worst PC component I have come across containing LEDs is Kingston's HyperX Fury RGB SSD [1]. It's a 2.5" SSD with 75 LEDs! These LEDs generate so much heat (70°C+) that the drive thermal throttles and either prevents the operating system from booting or causes significant performance issues [2].
May well not be unreasonable for a solid state component.
Also, keep in mind that MTBF is not expected lifetime or anything at all like that.
What a 1 million hour MTBF actually means is that if you had 1 million of these drives running at the same time, you'd expect one failure per hour. (Or similarly, if you had 1000 drives, you'd expect a failure every 1000 hours).
The mean rate of failures over time may not be constant, but it may have a mode for an expected lifetime window. This modal mean failure rate is what's being described, I reckon.
It would be nice to know the time window, but I'd expect it to be somewhere in the 5 to 10 year range.
Those rates are surprisingly low considering that's spinning iron. I'd love to see a similar data set for SSDs.
Of course, it also depends on what you count as a failure. Is a drive that can no longer write but can still read a "failure"? Probably - but that's a considerably softer way to fail than a hard drive with a bad bearing or head.
A lot of SSDs that fail early, fail because of their firmware. That MTBF probably only covers the reliability of the flash chips but not the controller.
So my comment is 100% nitpick but I do want to make a point here.
X year failure rate divided by X is not the same as AFR. AFR will always be higher, because it has to compensate for each year's pool of potential failures shrinking.
The correct equation is already there, 1-e^-(n/114), and the correct number is .87%. And while that's only a percent off, it's lucky to only be a percent off. If MTBF was 113 years it would be .86 vs. .88, which is the difference between two digits of accuracy and one and a half digits of accuracy. Losing a quarter of your accuracy is not great!
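For anyone who wants to check the arithmetic, here's a minimal sketch in C, assuming the usual exponential failure model and the 1,000,000-hour (~114-year) MTBF quoted upthread; link with -lm:

    /* Hedged sketch: annualized failure rate under an exponential
       failure model, AFR = 1 - e^(-t/MTBF) with t = 1 year. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double mtbf_years = 1e6 / (24.0 * 365.0);   /* 1,000,000 h ~= 114.2 years */
        double afr = 1.0 - exp(-1.0 / mtbf_years);  /* one-year window */
        printf("AFR = %.2f%%\n", afr * 100.0);      /* ~0.87%, vs. 1/114 ~= 0.88% */
        return 0;
    }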
LEDs are the modern blinkenlights. They can be useful if the OS supports them properly (Linux is slowly but surely getting there), so that failure states or heavy load can be conveniently signaled in a way that doesn't impact actual, on-screen work.
I wouldn't mind components with a couple of tiny status LEDs, toggleable with a small dip switch or whatever. The issue is that manufacturers add these to simply get "rad-points."
It's worse if you get the inexpensive RGB stuff that doesn't link into some central control, and everything is pulsing on its own schedule. Source: on my last home server build I got a case with RGB and a power supply with RGB because it was the same cost with or without, and the thought of the server blinking away in the basement was amusing.
It's silly to pretend like this is a new thing, I and plenty of other teenagers (and non-teenagers) were putting car-kit neons in our custom-modded cases in 2000.
Thankfully my tastes have grown since then, but I can't blame kids today for doing the same thing I was doing (though I can grumble that they have fancy cases and LED controllers and AIO coolers where I had beige boxes and hole saws and custom wiring and fish tank gear).
However, the difference is now that we are effectively being forced to pay for features we do not want and which are a waste of material resources and power. Previously, you would have needed to get those extra bits yourself.
That VRM heatsink is the exact same thing as RGB on gamer boards - it's just there to look the part. It's not a good heatsink design at all. Its purpose is mostly to "look professional", not to function. If it were really function-first it'd be thin fins with bigger gaps between them to aid airflow between the fins.
I'd prefer a completely regular black anodised heatsink without any "design" but that's the closest any current board gets to it.
Supermicro server boards are about as utilitarian as you can get, with off-the-shelf heatsinks slapped on them; it's only the consumer gear that wastes money on this stuff.
Despite being gaudy, what you are seeing are legitimate arrays of heat spreaders and sinks... these tend to be on high-performance motherboards, which often support good overclocking capacity.
Most of it is useless (I also prefer minimalist hardware). However the new x570 chipset for AMD motherboards requires active cooling as the TDP has increased from 5-7W to 11W in order to support PCIe 4.
the thing I'd like to see:
implementations in "ispc vs asm vs intrinsics_msvc vs intrinsics_clang vs intrinsics_gcc" on Intel and AMD CPUs. Those "Quake now runs at 300 FPS" things tell me exactly nothing. (Read: I am more interested in testing new features.)
I'd just like to say for anyone considering one of these new processors, there are some downsides I wasn't expecting. Overall I'm fine with them, but knowledge is power so here you go:
1. RDRAND returns -1 (and sets the carry bit, meaning it thinks -1 is a random number). Supposedly this will be fixed in a BIOS update (which makes me think it's an intended mode wrongly turned on... a backdoor?). A quick sanity check is sketched after this list.
2. On linux, CPU monitoring (e.g. temperature) is not available as AMD hasn't published the specs on that yet
3. While it has spectre "mitigations" (stibp, ibpb, ssbd), spectre is not yet dead (spectre_v1, spectre_v2, spec_store_bypass).
5. PSP/SEV ccp driver calls hang (on my machine at least, might be the mobo) but luckily linux only waits a few seconds before giving up and continuing the boot process.
6. There have been reports of high idle voltages, but under linux w/o monitoring I have no idea if I'm affected or if the issue is real or significant.
For most use cases all of the above are minor and don't overshadow the price-performance success of these processors.
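If you want to check whether your chip is affected by the RDRAND issue in point 1, here's a rough sketch of what I'd run (assumes x86-64 with GCC/Clang and <immintrin.h>, compiled with -mrdrnd; the all-ones value with the carry flag set is the reported failure mode):

    /* Hedged sketch: probe for the launch-day RDRAND bug, where the
       instruction reports success (CF=1) but always returns all-ones. */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        unsigned long long v;
        int all_ones = 1;
        for (int i = 0; i < 8; i++) {
            if (!_rdrand64_step(&v)) {      /* 0 means the instruction itself failed */
                printf("RDRAND not ready\n");
                return 1;
            }
            if (v != ~0ULL) all_ones = 0;   /* any non-all-ones value clears the alarm */
        }
        printf(all_ones ? "RDRAND looks broken (all ones)\n"
                        : "RDRAND looks sane\n");
        return 0;
    }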
That's not how things are done. Practically every library that has optimized implementations for stuff selects the implementation at runtime based on CPUID.
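For illustration, that runtime selection often boils down to something like this minimal sketch (using GCC/Clang's cpuid.h; the SHA extensions flag is CPUID leaf 7, EBX bit 29, and sha256_hw/sha256_sw are hypothetical names, not any particular library's API):

    /* Hedged sketch of CPUID-based dispatch. */
    #include <cpuid.h>
    #include <stdio.h>

    static int cpu_has_sha(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 0;
        return (ebx >> 29) & 1;   /* CPUID.(EAX=7,ECX=0):EBX bit 29 = SHA extensions */
    }

    int main(void) {
        if (cpu_has_sha())
            printf("would dispatch to sha256_hw()\n");   /* SHA-NI accelerated path */
        else
            printf("would dispatch to sha256_sw()\n");   /* portable fallback */
        return 0;
    }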
Exactly. That assumes the libraries have optimized implementations, which isn't always the case, and binaries compiled for exactly one CPU tend to perform better.
Don't know why the downvotes; do people think optimized implementations utilizing certain CPU features or combinations of them are somehow super common?
People think that because it is correct. Any serious crypto library (libsodium, OpenSSL, crypto++, cryptlib, ...) that comes to mind uses this approach, codecs use it, your C standard library uses it etc.
"One advantage of AMD over Intel is AMD included SHA hardware acceleration on every Ryzen processor."
What do you think is more common? People rolling their own SHA implementation or people calling <insert crypto library>'s SHA256()? The former will barely get any sort of optimization, regardless of compiler flags, the latter will automatically use the SHA extensions.
You indeed have a point about SHA extensions and people predominantly using a good crypto library. I guess I should've been clearer that I meant new extensions in general are a bit underutilized.
I actually wrote up about this back in 2017 [0]; the general feeling in the community at the time was that it would be “right around the corner” from Intel, but that hasn’t really happened.
>1. RDRAND returns -1 (and sets the carry bit meaning it thinks -1 is a random number). Supposedly this will be fixed in a BIOS update (which makes me think it's an intended mode wrongly turned on... a backdoor?)
Yet more evidence not to trust hardware random number generators.
I'm very surprised that something like that even makes it into production. You'd think processor manufacturers would have an extensive set of test cases, at least one for each instruction...
I remember reading somewhere that, for a long time, Intel's "test cases" basically consisted of all the operating systems they could find, going back to MS-DOS 1.0 and including some more obscure ones, as well as tons of other third-party software.
Unfortunately it seems this new generation no longer follows the "NEVER break backwards compatibility" principle that made the PC such a great platform, and as a result we're seeing hardware/software/firmware that's increasingly buggy and divergent. It used to be the case that if a new processor couldn't run existing software (even that which depended on undocumented but constant behaviour) unmodified, it was a fatal flaw.
They have way, way more than test cases. The logic synthesis, simulation, verification, and validation capabilities of top tier EDA tools are incredible.
The problem is as it always is with these kinds of things. They help assure that you can measure your model and your implementation of that model. They're much less capable of determining you've got the right model to begin with. That part is still up to the humans.
The state of commonplace capabilities for testing and verification of software is extremely far behind that of hardware, for a variety of economic, cultural, and use-case reasons.
Everything on a CPU is subject to heavy validation... but I bet the random number generator is exempt, because by definition it's not deterministic. It's more like a peripheral "listening" to randomness.
> "The RNG uses 16 separate ring oscillator chains as a noise source. Each chain consists of a
different prime number of inverters in a free-running configuration which capture natural clock jitter"
So it's (deliberately) dependent on natural manufacturing variation in the silicon, and in any deterministic simulator won't produce random numbers. It also takes a while to start; it's possible that the startup conditions are bad such that "a while" turns out to be "forever".
Debian buster VM on an old CPU without RDRAND takes about 5 minutes for the kernel crng init to complete and for sshd to start accepting connections. Stracing the sshd process shows that it's indeed blocked on `getrandom()`.
In some ways, it's very much WONTFIX because something like sshd might actually need the entropy. I'm glad that systemd has proper workarounds implemented so that the machine actually boots out of the box, though.
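For the curious, you can probe that same condition without blocking. A small sketch, assuming Linux with glibc 2.25+: getrandom() with GRND_NONBLOCK fails with EAGAIN until the kernel CRNG is initialized, which is exactly what sshd ends up waiting on at boot.

    /* Hedged sketch: non-blocking probe of the kernel CRNG state. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char buf[16];
        ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);
        if (n < 0 && errno == EAGAIN)
            printf("CRNG not yet initialized; a plain getrandom() would block\n");
        else if (n == (ssize_t)sizeof buf)
            printf("CRNG initialized; got %zd random bytes\n", n);
        else
            perror("getrandom");
        return 0;
    }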
I still believe we should never rely on RDRAND, because we cannot trust the hardware RNG implementation. To me, anybody who uses RDRAND is an idiot unless it's for a video game. I said their argument was "reasonable"; I do not agree with them.
Eagerly awaiting 3rd-gen Threadripper. That CPU will completely obliterate Intel in the HEDT/workstation market. Best I can tell, short of moving AVX512 (along with, e.g., VNNI) into their HEDT chips (which is not going to happen), Intel has nothing to counter this threat.
Looks like you're right. The pricing on those chips is pretty outrageous at the moment though.
MKL and MKL-DNN take advantage of AVX512 and VNNI automatically, if available. Lots of scientific software (basically all of it that cares about CPU perf) use MKL and MKL-DNN.
Simply having AVX512 support doesn't really tell you the whole story, though. You'd want to actually benchmark it and see how much it's helping and then how it compares to the competition. AVX512 comes with an insane clock speed penalty, so while it can still put up some really impressive numbers, mixed workloads can also end up not any faster (or even slower).
Yep I've heard this as well. More often than not it's faster to avoid AVX512 on recent Xeons. I don't think the same is true for Xeon Phi but those are going away.
Those are significant single thread gains. It is concerning to me that BIOS firmware has such a large influence over CPU performance. Buy from motherboard manufacturers that are known to emphasize BIOS quality and performance.
There was also more performance left on the table in the initial AnandTech tests on 7/7; conservative DDR4-3200 JEDEC memory timings were used throughout. More aggressive memory profiles will improve performance, and AMD CPUs are known to benefit greatly from better RAM performance.
So despite the already good results of Ryzen Gen 3 we can expect better performance as the platform is optimized.
> It is concerning to me that BIOS firmware has such a large influence over CPU performance.
This is probably related to the AGESA blob which sets up the power and frequency controls. These come from AMD, and as long as your vendor is reasonably prompt, you should be ok. When other vendors update for AGESA, they'll include that in the notes, so you can pester your vendor for an update.
They just wrote the reasoning why: because the firmware affects the system so much.
As for the suggestion, that’s what I’ve been struggling with too. The gaming motherboards popular for self assembly seem to be getting better at updating the bios, but the manuals I’ve read still recommend against updating the firmware. Which makes me guess you’d want to wait for the PlayStation 5 or the next Xbox, or a PC vendor that is associated with fwupd, as that’s likely a good indicator that they care about firmware.
Sorry, by reasoning I meant the reason why you consider a particular vendor worthy of being associated with quality and performance. The internet currently seems lacking in anything approaching a longitudinal study of hardware vendors - it’s all a sea of opinions.
Suggestion: purchase from manufacturers that have shown they are responsive to enthusiasts. Are representatives actively monitoring customer feedback? Otherwise you may find yourself waiting long periods (or forever) on an indifferent manufacturer.
Why: the Gen 3 Ryzen is a very frequency-agile design. Performance depends strongly on how rapidly core frequency is adapted to load. Frequency is governed by firmware, so optimal firmware is crucial.
Someone else speculated that the relevant firmware is the AMD-provided "AGESA blob." Whatever it's called, you're relying on the board vendor diligently bundling the latest version in a timely manner and not dragging their feet when there are issues with newer versions. Also, power supply is a function of board design, and I wouldn't be surprised to learn of feedback mechanisms that afford the board designer some degree of control; given the cost of high-performance capacitors this actually seems pretty likely. If so, then merely updating the blob may not be enough to ensure optimum results.
In any case I suspect the firmware that shipped on 7/7 is far from optimal given the pressure to deliver, so pursuing newer firmware will likely have significant ongoing payoffs.
Like previous Ryzen lineups, the entire line has unbuffered ECC support. Whether or not it's officially supported will depend mostly on the motherboard & BIOS and whether they tested & validated it.
I have multiple Kaby Lake boards which haven't received any BIOS updates in > a year. It depends upon the model line and as far as I can tell they will not tell you how long support is before you buy one.
What are you expecting an updated BIOS to do for that platform? Kaby Lake was the last CPU line for that chipset series since Coffee Lake shuffled some pins around. So what are you wanting updated that Asus hasn't provided?
There are issues with the BIOS on one of the systems randomly changing the M.2 PCIe speed between 4x and 2x. It only exists in the most current version of the BIOS and doesn't happen on older versions.
There are also other issues that should be fixed but haven't been.
For what it's worth, Asus has provided excellent support for the PRIME X370-PRO motherboard. It was a bit of a gamble when I purchased a PowerSpec Ryzen system from Micro Center, but Asus has released regular BIOS updates, including support for higher-speed memory (e.g. I'm running 32 GB @ 3200MHz, Corsair Vengeance chips).
My only complaint would be the lack of detailed change logs, which are often only a single bullet point.
All ASRock AMD boards (or nearly all, but I haven't found any exceptions) support ECC memory.
I'm not sure about Asus' X570 line since they market a board with explicit ECC support, but their lower-end and mid-tier X470 boards supported ECC. Their higher end boards -- curiously -- did not.
You can generally tell if a board supports ECC by looking at the board's manual. If it supports ECC, it will be mentioned.
Intra-CCX latency is now awesome, and all inter-CCX (including intra-CCD) latency is not bad (and goes through IF) - what's interesting is that intra vs inter-CCD latency seems to be within a ns or two. Pretty impressive (considering it needs to bounce through the cIOD)!