AMD Ryzen 3000 Post-Review BIOS Update Recap: Larger ST Gains, Some Gains/Losses (www.anandtech.com)
145 points by rbanffy | 2019-07-12 | 137 comments




What's with all that crap they stuck on the board? You can practically get an epileptic fit from it.

It's practically impossible to get a consumer motherboard without LEDs and huge chunks of milled aluminium on it. Don't get it either...

Have you ever watched a nature documentary where birds trying to attract mates put on an exquisite display of amazingly useless feathers? (what's the feather equivalent of foliage, hmm..)

plumage

Yes, thanks! It was on the tip of my tongue.

Ha ha, but having an RGB gaming rig isn't going to get you laid. Usually the opposite.

No, it wouldn't, but the equivalent here would be that the rig with LEDs and multi-color heat sinks (real and fake) gets purchased, which propagates certain decorative choices within the product design team.

At least birds have good taste.

Was looking into building a new computer today, after 5 years with my current one... Saw that even the RAM sticks come with LEDs. I don't want a single light anywhere, but that's not going to happen ¯\_(ツ)_/¯

Why do you care? Those lights are inside the case. If you don't get a case with glass, you won't ever see them.

It's a waste of metal, plastic, and electricity. Useless in every single aspect you can think of, except for the hardcore gamer pandering bullshit.

I mean, it's not wasting that much of any of those. It's probably less wasteful to just put LEDs on all those motherboards than to create separate product lines with and without the flair - if a decent majority of the kind of people that build their own PCs (gamers) want them, it's easier to skip the tooling and design cost of a separate "pro" line of motherboards for the people that don't want LEDs, since they can just use an opaque case.

It wastes a lot! Unintuitively, for the things that do all the heavy lifting, chips are incredibly power-efficient. The stuff on the side, anything that glows or moves (LEDs and fans) takes up a disproportionate ton of power.

I don't like that X570 needs a chipset fan, but at least cooling's useful. I can't have as much charity toward LEDs. Although I do like 'em. The æsthetic is sick. Although not this one: https://smile.amazon.com/G-Skill-Trident-3600Mhz-PC4-28800-C...


I saw this design on YouTube; it has to be the most senseless and tasteless thing ever sold in computer hardware.

I'm talking about the overhead involved in making the separate lines. That's an interesting point, though - counterintuitive, as I don't need to build in coolers for the heat generated by LEDs. Do you have numbers on power usage?

> It wastes a lot! Unintuitively, for the things that do all the heavy lifting, chips are incredibly power-efficient. The stuff on the side, anything that glows or moves (LEDs and fans) takes up a disproportionate ton of power.

Not at all true. It's around 0.1-0.2 watts per LED. A RAM stick is going to have maybe a dozen at most, so that's an extra 1-2 watts. Similar for a motherboard. Maybe a fully blinged out build is burning 10-20 watts on RGB, give or take.

That's comparable to the power usage of just the VRMs or the X570 chipset (which is 10-15W), to say nothing of the CPU or GPU itself (both of which commonly blow past 100W under load).

LEDs are very, very efficient. They do not take a "disproportionate ton of power."
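
For what it's worth, here's the back-of-envelope math as a tiny C sketch; the per-LED draw and LED counts are this thread's rough estimates, not measured values:

  /* Back-of-envelope RGB power math, using this thread's rough figures
   * (0.1-0.2 W per LED, "maybe a dozen at most" per component). */
  #include <stdio.h>

  int main(void) {
      double watts_per_led = 0.15;   /* midpoint of the 0.1-0.2 W estimate */
      int leds_per_part = 12;        /* ~a dozen per RAM stick or board */
      int parts = 2;                 /* RAM + motherboard */

      double rgb_watts = watts_per_led * leds_per_part * parts;
      printf("RGB total:    ~%.1f W\n", rgb_watts);     /* ~3.6 W */
      printf("X570 chipset: ~10-15 W (per the thread)\n");
      printf("CPU/GPU load: >100 W each, commonly\n");
      return 0;
  }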


2W is the power budget for an entire single-board computer I can run a desktop on. So it's quite a lot in absolute terms for an LED inside the case that I'll never see.

10-20W is 10-40% of idle power consumption, perhaps, for a workstation. So it wastes a lot even in relative terms.


> So it's quite a lot in absolute terms for an LED inside the case that I'll never see.

Then turn them off? The whole point of RGB lighting is it's controllable. That includes setting it to off.


Eh, the point is that 10-20W for a few blinkies is not efficient by any means.

You missed the "fully blinged out" part. If you just get RGB on the motherboard and RAM, the harder ones to avoid, it's going to be well under 5W. More like 2W. This is a power cost that basically doesn't exist in the context of the entire system.

There's no reason not to have a simple on/off mechanism (hw or sw). My 6-7 year old MB has some LED traces on the PCB for some reason. Luckily they are dim, static, and ASUS included an option to turn them off in the BIOS. Some other components on the market now don't have these "luxuries". They are unreasonably bright, blinking, and can only be disabled with a soldering iron.

Some people like a bit of "bling" mate. Calm down.

Alas, you can see them. Blinking LEDs are visually distracting even through fan slots, etc.

If you were the kind of person who gets annoyed by low levels of light (e.g., when trying to sleep) then you would've noticed by now that putting a light inside something with as many holes as a computer case does not block all the light.

(Also computer hardware vendors seem particularly fond of blue LEDs, which is the most annoying color to some people.)


For some time in the 90s, blue LEDs were thought of as premium (they were certainly sold at premium prices), because they were harder to make.

Because it limits what I can get. Some light might be leaking out of whatever case I end up with. Why is it on everything? It's a waste. I didn't want it when I was 16, I don't want it when I'm 30. I can't be alone.

You don't have to turn them on? I get the feeling people don't realize they have control over not just the color and action, but whether they're on at all.

And yes, I mean RAM, GPU, motherboard, fans, etc. advertised as "RGB". If it's the same price and the LEDs stay off, then for all practical purposes it doesn't have LEDs.


As long as I can do it easily in the BIOS menu and it's not some Windows-only software, that's fine. But it's still unnecessary, which will always annoy me.

> I get the feeling people don't realize they have control over not just the color and action, but whether they're on at all.

Not a single vendor supports Linux. Luckily some things have been reverse engineered, but it's very far from "just turn it off". Even on Windows it sucks if you're mixing brands: MSI mobo, Asus GPU, Corsair RAM - you're going to need 3 separate programs running at startup to control those LEDs.


The control software sets the state and doesn't need to be installed beyond that. I downloaded the trial Win10 image for a VFIO VM, disabled all the lighting, and never thought about it again. I may choose not to use Windows, but I'm not going to overlook a tool because I'm "Linux-only".

I get that some object to non-open standards on principle, but RGB is not a serious impediment to availability or use. Looking around, there are non-RGB versions of most current-gen hardware available from popular resellers, and there are plenty of OEM options that are RGB-free. What I don't have racked somewhere else is in solid 'silent' cases with as few moving parts as possible, but I bought parts with minimal/no lighting anyway.

I certainly hope that everyone can find something amenable to their situation, but I also feel that objecting doesn't help when an hour's work or research beforehand can alleviate the cause.


That was pretty much my attitude until I wanted a high-airflow case and switched to a Cooler Master H500 (2x200mm front, 3x120mm Noctua - two in the top, one on the back) with a tinted glass side.

The RGB stuff sort of grew on me, and it's kind of interesting on Linux to be able to see the CPU fan stop spinning completely (it never seems to happen on W10, though...).


People have been mentioning very practical reasons but I just wanted to add that for me, even if none of these practical reasons were to apply (and I also don’t think the practical reasons are very strong), it’s simply about aesthetics.

It's extremely hard to build a PC that fits my personal aesthetics, and even if I can turn those LEDs off, the mere thought of having to configure that, and the mere thought of those LEDs existing at all, offends me on a deep level.

You have limitless choices when it comes to building a PC but the aesthetics of the things you can get are awful and tiring and just … ugly.


You can get all the things without LEDs, and/or just don't plug the RGB cables in.

Yes. But there's less stuff to pick between in every price range.

LEDs can't add too much to the price, can they? Although I'd also like an option not to have extra electronics bolted on if I don't want them.

That's not the point.

The point is to market the premium RGB'd hardware as some game-specific hardware to kids who think "game cache" is anything but more memory.


"game cache" branding is stupid but it is L3 not just "more memory" and the amount that Ryzen 3rd gen has is actually significant. So much so that it hugely impacts some workloads like GCC compilation times, which are way, way faster on Ryzen 3rd gen than anything else thanks entirely to that huge amount of game cache, I mean L3 cache.

I don't think they add to the price at all on the most common parts that have them, so you can't even get away cheaper and happier without!

They add very little to the manufacturing cost but often quite a lot to the end user price. Corsair sells $40 RGB fans.

By far the worst PC component I have come across containing LEDs is Kingston's HyperX Fury RGB SSD [1]. It's a 2.5" SSD with 75 LEDs! These LEDs generate so much heat (70°C+) that the drive thermal throttles and either prevents the operating system from booting or causes significant performance issues [2].

[1] https://www.hyperxgaming.com/us/storage/fury-rgb-ssd

[2] https://youtu.be/vnST5rA64Oc


Life expectancy 1 million hours MTBF

Garish LEDs aside, marketing that sort of blatantly exaggerated spec should be illegal. 1M hours is over 100 years.


May well not be unreasonable for a solid state component.

Also, keep in mind that MTBF is not expected lifetime or anything at all like that.

What a 1 million hour MTBF actually means is that if you had 1 million of these drives running at the same time, you'd expect one failure per hour. (Or similarly, if you had 1000 drives, you'd expect a failure every 1000 hours).


Following your logic:

100 drives - failure every 10,000 hours

10 drives - failure every 100,000 hours

1 drive - failure every 1,000,000 hours...

How is this not expected lifetime?

[edit: split lines]


The mean rate of failures over time may not be constant, but it may have a mode for an expected lifetime window. This modal mean rate of failure is what's being described, I reckon.

It would be nice to know the time window, but I'd expect it to be somewhere in the 5 to 10 year range.


For a 5-year life span, a 1M-hour MTBF (114 years) means the probability of a failure for a single drive is about 4.3% (1-e^-(5/114)).

That equals 0.86% Annual Failure Rate (AFR).

Backblaze regularly provides their real-world-usage drive stats: https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1...

Seagate announced they would use AFR instead of MTBF in the future: https://www.seagate.com/de/de/support/kb/hard-disk-drive-rel...


Those rates are surprisingly low considering that's spinning iron. I'd love to see a similar data set for SSDs.

Of course, it also depends on what you count as a failure. Is a drive that can no longer write but can still read a "failure"? Probably - but that's a considerably softer way to fail than a hard drive with a bad bearing or head.


A lot of SSDs that fail early, fail because of their firmware. That MTBF probably only covers the reliability of the flash chips but not the controller.

> 0.86%

So my comment is 100% nitpick but I do want to make a point here.

X year failure rate divided by X is not the same as AFR. AFR will always be higher, because it has to compensate for each year's pool of potential failures shrinking.

The correct equation is already there, 1-e^-(n/114), and the correct number is .87%. And while that's only a percent off, it's lucky to only be a percent off. If MTBF was 113 years it would be .86 vs. .88, which is the difference between two digits of accuracy and one and a half digits of accuracy. Losing a quarter of your accuracy is not great!
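
To make those figures concrete, here's a minimal C sketch of the constant-failure-rate (exponential) model both comments are using; the only input is the advertised 1M-hour MTBF (compile with -lm):

  /* Failure probabilities implied by a 1M-hour MTBF under the
   * constant-failure-rate (exponential) model used above. */
  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double mtbf_years = 1e6 / (24.0 * 365.0);      /* ~114 years */

      double p5  = 1.0 - exp(-5.0 / mtbf_years);     /* 5-year failure prob */
      double afr = 1.0 - exp(-1.0 / mtbf_years);     /* annual failure rate */

      printf("MTBF:           %.1f years\n", mtbf_years);
      printf("5-year failure: %.2f%%\n", 100.0 * p5);        /* ~4.29% */
      printf("AFR (correct):  %.2f%%\n", 100.0 * afr);       /* ~0.87% */
      printf("p5/5 (naive):   %.2f%%\n", 100.0 * p5 / 5.0);  /* ~0.86% */
      return 0;
  }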


A lot of motherboards allow you to disable the LEDs. Nvidia cards usually allow this too, but the software is often laggy and doesn't work.

They shouldn't be there in the first place.

Last I checked I could only control nvidia GPU LEDs on Windows.

LEDs are the modern blinkenlights. They can be useful if the OS supports them properly (Linux is slowly but surely getting there), so that failure states or heavy load can be conveniently signaled in a way that doesn't impact actual, on-screen work.

I suddenly want to write a kdump helper which allows you to extract the crashed kernel memory via the connected LEDs.

I wouldn't mind components with a couple of tiny status LEDs, toggleable with a small dip switch or whatever. The issue is that manufacturers add these to simply get "rad-points."

Initially I complained about the RGB lighting on my mouse, but now it's used for CPU load and RAM usage, and it's pretty cool.

AsRock has "PRO" AM4 motherboards in black/grey colors.

http://in-win.com/ makes quality cases without windows.


#GAMER aesthetic requires loud designs and RGB RAINBOW all the things.

It's gross.


Apparently the “huge chunks of milled aluminum” are required because PCIe 4 takes a lot more power to run than PCIe 3 did...

It's worse if you get the inexpensive RGB stuff that doesn't link into some central control, and everything is pulsing on its own schedule. Source: on my last home server build I got a case with RGB and a power supply with RGB because it was the same cost with or without, and the thought of the server blinking away in the basement was amusing.

Yeah this RGB-everything trend in PC building is how I know I’m getting old.

It's silly to pretend this is a new thing; I and plenty of other teenagers (and non-teenagers) were putting car-kit neons in our custom-modded cases in 2000.

Thankfully my tastes have grown since then, but I can't blame kids today for doing the same thing I was doing (though I can grumble that they have fancy cases and led controllers and aio coolers where I had beige boxes and holesaws and custom wiring and fish tank gear).


It was tasteless and tacky then as well.

However, the difference is now that we are effectively being forced to pay for features we do not want and which are a waste of material resources and power. Previously, you would have needed to get those extra bits yourself.


Can we get an OEM with taste that just manufactures matte black GPUs and matte black motherboards?

I've seen only one tidy motherboard for these new chips, and it's from Asus.

https://www.asus.com/Motherboards/Pro-WS-X570-ACE/

Look at that, a normal looking VRM & chipset heatsink, and somehow they've resisted the urge to stick a fake plastic gun magazine onto the board!


TFA mentions the MSI X570-A Pro. They're just showing the gamer version because the pro one broke.

That VRM heatsink is the exact same as RGB on gamer boards - it's just there to look the part. It's not a good heatsink design at all. Its purpose is mostly to "look professional", not to function. If it were really function-first, it'd be thin fins with a bigger gap between them to aid airflow between the fins.

I'd prefer a completely regular black anodised heatsink without any "design" but that's the closest any current board gets to it.

Supermicro server boards are about as utilitarian as you can get, with off-the-shelf heatsinks slapped on them; it's only the consumer gear that wastes money on this stuff.


Not really. For some young gamers, the novelty has worn off.

Despite being gaudy, what you are seeing are legitimate arrays of heat spreaders and sinks... these tend to be on high-performance motherboards, which often support good overclocking capacity.

>legitimate arrays of heat spreaders and sinks

ah yes. The fabled colourful plastic heat sinks.

Why bother with copper when plastic with flashy graphics dissipates heat just as well.


When IBM called it (IIRC) light path diagnostics we were grateful: open box, see problem. In color!

Most of it is useless (I also prefer minimalist hardware). However the new x570 chipset for AMD motherboards requires active cooling as the TDP has increased from 5-7W to 11W in order to support PCIe 4.

https://www.anandtech.com/show/14161/the-amd-x570-motherboar...


It appeals to the same tastes which cause every component to have an X in the model name...

The thing I'd like to see: implementations in "ispc vs asm vs intrinsics_msvc vs intrinsics_clang vs intrinsics_gcc" on Intel and AMD CPUs. Those "Quake now has 300 FPS" things tell me exactly nothing. (read: I am more interested in testing new features)

I'd just like to say for anyone considering one of these new processors, there are some downsides I wasn't expecting. Overall I'm fine with them, but knowledge is power so here you go:

1. RDRAND returns -1 (and sets the carry bit, meaning it thinks -1 is a random number). Supposedly this will be fixed in a BIOS update (which makes me think it's an intended mode wrongly turned on... a backdoor?). A quick way to check your own chip is sketched after this list.

2. On Linux, CPU monitoring (e.g. temperature) is not available, as AMD hasn't published the specs for that yet.

3. While it has spectre "mitigations" (stibp, ibpb, ssbd), spectre is not yet dead (spectre_v1, spectre_v2, spec_store_bypass).

4. There is a bug/oddity in how it deals with %fs: https://www.phoronix.com/scan.php?page=news_item&px=DragonFl...

5. PSP/SEV ccp driver calls hang (on my machine at least, might be the mobo), but luckily Linux only waits a few seconds before giving up and continuing the boot process.

6. There have been reports of high idle voltages, but under linux w/o monitoring I have no idea if I'm affected or if the issue is real or significant.

For most use cases all of the above are minor and don't overshadow the price-performance success of these processors.
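
For point 1, a minimal sketch of a self-check (x86-64, GCC/Clang, compile with -mrdrnd); it just looks for the all-ones pattern the bug produces:

  /* Minimal RDRAND sanity check: the buggy firmware "succeeds"
   * (carry flag set) but always returns all-ones. */
  #include <immintrin.h>
  #include <stdio.h>

  int main(void) {
      unsigned long long r;
      for (int i = 0; i < 8; i++) {
          if (!_rdrand64_step(&r)) {   /* 0 => hardware says "no data" */
              puts("RDRAND reported failure (retry or give up)");
              return 1;
          }
          printf("rdrand: 0x%016llx\n", r);
          if (r != ~0ULL)
              return 0;                /* got a non -1 value: looks OK */
      }
      puts("RDRAND returned -1 eight times in a row: almost certainly buggy");
      return 1;
  }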


The high idle voltage was explained on reddit by AMD — The final word on idle voltages for 3rd Gen ryzen https://reddit.com/r/Amd/comments/cbls9g/the_final_word_on_i...

RDRAND fix from AMD is apparently out to motherboard manufacturers already. Hopefully it doesn't take them long to release an update.

https://www.phoronix.com/scan.php?page=news_item&px=AMD-Rele...


Thanks for the info; I'd be interested to see a /proc/cpuinfo (just one core) from Linux on one of these beasts. I've not seen any published.

  processor : 15
  vendor_id : AuthenticAMD
  cpu family : 23
  model  : 113
  model name : AMD Ryzen 7 3700X 8-Core Processor
  stepping : 0
  microcode : 0x8701013
  cpu MHz  : 1863.008
  cache size : 512 KB
  physical id : 0
  siblings : 16
  core id  : 7
  cpu cores : 8
  apicid  : 15
  initial apicid : 15
  fpu  : yes
  fpu_exception : yes
  cpuid level : 16
  wp  : yes
  flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
  bugs  : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass
  bogomips : 7189.14
  TLB size : 3072 4K pages
  clflush size : 64
  cache_alignment : 64
  address sizes : 43 bits physical, 48 bits virtual
  power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

Thanks! Lots of flags....

One advantage of AMD over Intel is AMD included SHA hardware acceleration on every Ryzen processor.

SHA hardware acceleration is more likely to be used in SSH and some deduplication setups, as well as git. It isn't that much of a plus.

A lot of software is compiled for very generic CPUs; I wouldn't be surprised if it's not used in many cases.

That's not how things are done. Practically every library that has optimized implementations selects the implementation at runtime based on CPUID.
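
As a concrete illustration (a sketch, not any particular library's code): the SHA extensions are reported in CPUID leaf 7, subleaf 0, EBX bit 29, and a dispatching library does roughly this once at startup:

  /* Sketch of runtime dispatch: probe CPUID once, then pick a code path. */
  #include <cpuid.h>
  #include <stdio.h>

  static int cpu_has_sha(void) {
      unsigned int eax, ebx, ecx, edx;
      if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
          return 0;
      return (ebx >> 29) & 1;   /* leaf 7, subleaf 0, EBX bit 29 = SHA */
  }

  int main(void) {
      if (cpu_has_sha())
          puts("dispatch: hardware SHA path");   /* e.g. SHA-NI on Ryzen */
      else
          puts("dispatch: generic software path");
      return 0;
  }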

Exactly - assuming optimized implementations in the libraries, which is not always the case, and binaries compiled exactly for the one CPU tend to perform better.

Don't know why the downvotes, do people think optimized implementations utilizing certain CPU features or combinations of them are somehow super common?

People think that because it is correct. Any serious crypto library (libsodium, OpenSSL, crypto++, cryptlib, ...) that comes to mind uses this approach, codecs use it, your C standard library uses it etc.

But there are many other libraries out there than just those, though.

"One advantage of AMD over Intel is AMD included SHA hardware acceleration on every Ryzen processor."

What do you think is more common? People rolling their own SHA implementation or people calling <insert crypto library>'s SHA256()? The former will barely get any sort of optimization, regardless of compiler flags, the latter will automatically use the SHA extensions.


You indeed have a point about the SHA extensions and people predominantly using a good crypto library. I guess I should've been more verbose: I meant that new extensions in general are a bit underutilized.

I actually wrote about this back in 2017 [0]; the general feeling in the community at the time was that it would be “right around the corner” from Intel, but that hasn't really happened.

[0]: https://neosmart.net/blog/2017/will-amds-ryzen-finally-bring...


>1. RDRAND returns -1 (and sets the carry bit meaning it thinks -1 is a random number). Supposedly this will be fixed in a BIOS update (which makes me think it's an intended mode wrongly turned on... a backdoor?)

Yet more evidence not to trust hardware random number generators.


I'm very surprised that something like that even makes it into production. You'd think processor manufacturers would have an extensive set of test cases, at least one for each instruction...

I remember reading somewhere that, for a long time, Intel's "test cases" basically consisted of all the operating systems they could find, going back to MS-DOS 1.0 and including some more obscure ones, as well as tons of other third-party software.

Unfortunately it seems this new generation no longer follows the "NEVER break backwards compatibility" principle that made the PC such a great platform, and as a result we're seeing hardware/software/firmware that's increasingly buggy and divergent. It used to be the case that if a new processor couldn't run existing software (even that which depended on undocumented but constant behaviour) unmodified, it was a fatal flaw.


They have way, way more than test cases. The logic synthesis, simulation, verification, and validation capabilities of top tier EDA tools are incredible.

The problem is as it always is with these kinds of things. They help assure that you can measure your model and your implementation of that model. They're much less capable of determining you've got the right model to begin with. That part is still up to the humans.

The state of commonplace capabilities for testing and verification of software is extremely far behind that of hardware, for a variety of economic, cultural, and use-case reasons.


Everything on a CPU is subject to heavy validation .. but I bet the random number generator is exempt, because by definition it's not deterministic. It's more like a peripheral "listening" to randomness.

https://www.amd.com/system/files/TechDocs/amd-random-number-...

> "The RNG uses 16 separate ring oscillator chains as a noise source. Each chain consists of a different prime number of inverters in a free-running configuration which capture natural clock jitter"

So it's (deliberately) dependent on natural manufacturing variation in the silicon, and in any deterministic simulator won't produce random numbers. It also takes a while to start; it's possible that the startup conditions are bad such that "a while" turns out to be "forever".


Probably still worth checking that it doesn't return -1 every time.

An infinite sequence of -1's is a valid random sequence, just saying.

This should teach systemd developers and other idiots who blindly use rdrand a lesson not to trust it.

Have you read systemd's code comments regarding rdrand? It's quite a reasonable explanation

I have just read it. Well, I have to say it's quite reasonable.

https://github.com/systemd/systemd/blob/f90bcf8679e870878829...


"In such an environment using getrandom() synchronously means we'd block the entire system boot-up until the pool is initialized, i.e. very long."

How long is very long? 100ms, or 10s?


"It is very common that initialization of the random pool takes a longer time (up to many minutes..."

And that's my experience, too. It's one reason to install something like haveged. I had boot delays in the range of minutes without it.


Debian buster VM on an old CPU without RDRAND takes about 5 minutes for the kernel crng init to complete and for sshd to start accepting connections. Stracing the sshd process shows that it's indeed blocked on `getrandom()`.

In some ways, it's very much a WONTFIX, because something like sshd might actually need the entropy. I'm glad that systemd has proper workarounds implemented so that the machine actually boots out of the box, though.
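
To see which side of this your machine is on, a minimal sketch (Linux, glibc 2.25+) that probes the pool state without blocking:

  /* Probe whether the kernel entropy pool is initialized, without
   * blocking, using getrandom(2) with GRND_NONBLOCK. */
  #include <sys/random.h>
  #include <errno.h>
  #include <stdio.h>

  int main(void) {
      unsigned char buf[16];
      ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);

      if (n == (ssize_t)sizeof buf)
          puts("pool initialized: a blocking getrandom() returns immediately");
      else if (n < 0 && errno == EAGAIN)
          puts("pool NOT initialized: sshd et al. would block here");
      else
          perror("getrandom");
      return 0;
  }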


> So, you are a "security researcher", and you wonder why we bother

Oh, the art of insulting the audience in the first line of a message intended for them. And it ends up being the person you'd expect to write it.


First you call people idiots and then you say it’s reasonable without apology. Classic.

I still believe we should never rely on rdrand, because we cannot trust the hardware RNG implementation. For me, anybody who uses rdrand is an idiot, unless it's a video game. I said "reasonable" about their argument; I do not agree with them.

Can you please follow the site guidelines when posting here?

https://news.ycombinator.com/newsguidelines.html


Zen 1 (Naples) had all kinds of problems, including that frequency scaling would hang the machine. It seems wise not to buy the VERY first Zen 2 rig.

> don't overshadow the price-performance success of these processors.

The choice is easy: pay more for Intel and 'It Just Works (tm)', or pay less for AMD and fight with issues.

And it's great.


> 'It Just Works

Well, apart from the major cloud-breaking side-channel security issues.


I was wrong about point 2 being a spec issue. Linux just needed to be told "yes, this is one of those": https://github.com/groeck/k10temp/issues/12

Eagerly awaiting 3rd-gen Threadripper. That CPU will completely obliterate Intel in the HEDT/workstation market. Best I can tell, short of moving AVX512 (along with, e.g., VNNI) into their HEDT chips (which is not going to happen), Intel has nothing to counter this threat.

Skylake-X already has AVX512 and Cascade Lake-X will have VNNI. Few workstation apps use them, though.

Looks like you're right. The pricing on those chips is pretty outrageous at the moment though.

MKL and MKL-DNN take advantage of AVX512 and VNNI automatically, if available. Lots of scientific software (basically all of it that cares about CPU perf) use MKL and MKL-DNN.


Simply having AVX512 support doesn't really tell you the whole story, though. You'd want to actually benchmark it and see how much it's helping and then how it compares to the competition. AVX512 comes with an insane clock speed penalty, so while it can still put up some really impressive numbers, mixed workloads can also end up not any faster (or even slower).

Yep, I've heard this as well. More often than not it's faster to avoid AVX512 on recent Xeons. I don't think the same is true for Xeon Phi, but those are going away.

Those are significant single thread gains. It is concerning to me that BIOS firmware has such a large influence over CPU performance. Buy from motherboard manufacturers that are known to emphasize BIOS quality and performance.

There was also more performance left on the table in the initial AnandTech tests on 7/7; conservative DDR4-3200 JEDEC memory timings were used throughout. More aggressive memory profiles will improve performance, and AMD CPUs are known to benefit greatly from better RAM performance.

So despite the already good results of Ryzen Gen 3 we can expect better performance as the platform is optimized.


> It is concerning to me that BIOS firmware has such a large influence over CPU performance.

This is probably related to the AGESA blob which sets up the power and frequency controls. These come from AMD, and as long as your vendor is reasonably prompt, you should be ok. When other vendors update for AGESA, they'll include that in the notes, so you can pester your vendor for an update.


> Buy from motherboard manufacturers that are known to emphasize BIOS quality and performance.

Any suggestions, and reasoning why?


They just wrote the reasoning why: because the firmware affects the system so much.

As for the suggestion, that’s what I’ve been struggling with too. The gaming motherboards popular for self assembly seem to be getting better at updating the bios, but the manuals I’ve read still recommend against updating the firmware. Which makes me guess you’d want to wait for the PlayStation 5 or the next Xbox, or a PC vendor that is associated with fwupd, as that’s likely a good indicator that they care about firmware.


Sorry, by reasoning I meant the reason why you consider a particular vendor worthy of being associated with quality and performance. The internet currently seems lacking in anything approaching a longitudinal study of hardware vendors - it’s all a sea of opinions.

Suggestion: purchase from manufacturers that have shown they are responsive to enthusiasts. Are representatives actively monitoring customer feedback? Otherwise you may find yourself waiting long periods (or forever) on an indifferent manufacturer.

Why: Gen 3 Ryzen is a very frequency-agile design. Performance depends strongly on how rapidly core frequency is adapted to load. Frequency is governed by firmware, so optimal firmware is crucial.

Someone else speculated that the relevant firmware is the AMD-provided "AGESA blob." Whatever it's called, you're relying on the board vendor diligently bundling the latest version in a timely manner and not dragging their feet when there are issues with newer versions. Also, power supply is a function of board design, and I wouldn't be surprised to learn of feedback mechanisms that afford the board designer some degree of control; given the cost of high-performance capacitors, this actually seems pretty likely. If so, then merely updating the blob may not be enough to ensure optimum results.

In any case, I suspect the firmware that shipped on 7/7 is far from optimal given the pressure to deliver, so pursuing newer firmware will likely have significant ongoing payoffs.


How's the ECC RAM support with the 3000 series?

Like previous Ryzen lineups, the entire line has unbuffered ECC support. Whether or not it's officially supported will depend mostly on the motherboard & BIOS, and whether the vendor tested & validated it.

At this point the Asus Pro WS X570-Ace ( https://www.asus.com/Motherboards/Pro-WS-X570-ACE/ ) appears to be the only x570 board with certified ECC support.
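
One rough way to verify on Linux that ECC is actually active end-to-end (not just "supported") is to check whether the kernel's EDAC subsystem registered a memory controller; a sketch, assuming the standard EDAC sysfs layout and the amd64_edac driver:

  /* Rough ECC check on Linux: if EDAC registered a memory controller,
   * ECC is being actively used; absence is ambiguous (no ECC, or the
   * amd64_edac driver simply isn't loaded). */
  #include <stdio.h>
  #include <sys/stat.h>

  int main(void) {
      struct stat st;
      if (stat("/sys/devices/system/edac/mc/mc0", &st) == 0 && S_ISDIR(st.st_mode)) {
          puts("EDAC memory controller present: ECC appears active");
          return 0;
      }
      puts("no EDAC controller: ECC off, unsupported, or driver not loaded");
      return 1;
  }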


I'm wary of buying ASUS due to past personal experiences (defective Z97 motherboard and GPUs) and their Canadian RMA service being a dumpster fire.

The Gigabyte X570 AORUS XTREME (as another example) says in the specs

> Support for ECC Un-buffered DIMM 1Rx8/2Rx8 memory modules*

* ECC is only supported with AMD Ryzen™ PRO-series CPU.

Hmm.


How long will Asus keep updating that board with firmware updates? In my experience, Asus is horrible about releasing updates after a couple of years.

Asus is generally pretty good at updating the BIOS. My Asus X370-F gaming has already been updated with support for Ryzen 3rd gen.

I have multiple Kaby Lake boards which haven't received any BIOS updates in over a year. It depends on the model line, and as far as I can tell they will not tell you how long support lasts before you buy one.

What are you expecting an updated BIOS to do for that platform? Kaby Lake was the last CPU line for that chipset series since Coffee Lake shuffled some pins around. So what are you wanting updated that Asus hasn't provided?

There are issues with the BIOS on one of the systems randomly changing the M.2 PCIe speed between 4x and 2x. It only exists in the most recent version of the BIOS and doesn't happen on older versions.

There are also other issues that should be fixed but haven't been.


For what it's worth, Asus has provided excellent support for the PRIME X370-PRO motherboard. It was a bit of a gamble when I purchased a PowerSpec Ryzen system from Micro Center, but Asus has released regular BIOS updates, including support for higher-speed memory (e.g. I'm running 32 GB @ 3200MHz, Corsair Vengeance chips).

My only complaint would be the lack of detailed change logs, which are often only a single bullet point.


All ASRock AMD boards (or nearly all, but I haven't found any exceptions) support ECC memory.

I'm not sure about Asus' X570 line since they market a board with explicit ECC support, but their lower-end and mid-tier X470 boards supported ECC. Their higher end boards -- curiously -- did not.

You can generally tell if a board supports ECC by looking at the board's manual. If it supports ECC, it will be mentioned.


They are just firmware today, not BIOSes, right?

I'm still waiting for the fix to be pushed for the rdrand issue that breaks systemd.


What's the inter-thread latency, i.e. between cores, on these beasts?

Here's an inter-core latency chart that includes both the 8- and 12-core Zen 2's, as well as a comparison with Zen+ and Intel's latest (CFL-R): https://www.reddit.com/r/Amd/comments/calue1/intercore_data_...

Intra-CCX latency is now awesome, and all inter-CCX (including intra-CCD) latency is not bad (and goes through IF) - what's interesting is that intra vs inter-CCD latency seems to be within a ns or two. Pretty impressive (considering it needs to bounce through the cIOD)!
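
Not the methodology behind that chart, but for a crude DIY number you can pin two threads to chosen cores and ping-pong an atomic flag; a sketch (Linux, compile with -pthread), where CORE_A/CORE_B are placeholders you'd adjust to land within one CCX, across CCXs, or across CCDs:

  /* Crude inter-core latency probe: two pinned threads ping-pong an
   * atomic flag; the average round trip approximates two cache-line hops. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <time.h>

  #define ITERS 1000000
  #define CORE_A 0   /* placeholder core IDs; pick same/different CCX or CCD */
  #define CORE_B 2

  static _Atomic int flag = 0;

  static void pin(int cpu) {
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(cpu, &set);
      pthread_setaffinity_np(pthread_self(), sizeof set, &set);
  }

  static void *ponger(void *arg) {
      (void)arg;
      pin(CORE_B);
      for (int i = 0; i < ITERS; i++) {
          while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
              ;                              /* spin until pinged */
          atomic_store_explicit(&flag, 0, memory_order_release);
      }
      return NULL;
  }

  int main(void) {
      pthread_t t;
      struct timespec a, b;

      pin(CORE_A);
      pthread_create(&t, NULL, ponger, NULL);

      clock_gettime(CLOCK_MONOTONIC, &a);
      for (int i = 0; i < ITERS; i++) {
          atomic_store_explicit(&flag, 1, memory_order_release);
          while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
              ;                              /* spin until ponged back */
      }
      clock_gettime(CLOCK_MONOTONIC, &b);
      pthread_join(t, NULL);

      double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
      printf("cores %d<->%d: %.1f ns avg round trip\n", CORE_A, CORE_B, ns / ITERS);
      return 0;
  }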


Thank you kindly!
