I put off upgrading my 3-year-old laptop last year, waiting for the 7nm Zen and 7nm Nvidia parts. So far it looks like it was worth the wait. Now it's Nvidia's turn.
Because in the last few years progress has been slow and mostly incremental, with major leaps every 3-5 years.
As long as you can wait and upgrade when that major leap happens, you're good for a few years.
That's why we follow tech news, leaks and rumours: to estimate when the next major leap will be and not get suckered into buying hardware that will become obsolete too quickly.
That's why HW manufacturers have NDAs in place: to keep consumers buying the current stock instead of holding out for the next gen.
Valid question. My answer would be: because even though AMD made things quite interesting during 2019, I don't foresee them being able to go much further than they already have.
Example: PCIe 4.0 has insane bandwidth allowances. I don't see any SSDs ever going beyond PCIe 4.0 bandwidth. Other iterative improvements like a few dozen MHz more in CPUs, a few dozen more cores on the GPU, or a few hundred more MHz in RAM are usable without their potential being wasted in the new chipsets -- although I am not an expert, it does seem that way after several reads of their parameters.
And even if next-gen stuff with bigger improvements is coming, I don't feel we can go much higher before these things hit big diminishing returns in noticeable performance for all but very specialised programs. Currently I am using a workstation with a 10-core 4.0GHz CPU / 64GB 2666MHz DDR4 ECC / an NVMe SSD at ~2.8 GB/s. I seriously cannot make any programming software even utilise that SSD to full capacity -- only Postgres manages to saturate it to 50% of its potential when doing `pg_restore`.
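(If you want to sanity-check that on your own box, below is the kind of quick-and-dirty script I'd use to watch disk throughput while `pg_restore` runs in another terminal. The psutil dependency and the "nvme0n1" device name are assumptions; adjust for your setup.)

```python
# Rough sketch: print NVMe read/write throughput once per second while
# something like `pg_restore` runs in another terminal. Assumes the psutil
# package is installed and the SSD shows up as "nvme0n1" (check `lsblk`).
import time
import psutil

DEVICE = "nvme0n1"  # hypothetical device name for this example

prev = psutil.disk_io_counters(perdisk=True)[DEVICE]
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters(perdisk=True)[DEVICE]
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"read {read_mb:8.1f} MB/s   write {write_mb:8.1f} MB/s")
    prev = cur
```

(Ctrl-C to stop it; compare the numbers against what the drive is rated for.)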
Granted, I haven't run any deep learning stuff and I don't intend to. I am focusing on everyday and professional-but-not-strongly-specialised software. And there, I feel, stopping at motherboards with PCIe 4.0, their appropriate AMD CPUs, uber-fast SSDs and the fastest RAM you can find today will be more than enough for like 5 years.
Zen 4 will be PCIe 5.0 and DDR5. Unless Zen 4 is a gigantic leap forward in performance rather than just an iteration, the cost of jumping to those technologies will be quite high.
So Zen 3, PCIe 4.0 and DDR4 will likely be the sweet spot for desktop for a while.
(I do wish I were wrong and they drive down the cost of PCIe 5.0 and DDR5 faster, but history has shown those tend to take 2-3 years to pick up the pace.)
Additionally, nothing stops you -- except money -- from having a home server much more powerful than your MacBook and doing your real development on that, using the MacBook as a thin client.
i think it's basically consumerism. i'm running a processor that's 10 years old with some RAM and SSD upgrades throughout the years. it's perfectly fine for daily golang/java/scala/haskell work.
a coworker of mine has a generation 2 ryzen which runs games like factorio perfectly fine - big setups (according to him) are pretty smooth. no GPU. he says he hasn't found any games that can't run fairly well on it yet.
it's funny because desktops are now basically the thin clients we thought we'd all be using in a futuristic tech setup, with the actual work being done on "the cloud". your desktop isn't fast enough to do any real "work" other than some unit tests, compiling, etc.
Any reason why not wait for AMD's RDNA 2.0? Or do you need CUDA?
I think my problem with Nvidia is they are (for now) hard to loathe. They are like the old Intel, where their constant innovation just baffles me, and there is nothing but respect (apart from their part in the dispute with Apple). But I have been an ATI fan since the early days. I still have an ATI Mach and Rage somewhere in my room.
It may take forever for a vendor to make a mini PC like the NUC but with comparable AMD CPUs. I assume there's a chicken-and-egg problem here, because the market is small and the Intel NUC is already well established.
Is there a lot of advantage in the ultra compact form factor when a GPU is in play? I feel like you'd end up with all the same nuisances as trying to game on a laptop— especially thermal problems, lack of upgradability, and lower-powered "mobile" CPU and GPU options.
Quite apart from that, the NUCs are not exactly cheap. Gaming-ready ones can be $1000+, for which you can build a very decently specced MiniITX rig around a real desktop graphics card.
One of my recent MacBooks with i7-8569U is quite capable of gaming in 1280x800 (it looks good as it matches retina downscaling) in the usual Feral/Aspyr ports of games (on Mojave ofc, Catalina killed them off). AMD APUs should be even better, likely allowing 1080p gaming in mid-to-high details which could be good enough for many casual gamers.
Oh yeah, it's certainly possible. I'm on a Dell XPS 9570 with a discrete GPU and I can play most stuff I've tried at the native 1080p. That's mostly indies and EGS freebies, but some of them definitely still give the GPU a workout, and in those cases I absolutely experience screaming fans, frame drops, etc, even when giving lots of clearance to the fan inlets.
I get the impression that the GPU in this computer is much more intended for CAD, video processing, light AI work, etc than the kind of 100% duty cycle loading that running a game demands.
> Is there a lot of advantage in the ultra compact form factor when a GPU is in play?
Yes - most people don't have a lot of space for gadgets and things. Look at the advertising for Google's Stadia - it's partly that you don't need a console taking up space in your living room.
Most people never upgrade, so they don't care about that.
A slightly larger box that can fit a discrete GPU is suddenly "gadgets"?
The core reality is that if you don't have much space, you don't have space for a barely functioning lump of a machine that only does Excel spreadsheets and plays games only slightly better than a smartphone.
(oh but the useless computer does instagram nice in a staged apartment/suburban mcmansion)
Most people should probably start with something that might grow with them a little better so that every time they try something new it isn't a stuttery (thermal throttling, no gpu) awful experience.
I think the upside down thinking you are displaying here is how Apple has managed to completely hollow out their gaming and technical workstation sales...
They asked why people prefer smaller boxes. People prefer smaller boxes because they don't have much room in their houses. This is reality supported by for example Google's advertising approaches, which I guess is supported by data.
> (oh but the useless computer does instagram nice in a staged apartment/suburban mcmansion)
I don't know why you need to be snarky like this about what other people value.
I think you are just flat wrong about the physical reality of just about everyone.
If you and I teleported into the homes of large numbers of people all over the globe (with a tape measure in hand), we would find that the extra space a discrete GPU requires (roughly a paperback book) exists for just about everyone.
I believe you don't fully believe what you are saying, but instead are repeating the "corporate design" logic that allows companies to put 5-year-old laptop CPUs in "nice" cases and get 80% margins out of the sales.
Not actual practical design at all, just corporate self-serving nonsense.
I bought my dad an 8th gen NUC, attached it to the backplate of a display, and now only the wireless keyboard/mouse indicates the presence of a computer. It saves a lot of desk space. It has the same performance as a 13" MacBook Pro; who needs more, apart from developers?
Well... gamers. That's what started this whole thread: someone asking for a NUC-sized AMD machine to use as a SteamBox.
My argument is that that product category doesn't really make sense. You either want a gaming machine in which case it's going under a TV and you'll get way better bang for your buck in the Mini ITX form factor, or you don't want a gaming machine in which case you probably just want a laptop, or an iMac replacement (but either way, no discrete graphics).
Not a huge fan of 3D printed cases since they will probably not do super great with the heat over time. If you like this form factor, check out the mini-box M350 or the Velka Velcase 3.
The Velcase 3 has the interesting advantage that it can just use straight-up discrete GPUs (mITX style shorty cards) so you can just use a normal 3700X and say a 2070 or similar.
Yes, the lack of a socketed version of the new APUs was kind of disappointing, as was the lack of a B550 launch to support PCIe 4.0 on a cheaper board without a chipset fan.
Hopefully these follow sometime in the future, but we may have to wait for Computex.
>> Not a huge fan of 3D printed cases since they will probably not do super great with the heat over time. If you like this form factor, check out the mini-box M350 or the Velka Velcase 3.
I can see that. The original Mellori has been running over a year with no issues. The plastic is mostly 3mm thick, and the cooling solution covers most of the inside surface. It's my development machine, so it doesn't work as hard as gaming would push it. Compiling code will max all 8 threads for a minute or two, but the fan doesn't even spin up all the way.
>> the lack of a B550 launch to support PCIe 4.0 on a cheaper board without a chipset fan
Yeah, I was looking at the Aorus X570. It looks like the fan is attached to an NVMe cooler. I could probably remove that and put a passive heat sink on. The case has good airflow right across that area when there's no GFX card in the slot - air has to go around the edges of the board and out the bottom. With an APU the only use for PCIe 4.0 is the M.2 slots, which are pretty fast even with 3.0. I'm also considering the Aorus B450I as a much cheaper alternative.
I've also considered having it printed from metal.
The article mentions it, but I remain consistently frustrated by my marriage to NVIDIA at this point. As a heavy user of image processing, image mosaicing, TensorFlow, et al., I should invest in NVIDIA stock.
I have been waiting for mobile 7nm Zen 2 to get a new laptop. However, it seems most of the newly introduced laptops with Zen 2 are gimped compared to their Intel counterparts. Some AMD laptops from Lenovo don't have high-res display options (Intel versions do) or 16GB memory options (Intel versions do). The Acer Swift 3 with Intel comes with a high-res 3:2 display; the AMD version comes with a 16:9 FHD one. I hope the next generation Thinkpad line will be more interesting.
They announced these chips 2 days ago, there's still plenty of time for nicer laptops to get announced. Hopefully the Matebook D gets updated with these at least.
You are right, we need a bit of patience. Reviews of last generation Thinkpads with AMD have been very positive. I hope the next generation will be even better. I would love to see one with 16:10 screen, but that's rather unlikely.
No, you remember wrong: 5:4 was extremely uncommon. 1280x1024 is the only 5:4 resolution I have ever seen in the wild, compared to 320x240, 640x480, 800x600 and 1024x768.
(Does anyone else fondly remember the short moment in time when all laptops and all projectors had 1024x768 as their native resolution and you could expect presentations to just work?)
Microsoft at least makes a 3:2 AMD laptop, hopefully they'll update it with these chips. Too bad their hardware is generally not good at running Linux.
I think the article mostly means getting the most from the CPU, specifically at idle, via things like BIOS/OS power management (as opposed to purchase options).
I wouldn't hold my breath on that. More likely they'll switch to their A-series ARM processors at some point since the last thing they probably want to do is extend their dependence on x86.
> However, it seems most of the newly introduced laptops with Zen 2 are gimped compared to Intel counterparts
The Asus Zephyrus G14 is AMD-only and is their 14" flagship, which they are calling the most powerful 14" laptop. Complete with your choice of either a 1080p 120Hz panel or a WQHD (probably 60Hz) one. Both of which are Pantone validated. And 32GB RAM.
I really hope they find a way to include a compatible Thunderbolt 3 experience. Using a Thunderbolt 3 dock has become a must for me at this point; I'd hate to go back to multiple cables and less than 2x 4K/60.
I wonder if USB4 Devices will be actually really compatible with current Thunderbolt 3 Devices, or if it's a "technically yes but not really" situation.
> ...also handily nosed past Intel's most recent full-on gaming CPU, the i7-9700K, on both content creation and physics engine benchmarks, despite being a mobile form factor with under half the TDP.
Your standard disclaimer about nm when referring to modern chips:
> Most recently, due to various marketing and discrepancies among foundries, the number itself has lost the exact meaning it once held. Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes. Since around 2017 node names have been entirely overtaken by marketing with some leading-edge foundries using node names ambiguously to represent slightly modified processes. Additionally, the size, density, and performance of the transistors among foundries no longer matches between foundries. For example, Intel's 10 nm is comparable to foundries 7 nm while Intel's 7 nm is comparable to foundries 5 nm.
Everyone can check the frequency of the CPU, so the one on the box mostly matches the one you'll get, but no consumer can put the silicon under an electron microscope, measure all the transistor features, and feel cheated that they don't match the advertised size.
Outside techies, most consumers don't know wtf nm is and how that correlates with performance, maybe some might even think that more nm is better, but on the other hand, most have a vague idea that more MHz is better.
That's one of the reasons why consumer-grade tech emphasizes totally different specs than the professional stuff. If you look at the Ryzen box you won't find many specs, but you'll find a VR logo. :)
Once, clock speed 'mattered' and was used in marketing to indicate a CPU that did work faster. There was a period of time when Hz was still used by marketing, yet it had stopped reflecting how much work the CPU actually performed in a given amount of time. The public caught on that Hz mattered less.
Now they're hoping nm matters in marketing (not as 'this is a faster chip' but as 'this chip uses better technology than TheOtherGuys/ThePreviousGeneration/WhatEver') for at least a few moments, before the marketing team has to come up with some other differentiator.
"Do not compare CPUs by process technology or frequency. If you need to compare, use benchmarks that show performance and efficiency and even better, do it on your own application because the results vary between applications."
How do I compare CPUs on my own application without buying them? Is there some kind of daily-billed cloud computing service where I can choose any CPU and run my app on it?
Just use benchmarks that are published online comparing existing applications on various hardware. The process node is basically meaningless to the user.
It will depend on the user. Gamers, for example, will not be interested in having their systems limp along to get a bit of extra security when they already pay through the nose for an extra couple of percent of performance.
> How do I compare CPUs on my own application without buying them? Is there some kind of daily-billed cloud computing service where I can choose any CPU and run my app on it?
Sort of. If you look at EC2 instance types [1], about half of them list a specific CPU. Same thing with Microsoft's Azure instances [2].
There are a limited number of CPUs available - typically Xeons or AMD equivalents - so if you want data on consumer CPUs, the best you can do is try your app out in the cloud, then look at published benchmarks and try to estimate what your expected performance should be.
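(As a rough sketch, not a rigorous benchmark: something like the script below is what I'd copy onto a couple of instance types and onto my own machine to compare. The `workload()` function here is a placeholder; swap in a representative slice of whatever your application actually spends its time on.)

```python
# Minimal sketch: time a representative piece of your own workload so the same
# script can be run on different cloud instance types and the results compared.
import platform
import statistics
import time

def workload():
    # Placeholder for whatever your application actually does.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

def bench(runs=5):
    # Run the workload several times and collect wall-clock timings.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    results = bench()
    print(platform.processor() or platform.machine())
    print(f"median {statistics.median(results):.3f}s  min {min(results):.3f}s")
```

It obviously won't capture memory bandwidth or I/O behaviour, which is why plugging in your real code beats any synthetic loop.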
This closely matches what happened to processor naming. It used to be that any two (CISC) CPUs with the same frequency were about on par with one another, so we just called processors by their frequency. As soon as processors started adding things like SSE, though, that went out the window, since now “one cycle” could do arbitrary amounts of work, and also consume arbitrary amounts of electricity. So now we instead group processors by their manufacturer and model, and compare processors by “generation”, which ends up being a loose count of each time each vendor has done a large redesign that enabled efficiency increases beyond “just” a process-node shrink.
So: is there any analogous new convention for naming process nodes, now that you can’t just refer to them by their size? If there’s a dropdown select box on some form internal to e.g. Apple, where an engineer specifies what process-node of what fab they’d like to use to print a new chip—and said list has had some thought put into making it intuitively unambiguous—then what would the items in that list look like?
That was never the case. Go back even to 386 days and AMD or Cyrix were not always on par with Intel clock for clock. This only really flipped when Athlon came out.
You also had to think about DX vs SX, whether you got L2 cache, how a 5x86 is really a 486, what an Overdrive chip is, why a PR166 CPU runs at 100MHz, what a Celeron is, and why the 300A model is so much faster than the original 300. Etc etc etc.
Also, I assume the point of such commentary is really to emphasize that "Intel isn't really that far behind."
But it still is quite far behind, considering TSMC is on its second-generation "7nm" (Intel-10nm-equivalent) node, while Intel is basically still in "test mode" with its 10nm process and is nowhere near mass production with it.
Thanks for this. Do you happen to know which, if any, of the announced 7nm chips/processes are actually small enough to involve the problems of 7nm gates?
Obviously the names don't carry much weight, and speed can be checked directly instead of inferred from size, but I'm still curious about the actual state of the production issues.
I'll probably be downvoted, and sorry for the off-topic, but I miss those days when you could approximate computing power by looking at the processor's model number: 386SX-33, 386DX-40, 486DX2-66, and so on... Right now it's all becoming gibberish to me... 1075, 4800u, 3950, 6600U. I don't even remember what CPU I have. All the romance is gone!
I suspect that's been deliberate at least on Intel's part since their year over year changes haven't been compelling for the better part of a decade. So their model numbers indicate 'new' without really quantifying 'improved' since that hasn't been a very good story for a while now.
That's just fundamental. In the golden age of VLSI scaling, from 1985 to 2003[1] or so, we really were seeing that doubling of transistor density with every 1.5-2 year generation, and the shrinking transistors were getting faster more or less linearly with size, leading to a quadratic improvement over time. Those days are almost two decades in the past, and they aren't coming back, ever.
We happen to be seeing a bump right now due to AMD migrating across two process nodes at once (and Intel having fallen a little behind), but that's just an instantaneous thing. The days of the 486 are still gone.
[1] My own dates. I'm bounding this roughly by the point where EDA tools caught up with process improvements (allowing straightforward die shrinks and scaling of logic) and the end of "free" frequency scaling due to thermal and power limits. Someone else probably has a different definition. Fight me.
It's gotten very complicated especially since a lot of the value add of the newer processors is that they're more power-efficient and not necessarily much more powerful than the previous version.
>I don't even remember what CPU do I have. All the romance is gone!
I don't particularly remember feeling romantic about processor naming schemes; I do remember certain processors for being workhorses. However, I would agree that today, one needs to acquire an informed view to discern processor design and performance, variously across different architectures and manufacturers, over a period of time.
Numbers like 8088 vs 80286 vs 80386DX vs 80386SX (which was basically a 286 iirc) were confusing to people back then too. :) Then you had the 486SX and DX and Cyrix introducing things like the 486DLC...
Haha, that's because you're older now. I remember that feeling too, and I think the newer generation that sits on /r/buildapc or /r/pcgaming are definitely still feeling that love.
Sorry if this is off-topic but how would this chip compare to an ARM design that could handle 16 threads? Would thermal dissipation, power consumption, and performance be roughly the same or would they be significantly different? I want to be excited about this chip but thought that this was supposed to be the year we all get ARM-powered laptops/Chromebooks.
There are several popular ARM cores around these days. You would need to pick a specific processor family, at least.
That said, I have seen plenty of ARM-powered Chromebooks and I don't think any of them had more than four CPU cores or were known for high performance. It's not impossible to make a high-performance ARM Chromebook, but it's not the market people are building for. High-performance x86 Chromebooks happen because once you've built for a dual-core mainstream x86 laptop processor, you can also easily solder in a higher-performance processor with the same footprint.
You can't expect comparable core for core performance as the architectures have different objectives. Thermals, power consumption and performance would all be significantly lower on existing mainstream ARM chips.
ARM vs. x86 is just the instruction set. The instruction set plays almost no role in power/performance these days. Maybe once, in a very distant past, when an x86 decoder was a significant chunk of silicon, but not these days.
So instead you'd need to ask how does Zen 2 compare to Cortex A76 or something like that. Which we don't know since we don't have the 15W Zen 2 mobile parts to benchmark yet.
> Intel focused on AI acceleration—but AMD went unapologetically hard on gaming.
I don't think you can go wrong focusing on gaming. This is my opinion of how Microsoft succeeded so well. The network effects push all of computing forward.
Does this not all just boil down to marketing? GPUs turned out to be an outsized boon to ML performance, but they got there by fulfilling the needs of gamers.
I feel like this just boils down to what we've known for a while: Intel is catering to an enterprise crowd while AMD is catering to the average PC enthusiast.
Stadia doesn't run on fairy dust. It still uses CPUs & GPUs, and the more people use Stadia the more it will need. And since Stadia uses AMD GPUs I don't think AMD would be that sad about selling more super high margin enterprise GPUs to Google.
And of course you still need that local client to play it, which might as well be an AMD-powered ultrabook. At which point Stadia just resulted in AMD selling up to 3 products instead of just 1 (GPU & CPU in the server + APU in the laptop)
I would not say “stuck”. They have good reason for telling Nvidia to fuck off. Maybe you remember some time ago when Nvidia graphics cards didn’t meet the spec Nvidia gave to Apple, and thus overheated and de-soldered themselves from MacBook Pros. This was not helped by the fact that Nvidia insisted on integrating itself into the northbridge of the motherboard, meaning a card failure was fatal in the weirdest of ways.
Or the other time when Nvidia didn’t ship their graphics chip to them with support for the old-school Cinema Display, so they sat in warehouses for 18 months while Nvidia produced the part, and only then could Apple start integrating.
Then Nvidia tried to sue Samsung+Qualcomm when the iPhone came out (because apple used GPUs from these companies) claiming patent infringement.
Then in 2016 Apple said “no” to Nvidia on putting their GPUs into their laptops due to performance-per-watt concerns. And apparently that was not a healthy conversation.
Nvidia is a bully, and Apple is a big player. I will not shed a tear over 10% performance in predominantly game workloads.
Intel on the other hand has an edge with thunderbolt right now, but that’s an open standard and Apple could produce an AMD machine with thunderbolt now. I would suspect that they’ll live with the lesser performance until ARM is possible.
>Then Nvidia tried to sue Samsung+Qualcomm when the iPhone came out (because apple used GPUs from these companies) claiming patent infringement.
Apple has always been using PowerVR for the iPhone since day 1. Even the recent so-called self-designed GPUs are pretty much 80% PowerVR, with all the PowerVR proprietary and patented features.