AMD’s third shoe drops at CES 2020: 7nm Zen 2 mobile CPUs (arstechnica.com)
207 points by vo2maxer on 2020-01-08 | 125 comments




I put off upgrading my 3-year-old laptop last year, waiting for the 7nm Zen and 7nm Nvidia parts. So far it looks like it was worth the wait. Now it's Nvidia's turn.

But newer/better CPUs and GPUs will appear next year too, right? Why not wait until then?

Because in the last few years progress has been slow and mostly incremental with major leaps every 3-5 years.

As long as you can wait and upgrade when that major leap happens you're good for a few years.

That's why we follow tech news, leaks and rumours: to estimate when the next major leap will be and avoid being suckered into buying hardware that will become obsolete too quickly.

That's why HW manufacturers have NDAs in place: to keep consumers buying the current stock instead of holding out for the next gen.


Valid question. My answer would be: because even though AMD made things quite interesting during 2019, I don't foresee them being able to go much further than what they did now.

Example: PCIe 4.0 has insane bandwidth allowances. I don't see any SSDs ever going beyond PCIe 4.0 bandwidth. Other iterative improvements -- a few dozen MHz more on the CPU, a few dozen more cores on the GPU, a few hundred more MHz on the RAM -- are usable without their potential being wasted on the new chipsets. Although I am not an expert, it does seem that way after several reads of their parameters.

And even if next-gen stuff with bigger improvements is coming, I don't feel we can go much higher before these things hit big diminishing returns in noticeable performance in all but very specialised programs. Currently I am using a workstation with a 10-core 4.0GHz CPU / 64GB 2666MHz DDR4 ECC / NVMe SSD at ~2.8 GB/s. I seriously cannot get any programming software to utilise that SSD to full capacity -- only Postgres manages to saturate it to 50% of its potential when doing `pg_restore`.
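
For what it's worth, the crude way I sanity-check that is a timed sequential read (a rough Python sketch; the file path is hypothetical, so point it at any multi-GB file on the NVMe drive, and remember the OS page cache will inflate a repeat run):

    # Crude sequential-read throughput check for an NVMe drive.
    # TEST_FILE is a placeholder: use any large (multi-GB) file on the drive.
    # A second run will be inflated by the OS page cache, so use a fresh
    # file if you want an honest number.
    import time

    TEST_FILE = "/data/big_test_file.bin"   # hypothetical path, adjust
    CHUNK = 8 * 1024 * 1024                 # 8 MiB reads

    total = 0
    t0 = time.perf_counter()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - t0
    print(f"{total / elapsed / 1e9:.2f} GB/s over {total / 1e9:.1f} GB")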

Granted I haven't run any deep learning stuff and I don't intend to. I am focusing on everyday and professional-but-not-strongly-specialised software. And there, I feel, stopping at motherboards with PCIe 4.0 and their appropriate AMD CPUs and uber-fast SSDs and the fastest RAM you can find today... will be more than enough for like 5 years.

All IMO of course.


Early hints of Zen 3 sound pretty impressive, but it sounds like you're at a good stopping point upgrade-wise for a while :)

Even post-mitigations Sandy Bridge is still "good enough" for most people IMO, even though current chips are much nicer.


There is another data point.

Zen 4 will be PCIe 5.0 and DDR5. Unless Zen 4 is a gigantic leap forward in performance rather than just an iteration, the cost of jumping to those technologies will be quite high.

So Zen 3, PCIe 4.0, and DDR4 will likely be the sweet spot for desktop for a while.

(I do wish I'm wrong and they drive down the cost of PCIe 5.0 and DDR5 faster, but history has shown those tend to take 2-3 years to pick up the pace.)


I have the same feeling. A lot of people will stay on Zen 3 / PCIe 4.0 / DDR4 for years and PCIe 5.0 will be for min-maxers.

Hell, even the current high-end PCIe 3.0 setups like mine seem to be quite future-proof.


Well, of course, that is from a PC perspective, and I am stuck on Mac...

I'm "stuck" on an iMac Pro. :D

It's an amazing machine in every way.


Additionally, nothing stops you -- except money -- from having a home server much more powerful than your MacBook and doing your real development on that, using the MacBook as a thin client.

i think it's basically consumerism. i'm running a processor that's 10 years old with some RAM and SSD upgrades throughout the years. it's perfectly fine for daily golang/java/scala/haskell work.

a coworker of mine has a generation 2 ryzen which runs games like factorio perfectly fine - big setups (according to him) are pretty smooth. no GPU. he says he hasn't found any games yet that can't run fairly well on it.

it's funny because desktops are now basically the thin clients we thought we'd all be using in a futuristic tech setup, with the actual work being done on "the cloud". your desktop isn't fast enough to do any real "work" other than some unit tests, compiling, etc.


Any reason why not wait for AMD's RDNA 2.0? Or do you need CUDA?

I think my problem with Nvidia is that they are (for now) hard to loathe. They are like the old Intel, whose constant innovation just baffles me, and there is nothing but respect (apart from their part in the dispute with Apple). But I have been an ATI fan since the early days. I still have an ATI Mach and Rage somewhere in my room.


Now please get some proper high-end laptops (HiDPI, 32GB, TB3) and a NUC clone that can be used as a SteamBox.

It may take forever for a vendor to make a NUC-like mini PC with comparable AMD CPUs. I assume there's a chicken-and-egg problem here because the market is small and the Intel NUC is already well established.


I saw, but most of them target markets other than consumers, so still no.


ZOTAC has been selling AMD based Mini PCs for a while, I don't see why they shouldn't make one based on this new CPU:

https://www.zotac.com/us/product/mini_pcs/all?processor=AMD


Zotac has a better Ryzen-based computer in their Magnus line with a proper GPU though...

What's so mini about those NUCs when the power bricks for them have become larger than the actual computer?

The large NUCs are kinda weird I agree (I treat them as a stretch from Intel), but the small-sized ones are super cool.

The small sized ones are the ones with a power brick larger than the actual computer :)

Mine have quite small power bricks. Which one do you have?


Is there a lot of advantage in the ultra compact form factor when a GPU is in play? I feel like you'd end up with all the same nuisances as trying to game on a laptop— especially thermal problems, lack of upgradability, and lower-powered "mobile" CPU and GPU options.

Quite apart from that, the NUCs are not exactly cheap. Gaming-ready ones can be $1000+, for which you can build a very decently specced MiniITX rig around a real desktop graphics card.


One of my recent MacBooks with i7-8569U is quite capable of gaming in 1280x800 (it looks good as it matches retina downscaling) in the usual Feral/Aspyr ports of games (on Mojave ofc, Catalina killed them off). AMD APUs should be even better, likely allowing 1080p gaming in mid-to-high details which could be good enough for many casual gamers.

Oh yeah, it's certainly possible. I'm on a Dell XPS 9570 with a discrete GPU and I can play most stuff I've tried at the native 1080p. That's mostly indies and EGS freebies, but some of them definitely still give the GPU a workout, and in those cases I absolutely experience screaming fans, frame drops, etc, even when giving lots of clearance to the fan inlets.

I get the impression that the GPU in this computer is much more intended for CAD, video processing, light AI work, etc than the kind of 100% duty cycle loading that running a game demands.


> Is there a lot of advantage in the ultra compact form factor when a GPU is in play?

Yes - most people don't have a lot of space for gadgets and things. Look at the advertising for Google's Stadia - it's partly that you don't need a console taking up space in your living room.

Most people never upgrade so don't care about that.


What do you even mean by this?

A slightly larger box that can fit a discrete gpu is suddenly "gadgets"?

The core reality is that if you don't have much space, you don't have space for a barely functioning lump of a machine that only does Excel spreadsheets and plays games only slightly better than a smartphone.

(oh but the useless computer does instagram nice in a staged apartment/suburban mcmansion)

Most people should probably start with something that might grow with them a little better so that every time they try something new it isn't a stuttery (thermal throttling, no gpu) awful experience.

I think the upside down thinking you are displaying here is how Apple has managed to completely hollow out their gaming and technical workstation sales...


> What do you even mean by this?

They asked why people prefer smaller boxes. People prefer smaller boxes because they don't have much room in their houses. This reality is supported, for example, by Google's advertising approach, which I guess is backed by data.

> (oh but the useless computer does instagram nice in a staged apartment/suburban mcmansion)

I don't know why you need to be snarky like this about what other people value.

> upside down thinking you are displaying here

You're just being nasty now.


I think you are just flat wrong about the physical reality of just about everyone.

If you and I teleported into the homes of large numbers of people all over the globe (with a tape measure in hand), we would find that the extra space a discrete GPU requires (roughly a paperback book) exists for just about everyone.

I believe you don't fully believe what you are saying, but instead are repeating the "corporate design" logic that allows companies to put 5-year-old laptop CPUs in "nice" cases and get 80% margins out of the sales.

Not actual practical design at all, just corporate self serving nonsense.


I bought my dad an 8th-gen NUC and attached it to the backplate of a display, and now only a wireless keyboard/mouse indicates the presence of a computer. It saves a lot of desk space. The same performance as a 13" MacBook Pro; who needs more, outside of developers?

Well... gamers. What started this whole thread was someone asking for a NUC-sized AMD machine to use as a SteamBox.

My argument is that that product category doesn't really make sense. Either you want a gaming machine, in which case it's going under a TV and you'll get way better bang for your buck in the Mini ITX form factor, or you don't want a gaming machine, in which case you probably just want a laptop or an iMac replacement (but either way, no discrete graphics).


You can plug an eGPU into a NUC and hide it under the desk; it has TB3.

Sure, but that's neither space nor cost effective. At that point it definitely makes sense to have just done a MiniITX build.

There is the ZOTAC Magnus, which is slightly larger and features everything from a 1060 to a 2080 for gamers.

>> a NUC clone that can be used as a SteamBox.

It's not quite that small but you might like this: https://github.com/phkahler/mellori_ITX

It's also available on Thingiverse: https://www.thingiverse.com/thing:3262778

Uses an ITX board but is 190x195x60mm. I'm waiting for the next gen APUs to pick a new board and do another iteration. Where are the 4000 series APUs?


Not a huge fan of 3D printed cases since they will probably not do super great with the heat over time. If you like this form factor, check out the mini-box M350 or the Velka Velcase 3.

The Velcase 3 has the interesting advantage that it can just use straight-up discrete GPUs (mITX style shorty cards) so you can just use a normal 3700X and say a 2070 or similar.

Yes, the lack of a socketed version of the new APUs was kind of disappointing, as was the lack of a B550 launch to support PCIe 4.0 on a cheaper board without a chipset fan.

Hopefully these follow sometime in the future but we may have to wait for computex.


>> Not a huge fan of 3D printed cases since they will probably not do super great with the heat over time. If you like this form factor, check out the mini-box M350 or the Velka Velcase 3.

I can see that. The original Mellori has been running over a year with no issues. The plastic is mostly 3mm thick, and the cooling solution covers most of the inside surface. It's my development machine, so it doesn't work as hard as gaming would push it. Compiling code will max all 8 threads for a minute or two, but the fan doesn't even spin up all the way.

>> the lack of a B550 launch to support PCIe 4.0 on a cheaper board without a chipset fan

Yeah, I was looking at the Aorus X570. It looks like the fan is attached to an NVMe cooler. I could probably remove that and put a passive heat sink on. The case has good airflow right across that area when there's no GFX card in the slot - air has to go around the edges of the board and out the bottom. With an APU, the only use for PCIe 4.0 is the M.2 slots, which are pretty fast even with 3.0. I'm also considering the Aorus B450I as a much cheaper alternative.

I've also considered having it printed from metal.


I’ve got a DeskMini system with a 3400G. 1.9 Litres + a power brick is manageable under a TV.

The article mentions it, but I remain consistently frustrated by my marriage to NVIDIA at this point. As a heavy user of image processing, image mosaicing, TensorFlow, et al., I should invest in NVIDIA stock.

Warren Buffett recommended investing in the companies who make products you can't not use.

> Warren Buffett recommended investing in the companies who make products you can't not use.

Let's put it this way: My taste for products that I can't not use is rather different from the majority of society.


Warren Buffett's recommendation may be particularly relevant to himself because of his aw-shucks Main Street tastes.

I have been waiting for mobile 7nm Zen 2 to get a new laptop. However, it seems most of the newly introduced laptops with Zen 2 are gimped compared to their Intel counterparts. Some AMD laptops from Lenovo don't have high-res display options (Intel versions do) or 16GB memory options (Intel versions do). The Acer Swift 3 with Intel comes with a high-res 3:2 display; the AMD version comes with a 16:9 FHD one. I hope the next generation of the ThinkPad line will be more interesting.

They announced these chips two days ago; there's still plenty of time for nicer laptops to get announced. Hopefully the MateBook D gets updated with these at least.

You are right, we need a bit of patience. Reviews of the last generation of ThinkPads with AMD have been very positive. I hope the next generation will be even better. I would love to see one with a 16:10 screen, but that's rather unlikely.

Let's make it 3:2! I make extensive use of either 2 windows side by side, or multiple windows in Emacs. I never have horizontal space issues.

I play a lot of retro games, so why not 4:3!?

I don't remember 4:3 being frequent; wasn't almost everything 5:4?

No, you remember wrong: 5:4 was extremely uncommon. 1280x1024 is the only 5:4 resolution I have ever seen in the wild, compared to 320x240, 640x480, 800x600 and 1024x768, which are all 4:3.

(Does anyone else fondly remember the short moment in time when all laptops and all projectors had 1024x768 as their native resolution and you could expect presentations to just work?)


Microsoft at least makes a 3:2 AMD laptop, hopefully they'll update it with these chips. Too bad their hardware is generally not good at running Linux.

What is currently out for mobile is Zen+, not Zen 2.

I wasn't clear enough. I meant newly introduced laptops at CES 2020 with Ryzen 4000 chips. None of them are on the market AFAIK.

That's the integration shortfall that the article mentions.

I think the article mostly means getting the most from the CPU, specifically at idle, via things like BIOS/OS power management (as opposed to purchase options).

It's possible that these new chips might be enough to push Apple into switching. That would likely kick the other OEMs into high gear.

I wouldn't hold my breath on that. More likely they'll switch to their A-series ARM processors at some point since the last thing they probably want to do is extend their dependence on x86.

What you're talking about is HS; they're never going to make high-end MacBooks use ARM, at least not in the years to come.

> However, it seems most of the newly introduced laptops with Zen 2 are gimped compared to Intel counterparts

The Asus Zephyrus G14 is AMD-only and is their 14" flagship which they are calling the most powerful 14" laptop. Complete with your choice of either a 1080p 120hz panel or a WQHD (probably 60hz) one. Both of which are Pantone validated. And 32GB RAM.


Those new ASUS laptops are certainly not gimped. They’re very premium.

I really hope they find a way to include a compatible Thunderbolt 3 experience. Using a Thunderbolt 3 dock has become a must for me at this point; I'd hate to go back to multiple cables / less than 2x4K@60.

I share that sentiment and I can't wait for USB 4 to show up in consumer products.

I wonder if USB4 devices will actually be compatible with current Thunderbolt 3 devices, or if it's a "technically yes but not really" situation.

There exist desktop AMD motherboards with Thunderbolt 3. Laptops will probably be trickier due to power use.

> also handily nosed past Intel's most recent full-on gaming CPU, the i7-9700K, on both content creation and physics engine benchmarks, despite being a mobile form factor with under half the TDP.

...wow


Your standard disclaimer about nm when referring to modern chips:

> Most recently, due to various marketing and discrepancies among foundries, the number itself has lost the exact meaning it once held. Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes. Since around 2017 node names have been entirely overtaken by marketing with some leading-edge foundries using node names ambiguously to represent slightly modified processes. Additionally, the size, density, and performance of the transistors among foundries no longer matches between foundries. For example, Intel's 10 nm is comparable to foundries 7 nm while Intel's 7 nm is comparable to foundries 5 nm.

https://en.wikichip.org/wiki/technology_node

So 7nm does not mean the chip has 7nm features, it means that the marketing team called it "7nm".


nm is the new Hz

Not really.

Everyone can check the frequency of the CPU, so the one on the box mostly matches the one you'll get, but no consumer can put the silicon under an electron microscope, measure all the transistor features, and feel cheated that it doesn't match the advertised size.

Outside of techies, most consumers don't know wtf nm is or how it correlates with performance; some might even think that more nm is better. On the other hand, most have a vague idea that more MHz is better.

That's one of the reasons why consumer-grade tech emphasizes totally different specs than the professional stuff. If you look at a Ryzen box you won't find many specs, but you'll find a VR logo. :)


For marketing purposes, nm is the new Hz.

Once, clock speed "mattered" and was used in marketing to indicate a CPU that did work faster. There was a period of time when Hz was still used by marketing, yet it had stopped reflecting how much work the CPU actually performed in a given amount of time. The public caught on that Hz mattered less.

Now they're hoping nm matters in marketing (not as "this is a faster chip" but as "this chip uses 'better' technology than TheOtherGuys/ThePreviousGeneration/WhatEver") for at least a few moments before the marketing team has to come up with some other differentiator.


Remember the P-rating?

As in TVs panel refresh rate?

As in "new CPU with 4 GHz, far better than the old 3 GHz CPUs"

To complete the disclaimer:

"Do not compare CPUs by process technology or frequency. If you need to compare, use benchmarks that show performance and efficiency and even better, do it on your own application because the results vary between applications."


How do I compare CPUs on my own application without buying them? Is there some kind of daily-billed cloud computing service where I can choose any CPU and run my app on it?

Just use benchmarks that are published online that compare existing applications on various hardware. Process node is basically meaningless for the user.

Previously published benchmarks may or may not have the current spectre/meltdown patch applied.
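
On Linux you can at least check what the machine you're benchmarking (or reading a benchmark from) has enabled; the kernel reports mitigation status under /sys. A quick, Linux-only sketch:

    # Print the speculative-execution mitigation status the kernel reports.
    # Linux-only: these files exist on roughly kernel 4.15 and newer.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vuln_dir.is_dir():
        print("No vulnerability reporting on this system")
    else:
        for entry in sorted(vuln_dir.iterdir()):
            print(f"{entry.name:20} {entry.read_text().strip()}")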

It will depend on the user. Gamers, for example, will not be interested in having their systems limp along to get a bit of extra security when they already pay through the nose to get an extra couple of percent of performance.

They might not have a choice.

Laptops also frequently have significant performance variations, even with the same chip, due to differing cooling solutions.

> How do I compare CPUs on my own application without buying them? Is there some kind of daily-billed cloud computing service where I can choose any CPU and run my app on it?

Sort of. If you look at EC2 instance types [1], about half of them have a specific CPU associated with them. Same thing with Microsoft's Azure instances [2].

There are a limited number of CPUs available - they are typically Xeons or AMD equivalents - so if you want data on consumer CPUs, the best you can do is try your app out in the cloud, then look at published benchmarks and try to estimate what your expected performance should be.

___

1. https://aws.amazon.com/ec2/instance-types/

2. https://azure.microsoft.com/en-us/pricing/details/virtual-ma...
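
If you go that route, a tiny self-contained timing harness that you run unchanged on each candidate instance type (and on your current machine) is usually enough for a rough comparison. The workload below is only a placeholder; substitute your own hot path:

    # Minimal cross-machine timing harness: run the same script everywhere,
    # then compare the numbers. Replace workload() with your actual hot path.
    import platform
    import statistics
    import time

    def workload():
        # placeholder CPU-bound task: naive prime counting up to 50,000
        count = 0
        for n in range(2, 50_000):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    runs = []
    for _ in range(5):
        t0 = time.perf_counter()
        workload()
        runs.append(time.perf_counter() - t0)

    print(platform.processor() or platform.machine())
    print(f"best {min(runs):.3f}s  median {statistics.median(runs):.3f}s over {len(runs)} runs")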


Careful about the clouds, performance under virtualization has many pitfalls and in some workloads is very much unlike bare metal.

The correlation of bare-metal cores vs cloud vCPUs is very much nonlinear.


This closely matches what happened to processor naming. It used to be that any two (CISC) CPUs with the same frequency were about on par with one another, so we just called processors by their frequency. As soon as processors started adding things like SSE, though, that went out the window, since now "one cycle" could do arbitrary amounts of work, and also consume arbitrary amounts of electricity. So now we instead group processors by their manufacturer and model, and compare processors by "generation", which ends up being a loose count of each time each vendor has done a large redesign that enabled efficiency increases beyond "just" a process-node shrink.
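
To make the "one cycle can do arbitrary amounts of work" point concrete, here's a rough single-core illustration (assumes numpy is installed; exact numbers are machine-dependent):

    # Rough illustration that a modern core can retire several arithmetic
    # operations per clock (SIMD + superscalar), so GHz alone is not a
    # measure of work done. Single-threaded; requires numpy.
    import time
    import numpy as np

    N = 200_000                      # ~800 KB per array: fits in L3 cache
    REPS = 5_000
    a = np.random.rand(N).astype(np.float32)
    b = np.random.rand(N).astype(np.float32)
    out = np.empty_like(a)

    t0 = time.perf_counter()
    for _ in range(REPS):
        np.add(a, b, out=out)        # one float32 add per element
    elapsed = time.perf_counter() - t0

    print(f"~{N * REPS / elapsed / 1e9:.1f} billion float32 adds/sec on one core")
    # Compare that figure to the CPU's clock (a few GHz): a result above it
    # means more than one add completes per cycle on average.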

So: is there any analogous new convention for naming process nodes, now that you can’t just refer to them by their size? If there’s a dropdown select box on some form internal to e.g. Apple, where an engineer specifies what process-node of what fab they’d like to use to print a new chip—and said list has had some thought put into making it intuitively unambiguous—then what would the items in that list look like?


Yep, but now we refer to chips by their cost, instead.

AMD Threadripper 3990X : $3990


That's only true when it's first released

That was never the case. Go back even to 386 days and AMD or Cyrix were not always on par with Intel clock for clock. This only really flipped when Athlon came out.

You also had to think about DX vs SX, whether you got L2 cache, the fact that a 5x86 is really a 486, what an Overdrive chip is, why a PR166 CPU runs at 100MHz, what a Celeron is, and why the 300A model is so much faster than the original 300. Etc etc etc.


That goes the same for Intel's "10nm", too.

Also, I assume the point of such commentary is really to emphasize that "Intel isn't really that far behind."

But it still is quite behind, considering TSMC is on its second-generation "7nm"/10nm-equivalent, while Intel is basically still in "test mode" with its 10nm process and nowhere near mass production with it.


Thanks for this. Do you happen to know which, if any, of the announced 7nm chips/processes are actually small enough to involve the problems of 7nm gates?

Obviously the names don't carry much weight, and speed can be checked directly instead of inferred from size, but I'm still curious about the actual state of the production issues.


Side effect of these: SFF, NUC, fanless desktops too.

AMD should market it as "7en".

That looks disturbingly reminiscent of a certain David Fincher movie.

Could signify an end to INTC's most recent 7 MDS sins.

So it was CPU in the box?

I'll probably be downvoted, and sorry for the off-topic, but I miss those days when you could approximate computing power by looking at the processor's model number: 386SX-33, 386DX-40, 486DX2-66, and so on... Right now it's all becoming gibberish to me: 1075, 4800U, 3950, 6600U. I don't even remember what CPU I have. All the romance is gone!

On the upside, why worry about the features or clock speed when you can use a processor's model number to estimate its price? [0]

[0]: https://www.anandtech.com/show/15318/amds-64core-threadrippe...


I think that's called shifting priorities, and I can relate.

I suspect that's been deliberate at least on Intel's part since their year over year changes haven't been compelling for the better part of a decade. So their model numbers indicate 'new' without really quantifying 'improved' since that hasn't been a very good story for a while now.

That's just fundamental. In the golden age of VLSI scaling, from 1985 to 2003[1] or so, we really were seeing that doubling of transistor density with every 1.5-2 year generation, and the shrinking transistors were getting faster more or less linearly with size, leading to a quadratic improvement over time. Those days are almost two decades in the past, and they aren't coming back, ever.

We happen to be seeing a bump right now due to AMD migrating across two process nodes at once (and Intel having fallen a little behind), but that's just an instantaneous thing. The days of the 486 are still gone.

[1] My own dates. I'm bounding this roughly by the point where EDA tools caught up with process improvements (allowing straightforward die shrinks and scaling of logic) and the end of "free" frequency scaling due to thermal and power limits. Someone else probably has a different definition. Fight me.


It's gotten very complicated especially since a lot of the value add of the newer processors is that they're more power-efficient and not necessarily much more powerful than the previous version.

>I don't even remember what CPU do I have. All the romance is gone!

I don't particularly remember feeling romantic about processor naming schemes; I do remember certain processors for being workhorses. However, I would agree that today one needs an informed view to discern processor design and performance across different architectures, manufacturers, and generations.

https://www.anandtech.com/show/14043/upgrading-from-an-intel...

If you feel lost within the Intel naming convention, at least they provide a very handy guide.

https://www.intel.com/content/www/us/en/processors/processor...


If you owned a 286 or a slow 386, owning a 486 with math co-processor was definitely a romantic desire in 1990.

>> If you owned a 286 or a slow 386, owning a 486 with math co-processor was definitely a romantic desire in 1990.

And the introduction of the 486SX shattered the dream of everyone finally having floating-point hardware by default.


Numbers like 8088 vs 80286 vs 80386DX vs 80386SX (which was basically a 286 iirc) were confusing to people back then too. :) Then you had the 486SX and DX and Cyrix introducing things like the 486DLC...

Haha, that's because you're older now. I remember that feeling too, and I think the newer generation that sits on /r/buildapc or /r/pcgaming are definitely still feeling that love.

Sorry if this is off-topic but how would this chip compare to an ARM design that could handle 16 threads? Would thermal dissipation, power consumption, and performance be roughly the same or would they be significantly different? I want to be excited about this chip but thought that this was supposed to be the year we all get ARM-powered laptops/Chromebooks.

There's several popular ARM cores around these days. You would need to pick a specific processor family, at least.

That said, I have seen plenty of ARM-powered Chromebooks and I don't think any of them had more than four cores or were known for high performance. It's not impossible to make a high-performance ARM Chromebook, but it's not the market people are building for. High-performance x86 Chromebooks happen because once you've built for a dual-core mainstream x86 laptop processor, you can also easily solder in a higher-performance processor with the same footprint.


You can't expect comparable core for core performance as the architectures have different objectives. Thermals, power consumption and performance would all be significantly lower on existing mainstream ARM chips.

ARM vs. x86 is just the instruction set. The instruction set plays almost no role in power/performance these days. Maybe it did once, in a very distant past when an x86 decoder was a significant chunk of silicon, but not anymore.

So instead you'd need to ask how does Zen 2 compare to Cortex A76 or something like that. Which we don't know since we don't have the 15W Zen 2 mobile parts to benchmark yet.


It's super annoying that AMD's 3000 desktop line is 7nm Zen 2, while their 3000 laptop line is 12nm Zen+.

It's confusing and dishonest. Multiple times I've jumped at almost getting a Ryzen 3000 laptop only to remember that it's not really Ryzen 3000.


It's only confusing to a small minority of buyers.

Most people who buy these laptops know nothing about Zen2, Zen1, Zen+.


> Intel focused on AI acceleration—but AMD went unapologetically hard on gaming.

I don't think you can go wrong focusing on gaming. This is, in my opinion, how Microsoft succeeded so well. The network effects push all of computing forward.


Does this not all just boil down to marketing? GPUs turned out to be an outsized boon to ML performance, but they got there by fulfilling the needs of gamers.

I feel like this just boils down to what we've known for a while: Intel is catering to an enterprise crowd while AMD is catering to the average PC enthusiast.


Stadia etc. could still ruin it for AMD. You no longer need local power to game.

The verdict is in: reviews of Stadia haven't been flattering.

The new OnLive? Nah

Stadia doesn't run on fairy dust. It still uses CPUs & GPUs, and the more people use Stadia the more it will need. And since Stadia uses AMD GPUs I don't think AMD would be that sad about selling more super high margin enterprise GPUs to Google.

And of course you still need that local client to play it, which might as well be an AMD-powered ultrabook. At which point Stadia just resulted in AMD selling up to 3 products instead of just 1 (GPU & CPU in the server + APU in the laptop)


Stadia doesn’t target enthusiast gamers, I don’t know why people are trying to judge it on those merits.

Stadia is for the person who has 1hr every few days to play a game casually on the TV or 20 minutes during lunch.

It caters perfectly for the crowd of people who do not want to be locked to a specific device (gaming pc) or update their console/games.

The high static time cost of gaming in 2020 is what Stadia mitigates.

And anyway. Stadia is powered by AMD underneath.


That's true for a very small group of cities with Google fibre.

Probably good only for Google employees who are forced to use Chromebooks.


Good thing Stadia uses AMD GPUs then :)

Meantime, Apple is stuck with Intel for CPUs and AMD for GPUs — the worst of both worlds.

I would not say "stuck". They have good reason for telling Nvidia to fuck off. Maybe you remember some time ago when Nvidia graphics cards didn't meet the spec that they gave to Apple, and thus overheated and de-soldered themselves from MacBook Pros. This was not helped by the fact that Nvidia insisted on integrating itself into the northbridge of the motherboard, meaning a card failure was fatal in the weirdest of ways.

Or the other time when Nvidia didn't ship their graphics chip with support for the old-school Cinema Display, so they sat in warehouses for 18 months while Nvidia produced the part, and only then could Apple start integrating.

Then Nvidia tried to sue Samsung and Qualcomm when the iPhone came out (because Apple used GPUs from these companies), claiming patent infringement.

Then in 2016 Apple said “no” to Nvidia on putting their GPUs into their laptops due to power consumption to performance concerns. And apparently that was not a healthy conversation.

Nvidia is a bully, and Apple is a big player. I will not shed a tear over 10% performance in predominantly game workloads.

Intel, on the other hand, has an edge with Thunderbolt right now, but that's an open standard and Apple could produce an AMD machine with Thunderbolt now. I suspect they'll live with the lesser performance until ARM is possible.


>Then Nvidia tried to sue Samsung+Qualcomm when the iPhone came out (because apple used GPUs from these companies) claiming patent infringement.

Apple has been using PowerVR for the iPhone since day 1. Even the recent so-called self-designed GPUs are pretty much 80% PowerVR, with all the PowerVR proprietary and patented features.

