
Why would you ever target wasting CPU/GPU when better alternatives are available?



They probably see the alternatives as providing negative value compared with the total clarity the platform owner gives on how to use the GPU.

From what I've seen, utilisation is still pretty poor; GPUs are used so inefficiently that most companies could get away with far fewer of them. Instead of looking at how to optimise their workflow, they just slap more GPUs on.

All those FPS wasted... Why do we keep calling these chips GPUs?

Production-quality multiplatform software is much, much harder and less fun to build for GPUs: inferior developer experience, rampant driver-stack bugs unique to each (GPU vendor, OS) combination, the sorry state of GPU programming languages, poor OS integration, endemic security problems (e.g. memory safety is not even recognised as a problem yet in GPU languages), fragmentation of proprietary software stacks and APIs, and so on. Creation of performance-oriented software is often bottlenecked by software engineering complexity, effort, and cost, and targeting GPUs multiplies all of these problems.

tl;dr: we are not running out of reasons to want faster CPUs. GPUs are a crap, Faustian-bargain substitute for them.


The need for GPUs is a solid reason to stay away.

I don't like wasting a lot of energy on this kind of stuff. Also, GPU prices...

Probably money gets wasted and GPUs go unused.

I don't see CPUs being competitive for low-latency inference in the web-accessible SaaS ('software as a service') space. They certainly can be attractive for specialized backend applications where batch processing (in the macro-scheduling sense) can be utilized. The author also overlooks the investment other GPU makers, particularly AMD, are putting into their software stacks to compete directly with Nvidia.

It's not about wasting energy; it's about using it when demanded. Games are about the most demanding thing you can do with a GPU; if it's throttled for games, I don't know when it wouldn't be.

How much of a waste is using NVidia hardware for this?

Nowadays, GPU factory defaults try to squeeze out the last bit of performance at a huge cost in power.

You can run them at half the power draw and lose only a fraction of the performance, at least in gaming; it's worth trying for AI tasks too.
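For anyone who wants to try this, here is a minimal sketch of capping a card's power limit through NVIDIA's NVML bindings (the pynvml package and a single GPU at index 0 are assumptions; the 50% target is just an illustration, and setting the limit usually requires root privileges):

    # Sketch: cap an NVIDIA GPU's power limit via NVML (pynvml package).
    # The 50% target is illustrative; setting limits usually needs root.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)  # milliwatts
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    target_mw = max(default_mw // 2, min_mw)  # halve, but respect the hardware floor

    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    print(f"power limit: {target_mw / 1000:.0f} W (default {default_mw / 1000:.0f} W)")

    pynvml.nvmlShutdown()

The same cap can be applied with nvidia-smi's power-limit option if you'd rather not script it.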


"Useless" means that both DP Gflops/s/W and DP Gflops/s/$ are worse for the modern AMD and NVIDIA gaming GPUs, than for many CPUs, so the latter are a better choice for such computations.

The opposite relationship between many AMD GPUs and the available CPUs held until 5-6 years ago. NVIDIA had cut the DP capabilities of their non-datacenter GPUs many years before AMD did, despite their earlier aggressive claims that GPGPU was the future of computation, claims that eventually proved true only for companies and governments with exceedingly deep pockets.
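To make the comparison concrete, a small sketch of how those two figures of merit can be computed from spec-sheet numbers follows; all values below are placeholders for illustration, not measurements of any particular part:

    # Sketch: double-precision GFLOPS per watt and per dollar from spec-sheet
    # figures. All numbers are illustrative placeholders, not real parts.

    def efficiency(dp_gflops: float, watts: float, price_usd: float):
        """Return (GFLOPS/W, GFLOPS/$)."""
        return dp_gflops / watts, dp_gflops / price_usd

    # Hypothetical gaming GPU: FP64 is often gated to 1/32 or 1/64 of the FP32 rate.
    gpu_w, gpu_d = efficiency(dp_gflops=1_300.0, watts=450.0, price_usd=1_600.0)

    # Hypothetical many-core CPU with full-rate FP64 via wide vector units.
    cpu_w, cpu_d = efficiency(dp_gflops=2_000.0, watts=350.0, price_usd=2_000.0)

    print(f"GPU: {gpu_w:.1f} GFLOPS/W, {gpu_d:.2f} GFLOPS/$")
    print(f"CPU: {cpu_w:.1f} GFLOPS/W, {cpu_d:.2f} GFLOPS/$")

Which part comes out ahead depends entirely on the numbers you plug in, which is the point: for DP work the answer has flipped over the last several years.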


> cpu

Consumer-grade GPUs are better at this than even the highest end CPUs.


Low GPU utilisation is a serious issue.

Either you massively over-specced your hardware and should have chosen something cheaper and with less power consumption, or your graphics quality is far below what it could be.

Let's not beat around the bush: wasted resources are expensive, in some way or another.
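If you want to know where you actually sit, a quick way is to sample utilisation over a representative workload; a rough sketch using NVML (pynvml and a single GPU at index 0 are assumptions):

    # Sketch: sample GPU utilisation for a minute via NVML (pynvml package).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    samples = []
    for _ in range(60):                          # one minute at 1 Hz
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        samples.append(util.gpu)                 # % of time kernels were executing
        time.sleep(1)

    print(f"mean GPU utilisation: {sum(samples) / len(samples):.0f}%")
    pynvml.nvmlShutdown()

A consistently low average over real traffic is a decent signal that the hardware is over-specced for the job.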


Because here in the real world software isn’t GPU optimized in many cases.

If you're concerned about proportional performance gains, this product isn't for you. So why are you complaining about something you're not gonna buy anyway?

Most tech products at the tip of the spear are bad value for money and less energy-efficient than the ones further down the range (the Nvidia 3090 vs. the 3080, for example), because they push past the point of diminishing returns for what the design can do.

But they exist because there is a market of enthusiasts for whom efficiency or price does not matter; they just want the best performance money can buy, perhaps because their business use case benefits from 10% shorter render/compile times even at the cost of 50% extra power consumption. Who are you to judge?


That seems to be the wrong way around. We're constrained by the availability of GPUs because those are the processors of choice for this kind of solution. If we had a better one we'd be using it!

The least efficient accelerator of all ... GPUs

Interesting, could you please elaborate on why you think this? Any data you know of on the subject? What are better alternatives in your view?


Let's just say that AI research can be useful, and thus this use of GPU time and electricity can be useful, in contrast to the alternative GPU use, which is not useful in the first place.
