How a Shifting AI Chip Market Will Shape Nvidia's Future (www.wsj.com)
6 points by mfiguiere | 2024-02-26 | 7 comments




From a high-level design standpoint, wouldn't the general-purpose nature of NVIDIA's GPUs (even with their AI/LLM optimizations) put them at a disadvantage compared to more custom/dedicated inference designs? (Disregard real-world issues like startup execution risk; assume competitors succeed at their engineering goals.) Or is there some fundamental architectural reason why NVIDIA can and will always be highly competitive in AI inference? Is the general-purpose design of the GPU not as much of an overhead/disadvantage as it seems?

Also, how critical is NVIDIA's InfiniBand networking advantage when it comes to inference workloads?


Custom chips have to be much better than Nvidia's to become attractive. Being 2x faster won't be enough; 5x faster might be. And that's assuming perfectly functioning software.

Is software that important on the inference side, assuming all the key ops are supported by the compiler? Once the model is quantized and frozen, deploying to alternative chips, while somewhat cumbersome, hasn't been too challenging, at least in my experience with Qualcomm NPU deployment (trained on NVIDIA).
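For what it's worth, the workflow described here is roughly: train and quantize in PyTorch on NVIDIA hardware, freeze the model, export the graph, and let the target vendor's toolchain compile it for its own runtime. A minimal sketch of the hand-off step, assuming an ONNX export (the model and shapes are illustrative):

  import torch
  import torch.nn as nn

  class TinyClassifier(nn.Module):
      """Placeholder model standing in for whatever was trained on NVIDIA."""
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

      def forward(self, x):
          return self.net(x)

  model = TinyClassifier().eval()     # frozen: no further training
  example = torch.randn(1, 128)       # fixed input shape for the export

  # Export to ONNX; the NPU vendor's compiler (e.g. Qualcomm's toolchain)
  # then converts and quantizes this graph for its own runtime.
  torch.onnx.export(model, example, "tiny_classifier.onnx", opset_version=17)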

Let me put it this way: if there's even the slightest issue with my PyTorch code (training or inference) running on a non-Nvidia chip, it's an automatic no from me. More than that: if I simply suspect there will be issues, I won't even try it, regardless of any promised speedups.

Whoever wants to sell me their chip had better do an amazing demo of flawless software integration.
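That kind of demo boils down to an end-to-end parity check: the same frozen model, run through the vendor's PyTorch backend, has to reproduce the baseline results. A minimal sketch, assuming the vendor exposes its chip as a PyTorch device (the device string passed in is a placeholder, not a real backend name):

  import torch

  def outputs_match(model: torch.nn.Module, example: torch.Tensor, device: str) -> bool:
      """Run the same frozen model on CPU and on `device`, then compare
      the outputs within a loose numerical tolerance."""
      model = model.eval()
      with torch.no_grad():
          reference = model(example)                               # CPU baseline
          candidate = model.to(device)(example.to(device)).cpu()   # vendor backend
      return torch.allclose(reference, candidate, atol=1e-3)

  # e.g. outputs_match(my_model, my_input, "npu")  # "npu" is a hypothetical device name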


What an approach.

It's very simple math.

If the savings on hardware/compute are greater than the cost of the adjustments, then it's probably worth it.

So if you prefer to avoid spending, e.g., one month on adjusting and testing just to keep using, e.g., 1.x-more-expensive hardware, then it's your loss in the long run.
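For concreteness, that break-even check is only a few lines; every number below is a made-up placeholder:

  # Porting is worth it when hardware savings over the deployment horizon
  # exceed the one-off engineering cost of the port.
  engineer_cost_per_month = 20_000      # fully loaded cost of the port work
  porting_months = 1                    # "e.g. one month on adjusting and testing"
  port_cost = engineer_cost_per_month * porting_months

  incumbent_monthly_hw_cost = 50_000    # current spend on the incumbent hardware
  price_ratio = 1.5                     # incumbent is "1.x more expensive"
  alt_monthly_hw_cost = incumbent_monthly_hw_cost / price_ratio
  horizon_months = 12                   # how long the deployment will run

  savings = (incumbent_monthly_hw_cost - alt_monthly_hw_cost) * horizon_months
  print(f"Savings over horizon: {savings:,.0f} vs. port cost: {port_cost:,.0f}")
  print("Worth porting" if savings > port_cost else "Stick with incumbent")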


The problem is I don't know how much time it will take to make it work, or whether it's even possible for my specific situation.

I’ve wasted enough of my time trying to debug AMD and Graphcore chips to fall into this trap again.

