Nvidia hits $3T market cap on back of AI boom (www.cnbc.com)
15 points by paulpauper | 2024-06-05 | 155 comments




Five years ago, Jensen Huang was worth $3 billion. Now he's worth sub $100 billion. Must be nice to go from Scrooge McDuck to ruler-of-the-universe level.

1- https://www.forbes.com/sites/dereksaul/2024/05/28/nvidia-ceo...


This wasn't a quick success ...

Jensen started Nvidia 31 years ago.

https://en.wikipedia.org/wiki/Nvidia


I can’t believe I didn’t know until today that Nvidia remains founder-led.

Which big companies in Silicon Valley are still led by their founders?

https://www.wealthfront.com/explore/collections/18/founder-l...

What’s not in the list is Oracle.

But in all reality it still is.


I don't know. There are some benefits to be sure, but I figure that above some threshold (and 3B is definitely above that threshold) it becomes difficult to make new friends without worrying about their ulterior motives.

There are likely some multi-millionaire and billionaire clubs for that.

The threshold depends where you live; a former partner of mine moved around the world a lot, and when she was in Nairobi… well, her anecdotes suggest you got that effect from what Americans would consider a middle-class income.

> it becomes difficult to make new friends without worrying about their ulterior motives.

Isn't the point of being wealthy that you can buy friends, and that this is exactly the sort of relationship the wealthy prefer?

It's not like anybody is forcing him to keep the money or the shares.


I'm sure any billionaire has that issue. The biggest plus of going from $3B to $100B is that you can sell stock and buy all your fun toys without risking loss of control of your company.

There are claims that Nvidia will go bust like how Cisco did, but I think Jensen Huang knows what he is doing.

https://x.com/all_in_tok/status/1796602545055342784


He's selling shovels in a gold rush, which is a solid plan.

I'm getting very weary of reading this analogy, especially when it's given no thought whatsoever.

Did you know? Every gold rush in history has ended.

If you have a shovel factory that burns millions of dollars a day to sell as many shovels as you can, and you're not prepared for people to give up on digging for gold, you're going to very suddenly start burning your profits.


The main threat to Nvidia is the fact that big clients might start producing their own chips (like Apple is doing and MSFT has started), and then offering them to the users who are now leveraging Nvidia's solutions.

And AMD and Intel might wake up from their comas too.

Though Nvidia is trying to pivot towards more cloud and software solutions. Let's see.


> The main threat to Nvidia is the fact that big clients might start producing their own chips (like Apple is doing and MSFT has started).

What shovels are Apple and MSFT buying to make their own chips?

I mean, who is making Apple's and MSFT's AI chips? I'd buy their stock.


Apple is using TSMC and MSFT is building fabs with Intel (not sure if they are using TSMC).

I don't think many people, Nvidia shareholders included, expect this to last forever. But you make hay when the sun is shining; if VCs will pay hand-over-fist for AI-native products, then you better damn-well make sure they can buy the most expensive solutions. If they're looking for gold, you build them gaudy excavators to work in.

That being said, Nvidia is particularly well positioned to continue reaping the reward of serving the high-performance-compute market. CUDA is yet-unparalleled, and NPUs/TPUs are still struggling to make a case for themselves. If we get another cryptocurrency-style bum-rush on number crunching computers, I don't think anyone except Nvidia will be poised to immediately benefit. General-purpose GPU compute is in short supply on consumer systems, everyone wants the cloud and the cloud wants Nvidia.


Well, people don’t like the “other” analogy - the “iPhone gold rush” basically never ended.

https://cdn.statcdn.com/Infographic/images/normal/7800.jpeg

When you invent a whole new class of device, of which you have the best and market-leading implementation, often that class of device isn’t going to stop being useful, and if you are good at your job then you often won’t have to relinquish having a large share of the resulting market.

There’s really a whole little epistemological bubble that’s forming here, with the people who insist that everyone else must only like it because they think it’s general AI, therefore there must be a collapse coming, because it’s impossible there’s any real value delivered, etc. And the alternative hypothesis is… you’re wrong and there’s something here, and people don’t particularly have blinders on about the potential any more than you do, they just see the value where you don’t. It’s the same group of people who were butthurt about IP infringement and deepfakes and all the rest who are the most absolutely sure everyone else is an idiot and it’s all a big bubble that’s going to come crashing down. It’s become an article of faith that’s stronger and more real than the fake sentiments it purports to oppose.


David Sacks is not an expert on GPU architecture.

But anyone can see Nvidia is ahead in GPU & software stack. They are one step ahead, and that is all that is necessary.

Terrible analysis.

He claims that Cisco lost all its value because their HW was commoditized. I mean, in some ways, sure, but the fall in value was almost entirely driven by the dot-com bubble bursting.

There is also a claim that networking equipment is easy, "it's just moving data around". Moving data around is what a computer does. LLMs and GPUs are just moving data around.


Question for HNers: Do you think that AI is having a big enough impact already to generate this valuation? Is it changing your life?

And do you think all of Nvidia’s customers throwing money at data centres full of GPUs will cause the current generation of AIs (LLMs etc) to deliver greater and greater returns, sufficient to make Nvidia worth this current valuation and more?

WalMart’s market cap is $540 billion. Would you rather own one Nvidia, or WalMart six times over?


Yes. For the last 20 years. Many of the largest applications aren't ones you're aware of or ever see directly. Most of Google's revenue, to pick one example. For every flashy start-up or cool open source project, there is an army of people quietly delivering 10% relative improvements that drive $100M revenue lift in their particular niche at a huge company.

Right. In 2011 Jeff Dean ran into Andrew Ng in the Google cafeteria, which AFAIK is where a lot of the modern Google AI started happening. AlexNet won the ImageNet challenge in 2012 (which HN was not impressed with at all at the time - https://news.ycombinator.com/item?id=4611830 ). Attention is All You Need was published in 2017. A lot of behind the scenes, invite-only etc. stuff has been happening for years, and only in the past two years do we see DALL-E, Stable Diffusion, Midjourney, ChatGPT, Gemini, Sora etc.

> Most of Google's revenue, to pick one example. For every flashy start-up or cool open source project, there is an army of people quietly delivering 10% relative improvements that drive $100M revenue lift in their particular niche at a huge company.

This is true for previous generations of deep learning, but it's still not clear for the current GenAI wave.


The market is "pricing in" this future potential.

With Walmart, it's hard to speculate what ground-breaking future potential they might have that isn't already known.


> The market is "pricing in" this future potential.

Or a short-term gain opportunity, with the goal to quit before the crash.


That’s kind of my point in framing the question like that - we know WalMart and how ubiquitous it is (in America) and how it makes money, and its “moat”, so it’s not a sexy business, but a totally solid one. So is what the market is pricing in really becoming silly now? I think Nvidia has a huge competitive advantage, but I also think it could be undone within 3 years or so given how much money is now at stake. The other side of that is that I wouldn’t want to try and build a better WalMart inside three years.

My ultimate worry is that Nvidia is becoming so overcooked now that it could pull the whole of the Nasdaq down when it blows, and that won’t be good for any of us.


> My ultimate worry is that Nvidia is becoming so overcooked now that it could pull the whole of the Nasdaq down when it blows, and that won’t be good for any of us

It's already happened the other way.

50% of all S&P 500 growth comes from just 5 companies, and Nvidia is one of them.

Or said differently, the other 495 companies combined can only match what those 5 companies contributed.

https://finance.yahoo.com/news/meet-5-stocks-contributed-alm...


This is not news: Focusing on aggregate shareholder outcomes, we find that the top-performing 2.4% of firms account for all of the $US 75.7 trillion in net global stock market wealth creation from 1990 to December 2020. Outside the US, 1.41% of firms account for the $US 30.7 trillion in net wealth creation.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3710251


The market is pricing what it's pricing.

Whether the ability to manufacture GPUs at Nvidia's capacity will be worth three trillion dollars is something yet to be seen, as with any other good being sold.


> Do you think that AI is having a big enough impact already to generate this valuation? Is it changing your life?

I'm actually quite bearish on AI on the very long term (and as a trained philosopher, I don't buy into the whole AGI nonsense), but I do think AI will redefine how we interact with our computers/devices in the short-to-medium term. These days, everyone's too enamored with the generative aspect (which is, for the most part, bunk), and for whatever reason ignoring the way more value-generating aspects (synthesis, categorization, summarization, validation/invocation, etc.).


Funnily enough my bachelors thesis was “Wittgensteinian problems for Artificial General Intelligence” so I also have grounds for being bearish, even though I respect what is happening at the moment and have hope for the future.

Is the Wittgensteinian problem that the AGI wouldn't understand concepts like we do, or is the problem that it wouldn't be able to use them better than a human?

There’s a few different ways one can look at it if trying to take lessons and ideas from Wittgenstein. I didn’t do a brilliant job of the thesis to be honest… but I got a few things on an intuitive level from trying to grok a bunch of Wittgenstein that left me feeling like there was a serious problem in our attempts to build an AI.

One of them is that we only have language with which to build an AI - language and symbols. Yet these nevertheless capture absolutely nothing about what it is to perceive and understand, on a human level, so yeah. It’s kind of what Searle was getting at - symbolic outputs and translation based on rules doesn’t get you intelligence, even if it produces a system that looks as if it is intelligent.

There’s also that we have an intersubjective attunement to others that is pre-linguistic. Ultimately “meaning is use” which is a pithy take on later Wittgenstein- there’s nothing in language which means anything at all. So it’s not that it wouldn’t be able to use language or concepts better than we do, it’s that we change with the AI and don’t accept them - particularly as they start to become uncanny or unpredictable.

There really is something to sentience that we all know that cannot be reduced to language or machinery.

Our language about machinery, and intelligent machinery, is full of usage mistakes and categorical errors that we overlook, but which are more serious when analysed.

We are complete beings - my intelligence is part of a system, and it makes no sense to abstract it out of that system in order to nail it down.

“Pain” cannot be reduced down to or captured in language. Suffering is part of our intelligence and we’d be nowhere without it.

So ideas kind of like that and more.

AGI would have to suffer to become general enough to warrant the use of the word general.


>AGI would have to suffer to become general enough to warrant the use of the word general.

Jo Cameron doesn't have general intelligence? This seems absurd. Intelligence is orthogonal to phenomenology and affective states. People aren't worried about AGI because it might have a Cartesian theater, the worry is that it might be more competent than humans and put humans out of a job. The semantics of whether it "truly" has intelligence is irrelevant.

>Suffering is part of our intelligence and we’d be nowhere without it.

It might be part of our intelligence, but why would it need to be part of machine's intelligence? GPT4 is already beating humans at theory of mind tasks[1], and I doubt that it suffers. Our suffering is an evolutionary stroke of bad luck. It has nothing to do with intelligence itself, and it would have been better if we had evolved some other way that didn't need it.

[1] https://www.nature.com/articles/s41562-024-01882-z


If LLMs are beating humans in theory of mind tasks, your theory of mind is incorrect.

I’d not heard of Jo Cameron but one outlier, for me, wouldn’t hyper-negate the whole idea out of existence. A huge amount of what I suffer with is not physical pain.


> A huge amount of what I suffer with is not physical pain.

It's not just physical suffering that she's supposedly immune to. She's also immune to all psychological suffering as well. And she is married, has kids, and is perfectly healthy. If she's in fact the real deal (I don't know this myself, but I haven't seen anyone debunking her) then dismissing her as an outlier would be a remarkable form of complacency. If the effect can be understood and replicated, suffering will become fully as needless as it deserves to be, with its final eradication mourned probably as much as the death of smallpox.

> If LLMs are beating humans in theory of mind tasks, your theory of mind is incorrect.

Or, LLMs have better theory of mind than most humans do, which is the finding of that study. Is it metaphysically impossible? If your mental image of how LLM cognition works is the same type of expert system that Searle was writing about back in the day, it's refreshingly bizarre to read how LLMs appear to work:

https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...

Where is the Chinese Room in this? I don't see one. Just a lot of complex and vague conceptual associations mediated through neural connections. Whether or not these models are conscious or have an inner life, they seem to be doing just fine understanding concepts.


Ultimately my thesis was “problems for AGI” not “why AGI is impossible” and for good reason.

>There really is something to sentience that we all know that cannot be reduced to language or machinery.

>We are complete beings - my intelligence is part of a system, and it makes no sense to abstract it out of that system in order to nail it down.

The more I read rebuttals of these kinds, the more I think AGI believers might be right.



> Can you elaborate on why you believe AGI is nonsense?

John Searle is much smarter than I am and he elaborated plenty on this in the 80s. (But more saliently, I think it's absurd to think our brains, or brains in general, do polynomial curve fitting in any meaningful sense.)

> Summarization is one of the most common applications of LLMs I see.

This is true, though the workflow is extremely cumbersome.

> People are doing exactly what one would expect them to do.

I don't disagree, I think people are moving in the right direction (albeit slowly). Maybe the generative aspect is just more awe-inducing rather than "oh hey this thing creates calendar events out of any date," even though the latter would be way more useful.


> But more saliently, I think it's absurd to think our brains, or brains in general, do polynomial curve fitting in any meaningful sense.

Why would an AI need to use the same mechanisms as a human to think? Most machines we create are quite different in character from their natural counterparts (what bird has a propeller?)


Searle's arguments aren't that broad. If you're thinking of the Chinese Room, the point there is simply that computation isn't understanding. It's not a claim that AGI is nonsense in principle, and in fact most of these older arguments are narrowly scoped to GOFAI. Of course nobody serious thinks LLMs have minds, either, but unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible.

> unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible.

Thanks for chiming in as I'm completely unfamiliar with Searle.

I don't believe in immaterial human souls, therefore self-aware AGI appears to be a near inevitability of technological advancement from my perspective. To suggest that it's impossible is, in my opinion, absurd.


> unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible

The argument that any non-materialist position (panpsychism, substance dualism, monadism, etc.) must "believe in ghosts" is comically reductive. Metaphysics is a thing, you know. So is moral philosophy. I mean, heck, so is logic. In all these subfields, arguing that their respective first principles are purely materialist is an uphill battle. Materialism is basically logical positivism 2.0 (and we know how that ended).

> It's not a claim that AGI is nonsense in principle

You're playing a semantic game here; but in any case, understanding is a function of intelligence, so (working backwards), you're actually shooting yourself in the foot, anyway.


> The argument that any non-materialist position (panpsychism, substance dualism, monadism, etc.) must "believe in ghosts" is comically reductive.

"Ghost in the machine" is from Ryle. It's quite apropos here.

> You're playing a semantic game here; but in any case, understanding is a function of intelligence, so (working backwards), you're actually shooting yourself in the foot, anyway.

I honestly don't know what point you're trying to make.

To recap, you said that:

1. You don't believe in this AGI nonsense.

2. Searle elaborated on why AGI is nonsense in the 80s.

And I am clarifying for you that:

1. Searle did not elaborate on why AGI is nonsense in the 80s. He doesn't claim that AGI is nonsense at all.

2. His arguments are specific to the methods that were called AI, and a bunch of technical and philosophical claims about those methods and about the human mind, at the time he was writing. Ditto for Dreyfus.

What you can get Searle to commit to is the idea that human-like AGI can't be a program executed on computers as we think of them today. He's an ultra-materialist, like Ryle, and thinks the physical reality of brains is essential to the minds they produce. To make a mind, you need to make a brain. (In other words, he thinks "Lena" is nonsense: https://qntm.org/mmacevedo.)

So he would say that ChatGPT is no closer to being human-like AGI than ELIZA was, since it's no closer to having something like a human brain, and if all you meant by "the whole AGI nonsense" is that ChatGPT isn't a step toward human-like AGI, then he'd agree. But he wouldn't say that human-like AGI itself is nonsense, precisely because he doesn't believe in ghosts.


Just to clarify, I disagree that materialism is the one true way (re: your "believe in ghosts" quip), even though I'm quite aware that Searle is not a cartesian dualist (which is why I made the comment about what our "brains do"). Philosophy of mind-wise, I'm much more in line with folks like Chalmers, though not fully bought in there, either.

And yes, by "AGI nonsense" I meant that "ChatGPT isn't a step toward human-like AGI" since the main chatter these days is about how ChatGPT/Claude/etc. is the harbinger of AGI. I think Searle's argument is particularly strong because you don't even have to believe in something "spooky" happening in our heads (which, to be clear, I do). In fact, I'd probably extend that to "AGI is flat out not possible" but that's more of a hunch and the full argument invokes things like the non-computability of various physical phenomena.


Searle only argues against artificial consciousness not artificial intelligence.

Are you thinking of Searle's Chinese Room or another famous thought experiment? Is your issue that an AGI wouldn't really "understand" concepts, despite being better at using them than a human?

>and as a trained philosopher, I don't buy into the whole AGI nonsense

It's hard to imagine Socrates using this form of argument.


Yeah but Socrates mostly just spent all day convincing people they were stupid.

I tend to admire that Socrates used arguments in the process of that, unlike both of us right now.

I really dig Socrates, and his arguments. I also dig Nietzsche for pointing out “he was a buffoon who got himself killed”. Dialectics is useful up to the point you start going round in circles and that is as far as most ever get with it.

Well, if we're taking the stories at face value, Socrates knew what he was getting himself into when he chose to die, even when he could have easily escaped. He died to prove a point. If Nietzsche thinks this is silly, it's one of those times he's at odds with his own philosophy.

> To die proudly when it is no longer possible to live proudly. Death of one's own free choice, death at the proper time, with a clear head and with joyfulness, consummated in the midst of children and witnesses: so that an actual leave-taking is possible while he who is leaving is still there

(Twilight of the Idols, 1888)


Nietzsche thought it was so obvious that Socrates was going round corrupting the youth by showing people not to have a clue what they were talking about via dialectics, that he knew what he was getting himself into long, long before the whole “dying to prove a point (that he respected the laws of the Athenian state)”. He was purposefully using dialectics as a tool to show people up, and to show them “hey, you thought you knew what virtue was?? Turns out you don’t know shit about virtue omfg?!?!” Whereas people did all know about virtue, because it was a common thing they shared in their understanding of how to use the meaning of the concept. It didn’t need dialectics to reduce that definition to absurdity and prove that nothing conceptually was true or known in the Athenian state, and his doing this was causative of its downfall, and so he was no martyr for drinking the hemlock.

That really is Nietzsche’s angle on it if you care to go looking rather than cite him at me to disprove what I said. I think it’s in Will to Power but it’s been a while. You’ll also notice throughout that Nietzsche isn’t a dialectical thinker. He doesn’t go round in circles trying to find the antithesis of his polemics – he’s calling it how it is for him.

Edit: some of his perspective on Socrates is in Twilight - Google: Nietzsche Socrates Buffoon.


I suppose it depends if you are trying to optimize for short-term wealth, or long-term staying power.

AI itself does not; collecting/processing data does. The valuation has many factors not necessarily related to the actual impact.

>Do you think that AI is having a big enough impact already to generate this valuation? Is it changing your life?

No, but in general I don't think anything justifies that valuation with $60 billion in annual revenue. Someone pointed out that Nvidia's market cap is now comparable to the entire German or Japanese stock market, and I'd rather have an economy the size of a few trillion. American stock prices look like Japan in the 80s, when the land value of the Imperial Palace was as high as California's economy.


Yeah, a decent summary of my thoughts too. The bulls running away with things online are starting not to listen, to the point that it concerns me that Nvidia is turning into a free money machine, and no good ever comes of that kind of thing long-term.

And as you point out the revenue to market cap seems like we’re getting silly. It’s even making me wonder whether Jensen and his brains trust are going to start massaging their own figures, because any less-than-stellar results next quarter could really kill the party. And if Nvidia really is turning into a bubble, it could pop the whole of the Nasdaq for a while.


> Do you think that AI is having a big enough impact already to generate this valuation? Is it changing your life?

Yes, but the thing is: it takes time to write the software. We had proof of concepts for some serious game-changing things written and working over a year ago, but organizationally it takes a long time to write and deploy the things and get them into production. Even if LLMs didn't get any better than they are right now, there are still years' worth of ground-breaking improvements we could roll out, but those will take time to write and deploy the software and reshape the business processes.


Running into this same problem; the tech is progressing WAY faster than the org can transform to use the tech. I'm betting we end up transforming by acquiring a company that completed the transformation before us, and letting them run the show. The downside is I see a lot more job destruction that route vs the slow route we're currently on.

> Running into this same problem, the tech is progressing WAY faster than the org can transform to use the tech

It has always been the case. As a society we are still very far from having harnessed the full power of 90s-era computers. Companies routinely pay hundreds of workers, and even executives, to do things that computers are better at doing. For instance: managing planning in big organizations.


Recently I was in the accounting office of my mom’s large senior-oriented NYC coop. Around a thousand apartments total.

It was out of an old movie. Folders and coffee cups absolutely everywhere. Walls of well-used filing cabinets. The director's office was comical; you had to look over the piles if you were sitting. 15-year-old computers were there, but second fiddle.

Honestly it was charming and made me nostalgic. It was also a reminder of exactly what you’re saying. Millions worth of NYC rentals run by a paper-driven org. The future is distributed unevenly indeed.


This, so much this. I feel that society as a whole is eking slowly into 80s-90s-level computing, but it is completely asymmetrical. Some parts of society are hyper-evolved, like social media and entertainment, but others are stuck in earlier phases like your example. In that case even 70s-level tech and theory would be an improvement.

CS and educated tech people in general are in a hyper-educated bubble where it is normal to talk about distributed systems and algorithmic complexity, but there are an awful lot of smart people without those backgrounds.

A lot of knowledge that's traditionally branded as "technical" is IMO generally useful, but overlooked. For example, basic queuing theory in and of itself is exceedingly useful. Nothing fancy, just the steady state, blocking probability, what chance of how many "clients", etc.
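
To make that concrete, here's a minimal sketch of the kind of "nothing fancy" calculation I mean; the arrival and service numbers are made up for illustration, and it's just the textbook Erlang-B formula rather than anything from a real system:

    def erlang_b(offered_load, servers):
        # Probability an arriving "client" finds all servers busy,
        # via the standard Erlang-B recursion (no waiting queue).
        b = 1.0
        for k in range(1, servers + 1):
            b = (offered_load * b) / (k + offered_load * b)
        return b

    # Hypothetical office: 20 requests/hour, each taking ~15 minutes
    # of staff time, i.e. an offered load of 5 Erlangs.
    load = 20 * 0.25
    for staff in (4, 6, 8):
        print(f"{staff} staff: ~{erlang_b(load, staff):.1%} of arrivals turned away")

Even that crude model tells you whether "we're overworked" means "hire two more people" or "we're fine, the peaks are just bursty".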

I witness very large (gov) organisations lacking any kind of in/out flow modeling, statistics, etc. No dashboards, no insights. Just going by feel. Everybody is "overworked", but if you dare to say it might be a good idea to start quantifying some of those parameters you'll be in a very silent, very awkward room and will never be asked anything ever again. I wish I was kidding.

AI? Let's start with previous-century-level process automation and professionalization first, and we'll see in a few decades if we're anywhere near ready for the next step.


> Do you think that AI is having a big enough impact already to generate this valuation?

Yes. The impact is hype and the valuation comes from other big tech companies like Google and Microsoft

> Is it changing your life?

Not really. I barely use it anymore except when there’s no internet (phi-3 was useful as an unreliable alternative to stack overflow while on a flight)

I think Nvidia is a solid company with an overblown valuation that will eventually stabilize lower but maintain its status as a major “tech” company.

> Would you rather own one Nvidia, or WalMart six times over

Walmart has many competitors and is generally easy to replace, especially when considering the international market. Nvidia is somewhat unique in the world, so yes, Nvidia should probably be valued more


Value, no. Hype, yes. This is the Mr. Market theory playing out in real-time.

Long-term, the value of AI (LLMs) as a productivity tool is massive, but if the delusional thinking around things like AGI continues (and the fear-mongering of Ai Is ToO DaNgErOuS), the major players will suffocate that potential and be humbled.


No, it's not changing my life, it's just impressive ML rebranded.

It hallucinates all the time... It's a toy. Impressive, but still a toy.


There will be a pullback after the initial irrational exuberance, and the big spenders on Nvidia cards will pull back on that spending as the expected returns are not realized (or not fully realized).

Variations of this story repeat over and over and over again throughout the decades of tech.

After the pullback, given some time, AI will realize significant real world value and the overall market will be far larger (15-20 years out). That process takes longer than the exuberance suggests early on.

Nvidia will pull back when that spending decline happens (yes, there will be a drop in their sales, they'll hit a wall and it'll go south for a while). The question is where the wall is. They're speeding at it like Cisco was during the hype of the dotcom boom (and Cisco was growing at a crazy rate quarter over quarter, accelerating, which is a particularly unsustainable event). Tons of AI customers will die off, to go along with the big spenders freezing and/or reducing their spending.


Yeah good thinking. Does seem like it’s turning into an analogous situation to Cisco, and I feel like we will actually need a quantum leap on the software side of AI (again) for the hardware sales to sustain the revenue growth.

The market cap is irrational and I submit as evidence Llama 3.

AI won’t need huge amounts of computing resources. It will need moderate amounts of computing resources.

The huge amount of computing resources seen today are the result of immature technology that uses resources inefficiently. As the technology improves, the capability-per-FLOP will increase and AI will move on-device.


AI has seen rapid efficiency improvements every year for the last decade, but the amount of computation used has gone up, not down. The improvements always go into making larger and more powerful models, not into just running the same models but at a lower cost per hour.

https://epochai.org/blog/algorithmic-progress-in-language-mo...


This ^, the models are getting larger and we don't appear to have scratched the surface on Video / Vision models yet. Energy consumption is accelerating alongside that at 28% YOY https://www.weforum.org/agenda/2024/04/how-to-manage-ais-ene....

While vision-vision models are certainly cool, I don’t think that they are as economically valuable as vision-speech or text-text. Humans don’t have vision output.

Computation may be increasing, but that is a statement about the short-term not the long-term.

If we want to predict the future then we care about: how many capabilities can you fit on a phone-sized computer? And I believe that the answer is: a lot.


But doesn't it feel like the larger models are seeing diminishing returns compared to the smaller open models?

Feels like they will eventually converge unless some big breakthrough happens that only benefits the large models.


Diffusion models still have plenty of room to grow (vision/video is orders of magnitude more expensive and larger) and we're only beginning to experiment with agent (AI -> AI Agent) workflow communication and automation.

Not at all. There's definitely a lot of hype about how "our 7B model is almost as good as GPT-4" etc, but upon close inspection, it turns out to be cherry picked meaningless numbers every single time.

At the same time, there's no evidence that even at the scale of GPT-4 they are seeing diminishing returns. I don't see why that would be the case, either, so long as there's more data to feed into the models. IMO the cliff will occur once the models become large enough that no amount of naturally sourced data is enough to saturate them, but, well, we aren't even at 1% of the Internet yet.


> I submit as evidence Llama 3.

Llama 3 likely cost Meta billions, and most of it went to Nvidia. What are you talking about? Whatever you think inference costs, it is tiny in comparison to training the model.


Pre-training happens once then it’s done. You can’t make a multi-trillion dollar business out of providing hardware for that. Inference is where the money is because it’s recurring.

No, while what you are saying sounds intuitive on one hand, OpenAI is building a $100B supercomputer for training new models. As long as people want to train better models than a year before (which seems to be a good assumption for the next few years), training will keep on giving good revenue to Nvidia.

Inference requires 1/3 the FLOPs of training per token, and current models are already trained on the entirety of the internet. You can generate 3 times the internet for one training run.
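
A rough back-of-envelope, using the common rule of thumb of ~6 FLOPs per parameter per training token versus ~2 per generated token (the model and dataset sizes below are made-up round numbers, not anyone's actual figures):

    N = 70e9    # hypothetical parameter count
    D = 15e12   # hypothetical number of training tokens

    train_flops = 6 * N * D            # one full training run
    infer_flops_per_token = 2 * N      # forward pass only

    tokens_for_same_budget = train_flops / infer_flops_per_token
    print(tokens_for_same_budget / D)  # -> 3.0, i.e. ~3x the training corpus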


Are you suggesting that there will be some future, more efficient architecture of machine learning that'll not be able to make use of extra compute at all (or at least have greatly diminishing returns while doing so)?

Otherwise, I don't see why it wouldn't make sense to continue grabbing as many GPUs as feasible for the foreseeable future, at least until LLMs and other GenAI models have been determined to have plateaued without much doubt.


Really? What hardware was llama3 trained on?

Now realize there are 15 other models you haven't heard of, trained on the same brand's hardware, that didn't work out / weren't impressive enough to release.

you are literally describing Jevons paradox in action - the more efficient and practical you make the tech, the more of it we will consume. And sure, inference is easy, but training is still uniformly done on nvidia hardware, and the “software” is an incredibly non-trivial moat. AMD has been trying for literally 15 years and they’re on their third or fourth ground-up attempt to displace it… the most recent (ROCm) being over 5 years old at this point and still utterly non-competitive. Leave it to HN to imply that millions of person-hours of ecosystem building can basically be replicated in a long weekend.


The valuation is driven by its huge margins and growth in revenue. It's an inevitable pump of sorts.

It can't and won't last. There's just too much margin for players not to compete or design their own chips. The only question for capitalizing is how to time your shorts.


Yep, ripe for a bonanza day of shorting in the not-too-distant future, it would seem.

I think this is a great question and one I keep mulling over. I've been evaluating paid AI products like Copilot to try to determine if there is enough general-purpose value there to justify its cost.

At $360 a year, the productivity benefit doesn't need to be huge to justify the cost; let's say you would probably want to see $500-1,000/yr in expense reduction.

I can say for certain that I have found roughly enough benefit myself in the past year for various tasks such as writing, presentations and the like. Nothing earth shattering mind you, but time has been saved. The question is, does this scale across an organization? Does everyone get leverage out of Copilot if you give it to them, or do only a fraction of advanced early-adopters? Also does the benefit really hit your bottom line, which may be harder to measure accurately.

Personally, I think it is a toss-up right now, but I suspect many businesses will place their bet anyway. For a 1,000-person organization $360k isn't nothing, but it's also not their biggest expense or even software expense. Likely a rounding error on revenue.
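
For what it's worth, the break-even math is tiny; here's a quick sketch where the $75/hr loaded cost is just a number I picked for illustration, not a real figure:

    seat_cost = 360          # $/year per seat
    loaded_hourly_cost = 75  # assumed fully-loaded cost of an employee hour
    breakeven_hours = seat_cost / loaded_hourly_cost
    print(f"Break-even at ~{breakeven_hours:.1f} hours saved per person per year")
    # ~4.8 hours/year, i.e. well under half an hour a month.

The hard part isn't hitting that number, it's measuring whether you actually did.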

Most businesses will roll out Copilot and/or similar tools broadly for at least a couple of years before starting to really question the ROI. It is pretty clear that the hype and FOMO on this is real. It would likely be better to be the executive who tried AI and found it wanting than to be the one that waited on the sidelines if it turns out to have a big impact.

The hype cycle is just getting warmed up on this.


Thanks for your answer. It’s useful to think about for sure. I’m just trying to ponder where we are at with it, and whether Nvidia is becoming overblown or whether we’ll need another quantum leap on the software side to get better AI or whether the current generation and approaches are going to keep giving bigger and bigger returns. As you point out, AI really is improving tooling, particularly in coding, and this is worth paying for.

>> At $360 a year, the productivity benefit doesn't need to be huge to justify the cost; let's say you would probably want to see $500-1,000/yr in expense reduction.

You also have to factor in the cost of AI's failure modes, though it might be difficult to put a number on it.


Yes so it would be more correct to say “net productivity gain”. At which point it’s not as obvious is it?

Right now AI is in the part of the hype cycle where we are in the first knee of the S curve and people are drawing straight lines from that to predict the future. I've seen so much ridiculous hype about AI becoming godlike super-geniuses before the decade is out that I have to wonder if any of these people have tried using AI for real work and not just flashy demos.

It's just like the hype about self driving vehicles. 5 years ago I had bought into the hype and thought that most long haul truck drivers would be out of work by now.

Funny that, isn’t it. I wonder if Tesla throwing $4 billion of Nvidia GPUs into a data centre will be enough to solve it “this time”.

Self-driving could easily become one of the pins that bursts Nvidia's bubble if we don’t see spectacular improvements.


News today (from Nvidia!) was that Tesla isn't getting them, Elon redirected them to his new AI company.

The new AI has the same problem as AI for self driving- you can't trust it.


It's also strange that he's doing it as part of a new company. I realise it's due to his ongoing squabble with Tesla over this $50 billion payout he still hasn't had... but can you even imagine the CEO of any other company doing that?

CEO: "Hey, so there's this thing about the product that I've been hyping as the killer widget it'll get, ohh for about ten years now. Well, we still didn't manage to solve the widget problem, so I'm going to start a new widget company that I own so I can sell us the widget."

Shareholders & Board: "You're doing what??!? That's the most illegal thing we've ever heard!"


> I've seen so much ridiculous hype about AI becoming godlike super-geniuses before the decade is out

I’ve actually seen literally none of that from proponents, like I’ve probably seen 100x as much strawmanning and uncharitable summary from opponents. Essentially nobody thinks we should hand the kingdom over to an unsupervised savior machine, like, can you show me a candidate from a major political party (doesn’t have to be US) who supports such a platform? Or a ceo who supports moving leadership decisions for his company over to an AI?

As far as I can tell, that whole thing is a strawman cut whole-cloth from the imaginations of detractors. I’d be delighted to hear counterexamples of the sort I discussed but I frankly don’t think they really exist outside some feverish imaginations.


This chart has been making the rounds recently as an example:

https://x.com/tomhfh/status/1798467961151443382


That chart is insane. Assuming that scaling up compute is all that's needed seems wildly incorrect to me.

This whole buzz does have shades of self-driving cars back in 2016.


> Assuming that scaling up compute is all that's needed seems wildly incorrect to me.

This is literally the strategy that people like Elon Musk are pursuing in their quest for AGI before the end of the decade. It is how he plans to have full self driving solved by August 8 so he can start selling Robotaxi accounts.


I mean, maybe? Certainly scaling up compute will make better text predictors, I am entirely unconvinced that that's all which is needed, but eventually I'll be wrong about the hype I guess.

It's valuable to me. I don't have to fish through search results and synthesize them any more.

On a fundamentals basis, the demand is there for the compute. NVDA will likely get somewhat overbought in this cycle, but at some point here it should follow the same trajectories of the big 7. We don't appear to be anywhere close to an oversupply issue (yet).

AI will help to totally crush all the CO2 emission targets of Google, MS etc.

By crush do you mean it will become sentient and sequester the carbon in the bodies of their meaty workers by killing them all?


> Do you think that AI is having a big enough impact already to generate this valuation?

Stock valuation is based on speculation. I don't think I understand the question.


> Question for HNers: Do you think that AI is having a big enough impact already to generate this valuation?

I think that's the wrong question: "impact" is necessary but not sufficient for "valuation", as the latter also requires you to not have much competition driving down your margin, and there's a lot of competition in AI software.

NVIDIA, being the metaphorical shovel-seller, can probably justify current valuations from current or near-term hardware sales… but I'm not sure the people spending the money to buy NVIDIA hardware to train their own models can justify those expenses, which means I'm expecting NVIDIA's sales to collapse when the "not invented here" bubble collapses and everyone becomes willing to use an existing off-the-shelf AI.

> Is it changing your life?

Yes:

1. LLMs by being the easy first step on side-projects I'd otherwise not have time to get into

2. Google Translate in general


Google Translate has been around for years, and runs on regular CPUs fine, so I personally wouldn’t classify it as current generation AI or predicated on Nvidia chips.

Definitely a killer app though. Same as Google maps is a non-AI killer app.


Google Translate was probably the first commercial product to utilize the Transformer architecture, and training it has probably racked up a huge bill to Nvidia, from way back in the day up to now.

I'm not sure if that is what OP meant, but one thing that the current crop of top tier LLMs is extremely good at is translation. GPT-4 is good enough for Babelfish-style real time conversation on complicated topics for many languages.

As someone who dabbles in creating fiction as a pastime, it's revolutionary. I can use AI to come up with details for minutiae that I don't have the time nor energy to conceptualize. There are just too many things in my head and not enough capability to realize them in a pre-AI world. The AI outputs are garbage as-is, but what they come up with already does 80% of the work. I can do the rest to make it completely fit my needs.

I would rather own one Nvidia.

The five most valuable companies in the world are, in order: Microsoft, Nvidia, Apple, Alphabet and Amazon. All very tied to software, or the hardware which software runs on. Walmart is worth less than Facebook, Broadcom and TSMC.

If the past few decades have taught us anything, it is that software is a very good business to be in. Microsoft is the most valuable company in the world for a reason.


I don't think we will ever achieve general AI with this. While current AI is impressive, it fundamentally relies on big data and algorithms. The main limitation of big data, which affects AI as well, is the quality of the data and the resulting outputs. Who is curating the data? Who is curating the outputs? It's not magic. Overcoming this barrier may be possible, but I believe it's an issue that isn't being adequately addressed.

> Question for HNers: Do you think that AI is having a big enough impact already to generate this valuation? Is it changing your life?

Imagine asking these questions in the early 90's when "the web" was just getting started. Or when bitcoin was $0.10.

These are the sorts of headlines people make fun of later on, “The internet is just a fad.”

People learned their lessons and are way more willing to make bets on the future. Right now, clearly this is AI.


Imagine asking these questions about the "metaverse" and cryptocurrencies just 2-3 years ago.

Some technologies are great, pioneering even, and full of potential, and still flop in the market.


Poor choice of examples.

Bitcoin at $71k is hardly a flop.

That term was coined by Neal Stephenson in 1992, during the early web. Virtual worlds are still extremely popular.


If you want to argue and pretend there weren't tons and tons of companies and technical products tied to both cryptocurrency and VR/AR lately that absolutely flopped despite being technically unique and innovative, by cherry-picking things that would have been sound investments back then and ignoring the myriad ways you'd have lost money, or the fact that Bitcoin has less usefulness as a technology now than when it was hyped, then fine.

Are you really trying to argue that all companies in a space need to succeed, for the space to be successful?

As for bitcoin itself, people and companies have hyped it for their own benefit, but bitcoin itself isn't a company, and therefore isn't hyped. It is one technology that lives on its own and survives primarily because of the will of the people that choose to participate in it.


No I'm not arguing that. It should be obvious

Correct, so it's impossible to determine the winners and losers in AI.

But no matter what all the companies will do with AI, they will all need AI computing. So it's not so difficult to predict that Nvidia will be a clear winner. Nvidia has had competition for 8 years in the AI space. Their market share today is more dominant than it ever used to be.

Probably, Nvidia has already won because of the 5 million CUDA developers who will hardly switch to other platforms. It's basically the Windows SDK and Apple/Android SDK reloaded.


> But no matter what all the companies will do with AI, they will all need AI computing.

They may not. Like I said before, there have been many very innovative technologies that did not see uptake by the general public as was expected (VR, cryptocurrency).

AI could be the next cloud computing that every company pays for but which eventually commoditized down to a relatively low margin business. It could be something Nvidia strangleholds and milks money out of for decades with little competition like Facebook with social media, or it could be something people don't reach for as much as we thought they would. We don't know yet


I know what you’re saying but I thought it worthwhile restating the question now that Nvidia is at $3trillion, because I really do care about the answers. I am tech heavy and have been for the past 25 years, and consider myself reasonably well informed… but it seems like AI really isn’t impacting life to the point where I am certain Nvidia is headed for a $10trillion valuation or a $1trillion one in 3 years time.

> it seems like AI really isn’t impacting life to the point where I am certain Nvidia is headed for a $10trillion valuation or a $1trillion one in 3 years time.

How much is the internet worth today?


> Is it changing your life?

According to the mainstream media, AI is supposed to replace me / make me obsolete within 5 years.


I invested in Nvidia 8 years ago. So if AI becomes huge and replaces me, then I'll simply retire on my shares; otherwise I'll still have a job. Investing in Nvidia is called hedging your job :)

I think everyone knows that Apple and MS provide a lot more value to their lives and the companies they work for than NVIDIA does. The problem is every tech company and countless startups have promised a very near term revolution to life on earth as we know it, due to AI. Now every single one of those companies is thinking the following:

1. A competitor might actually make the visionary future a reality, and so, falling behind is a corporate existential crisis

2. They’re nowhere close to making their vision a reality. I think companies knowingly continue to sell a healthy dose of BS to their customers in terms of AI capabilities, but I also think they’ve fooled themselves with their own arrogance. For example, Google learning in realtime with the rest of the world that their AI generated search results are sometimes outputting nonsense.

This is why I value NVIDIA over all software companies. I’m confident the latter group has hyped everything so much that most will eventually fail to meet expectations. NVIDIA just gets to sit back and watch it all play out.

I do think they’ll eventually deserve a $3T valuation some years down the line as models continue to improve and some industries truly get revolutionized.


It's ultimately a question of what I prefer vs what I expect to happen.

I would prefer that companies providing utility be rewarded. I expect, however, that companies enabling administrative control (via AI, software, monopolization, surveillance, market capture) will actually be rewarded.

So in a better world Walmart should be a higher value, since they are feeding and supplying the entire country with food, tools, consumer goods, luxuries, furniture, etc.

But in reality Nvidia will be rewarded for enabling more administrative control.


That's about the GDP of France, I can't see how it makes sense.

Why do people compare market cap to a 1 year GDP?


This is actually an interesting way to look at it... Can a whole country produce a company like Nvidia in one year? Or how much it will cost to create Nvidia today from scratch, and how long it would take?

Because it makes an exciting sounding headline that means nothing practically speaking.

Because they're both measured in terms of money, and humans tend to ignore time-derivatives. (The latter point is also why so many mix up energy and power).

Yeah but it’s a useful metric - if everyone in France dedicated all their productive output for this entire year, they could swap that output for a company of a few thousand people that makes microchips.

Nvidia is worth more than Apple now ( https://companiesmarketcap.com ).

It's the second most valuable company in the world behind Microsoft, which is worth about 5% more. I would guess a lot of Microsoft's Azure capex is spent on Nvidia equipment. Windows and Intel? Azure and Nvidia!


Can't wait to short this stock!

Good luck with that.

It can't go on forever.

It can go on longer than many of us can afford to keep shorting it.

Yeah but the growth happens over months - the shorting will be done in a couple of intense days.

Good luck timing it.

Shorting because it's dropping and then covering when it's going up is how you end up losing money. By the time you see the short opportunity, or the opportunity to cover, it's too late.

It's just like buying because something is going up and selling when it starts going down. It's a recipe for buying high and selling low.


Sure is. Nvidia buyers right now couldn’t be buying any higher though, and they’re still buying. I get a feeling the shorting opportunities will be apparent when they present themselves. I’m not looking to put my house on shorting it down any time soon though, but I’ll be surprised if there aren't some big corrections and market pain in the next 12 months the way this is going. And if not 12 months, then the results day when Jensen doesn’t report a beat-the-street record quarter is going to be a bad day for the price, whenever that day comes.

This is what people have said about TSLA for like...a decade? And there have been only maybe 3 or 4 good times to short the stock, all obvious in retrospect.

Yeah but which direction is the Tesla price heading at the moment now the hype has died off?

I suspect TSLA is about to have another run here in the next year or so.

If they genuinely solve self-driving, then it seems like it. Tesla three years ago didn't have any competition and now they do, so big differentiation is needed, and FSD would be it.

Based on my experience with it a couple months ago when they gave everyone a 1-month free trial of it, Tesla is close in that it handles 95% of situations perfectly. But it's still incredibly far away because that last 5% is going to take many more years.

There were some parts that genuinely surprised me in how good it was. For example, I drove through a construction zone where there were cones on the ground diverting cars out of the painted lines on the road, and FSD followed the cones and drove over the lines without complaint. It handled my neighborhood road that is kind of narrow and has cars parked on both curbs while going around turns flawlessly.

That said, when making turns without a traffic light, it's SUPER cautious, enough to absolutely infuriate anybody driving behind you. And it STILL struggles sometimes when one lane turns into two. It will ride the middle until it's basically driving on top of the line, then SWERVE into one of the lanes.


I didn't experience any major safety concerns when using the free self driving trial, but I ended up only using it on completely empty roads because I was too embarrassed about how cautious it drove.

Groq's chips are 10x faster and 100% US-sourced. Nvidia has existential threats afoot.

I don't see our compute appetite going anywhere generally. But 90% of that will be for AI compute. Nvidia without that is, well... rekd.

(I'm an ignorant fool just reflecting what I've seen. If someone has better perspective, please enlighten me.)


Groq no longer sells chips.

Compute appetite is infinite.


Can't wait for the $2000 MSRP RTX 5080.

The GTX 1080 (one of the greatest GPUs of all time) was $599 in 2016 ($782 today)

The RTX 4080 was $1199 in 2022 ($1284 today)


But the 4080 Super which is faster than the 4080 is $999...

Which was released this year

Sure, you can always wait forever; there will be a 5080 Super one day too.


The only real impact that I have noticed from AI is that it's been enshittifying things on the Internet more and more. And it seems like it's going to get into things not on the Internet more and more too.

Nvidia's value is in its software. CUDA is where the money is and why their chips sell. On paper AMD chips are just as good, sometimes better, but AMD/Radeon software and drivers are awful and have been awful for well over 20 years. That is probably moated for a few years still. I do think the stock is grossly overvalued, but that doesn't always matter, it can still go up. I would be surprised for it to surpass Microsoft and then go on a breather.

The reason we still use Windows so much on PCs today is that MS was first with an SDK for PC applications. Simple as that. It's the same reason why we have no Windows on mobile phones: MS was too late, and Apple/Google had already won the market with their SDKs. It's not like MS can't do SW, but it's hard to persuade developers to develop for your platform if your platform has <10% share compared to competitors.

But this is exactly what people are expecting AMD to make happen. AMD is a HW company. You can clearly see it in their roadmaps, in their mindset and in their communication. If you look at Nvidia and their publications, it's 80% about SW solutions and only 20% HW. Even GTC 2024, where Blackwell was released, was >50% SW solutions presentation.

However, AI competition will be won with SW, not with HW. Enterprises need consultancy, guidance and help in properly training their data and putting it to use. Buying HW is pointless for them because they have no clue about AI. They first have to learn so they need consulting. This is a job for SW companies, not for HW companies. And interestingly, Nvidia is right there at the front and has not only direct engagements but partnerships with all large IT consulting companies.

I wonder, which HW would a company deploying Nvidia Enterprise AI SW suite use? AMD maybe?


At COMPUTEX, Nvidia's CEO also mentioned that companies and countries are partnering with NVIDIA to shift trillion-dollar traditional data centers to accelerated computing and build a new type of data center, "AI factories," to produce a new commodity: artificial intelligence.

> Referring to: https://news.ycombinator.com/item?id=40593858

