
I'd also add that what Crysis did was pretty typical at the time. It was an era when new computers were a bit dated in a few months, and then obsolete in a couple of years. Carmack/ID Software/Doom was a more typical example of this, as they did it repeatedly and regularly, frequently in collaboration with the hardware companies of the time. But there was near zero uncertainty. There was a clear path to the goal down to the point of exact expected specs.

With LLMs there's not only no clear path to the goal, but there's every reason to think that such a path may not exist. In literally every domain neural networks have been applied to, you eventually hit asymptotic, diminishing returns. Truly full self-driving vehicles are just the latest example. They're just as far away now as they were years ago. If anything they now seem much further away, because years ago many of us expected the exponential progress to continue, meaning that full self-driving was just around the corner. We now have the wisdom to understand that, at the minimum, that's one rather elusive corner.




A lot of people seem to take the rapid improvement of LLMs from GPT-2 through GPT-4 and their brethren, and extrapolate that trendline to infinity.

But that's not logically sound.

The advances that have allowed this aren't arbitrarily scalable. Sure, we may see some more advances in AI tech that take us a few more jumps forward—but that doesn't imply that we will keep advancing at this pace until we hit AGI/superintelligence/the singularity/whatever.
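A toy way to make that concrete (everything here is made up for illustration; the logistic curve and its parameters are just stand-ins for "capability over time"): fit a pure exponential to the early part of an S-curve and watch the extrapolation run away from reality.

    import numpy as np

    def logistic(t, k=1.0, t0=6.0, L=100.0):
        # The "true" capability curve in this toy example: an S-curve that
        # saturates at L instead of growing forever.
        return L / (1.0 + np.exp(-k * (t - t0)))

    t_seen = np.arange(0.0, 5.0)   # the years we have actually observed
    y_seen = logistic(t_seen)      # early S-curve data looks exponential

    # Fit a pure exponential y = a * exp(b * t) to the observed points
    # (a linear fit in log space).
    b, log_a = np.polyfit(t_seen, np.log(y_seen), 1)
    a = np.exp(log_a)

    print("year  extrapolated  actual")
    for t in range(5, 12):
        print(f"{t:4d}  {a * np.exp(b * t):11.1f}  {logistic(t):6.1f}")

The fit looks great on the observed years and then diverges wildly once the real curve flattens out, which is exactly the failure mode of drawing a straight line through GPT-2-to-GPT-4 and carrying it on forever.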

I've seen several people compare this logic to what we were seeing in the discussions about self-driving technology several years ago: some very impressive advancements had happened, and were continuing to happen, and so people extrapolated from there to assume that full self-driving capability would be coming to the market by...well, about now, actually. (I admit, I somewhat bought the hype at that time. It is possible this makes me feel more cautious now; YMMV.) I find this comparison to be persuasive, as it touches on some very similar improvements in technology. I believe that we will see ML advancements hit a similar wall fairly soon.


LLMs and image AIs are the opposite of self-driving cars. "Everybody" has had concrete expectations for at least half a decade now that the moment when self-driving cars would surpass human ability was imminent, yet the tech hasn't lived up to it (yet). Meanwhile, practically nobody expected AI to be able to do the jobs of artists, programmers, or poets at anywhere near human level anytime soon, yet here we are.

I have no idea who this guy is, and have no thoughts here on his opinions or their falsifiability (happy to agree with you on that aspect if it matters), but just to reply to this particular point here:

> This is well beyond what was considered possible at the time. We're certainly not at 'everything' but shrug. Maybe in ten years.

This feels a lot like what people were saying about self-driving cars being imminent circa... 2015 or so? [1] The skeptical folks rolled their eyes at suggestions that we'd have self driving cars everywhere in a few years, but lo and behold, they were right. Just because we had a massive amount of progress in the years before, that doesn't mean we were on the cusp of achieving the goal. Turns out going from 99.9% accurate to 99.99% accurate (or whatever the numbers were) is harder than all the believers wanted to admit.
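A rough bit of arithmetic shows why that last nine is the whole game (the 10,000 decisions-per-trip figure is an assumed order of magnitude, not a real measurement): per-decision reliability compounds over every decision in a trip.

    # Illustrative numbers only: how per-decision reliability compounds
    # over an assumed 10,000 safety-relevant decisions in one trip.
    decisions_per_trip = 10_000

    for reliability in (0.999, 0.9999, 0.99999):
        p_clean_trip = reliability ** decisions_per_trip
        print(f"per-decision {reliability}: "
              f"chance of an incident-free trip = {p_clean_trip:.3g}")

At 99.9% per decision you essentially never finish a trip cleanly; at 99.999% you usually do. Each extra nine is a 10x cut in error rate, and each one is harder to win than the last.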

This feels like the same thing all over again. Yes, there's been a lot of progress in LLMs. No, that doesn't mean we're anywhere close to deep learning being able to do 'everything' in 10 years, whatever that means.

[1] https://www.theverge.com/24065447/self-driving-car-autonomou...


I suspect a part of the problem is that they underestimated the amount of compute required.

The vehicles they’ve already shipped have about as much tensor compute as an NVIDIA V100, and much less than that for older model cars.

The Bitter Lesson is that self driving AIs will probably need large scale more than anything. If we look at the capabilities of something like GPT-4V, it’s a good sign of what’s needed as the baseline minimum.

By extrapolating industry trends, I suspect that a future GPT-6 or GPT-7 level AI with true video input could be specialised to perform real-time driving with human-level ability to parse complex situations such as construction sites or accident scenes.

That level of AI inference would require at least an order of magnitude more compute than what’s in any shipping Tesla, maybe more.
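Purely as back-of-envelope arithmetic on that claim (the throughput figures are public ballpark numbers, the units are not strictly comparable, and the 10x multiplier is just the order-of-magnitude assumption above, not a measurement):

    # Ballpark sanity check only; FP16 TFLOPS and INT8 TOPS are not the
    # same unit, but close enough for an order-of-magnitude comparison.
    v100_fp16_tflops = 125   # ~NVIDIA V100 tensor-core throughput
    hw3_int8_tops = 144      # ~Tesla FSD HW3, both NPUs combined

    needed_multiplier = 10   # "at least an order of magnitude more"
    needed_tops = needed_multiplier * hw3_int8_tops

    print(f"reference V100       : ~{v100_fp16_tflops} TFLOPS (FP16 tensor)")
    print(f"shipped car hardware : ~{hw3_int8_tops} TOPS (INT8)")
    print(f"assumed requirement  : ~{needed_tops} TOPS, i.e. "
          f"{needed_tops // hw3_int8_tops}x what's in the car today")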

Elon promised something that will be possible, but not with the hardware of the day.

When Elon made his comments about “every Tesla shipping now has FSD hardware” he may have just been honestly optimistic. This was before GPT-2, let alone the current state of the art. I can forgive him for that, but not for doubling down on statements that are clearly no longer valid.


At the risk of showing just how ignorant I am of current state-of-the-art AI, it feels like the biggest block with self-driving cars is how short their memory is and how little high-level logic is implemented. The computer doesn't feel like it's driving the car as an activity so much as simply responding to inputs as they come into sensor range, sometimes correlated, frequently not.

The point is to try and see whether LLMs' wide general knowledge can be an advantage in something like sensory-data-plus-action learning as well. Current self-driving models don't have that.

I'm not talking about self driving at all. I took your earlier comment to mean that you thought the future of avionics control and other classical embedded problems would be using DNNs. Maybe that's not what you meant.

And that seems to be the limit of ML. We might eke out self-driving cars, but I don't think we will get much more than that. It is pretty significant, but still limited compared to general-purpose AI.

It’s also impossible to imagine correctly. Back in 2009, I was completely convinced that we’d be able to go into a car dealership and buy a brand new vehicle which had no steering wheel because the self-driving AI was just that good, within 10 years. Seemed reasonable at the time on the basis of the DARPA Grand Challenge results, but even 13 years later, it didn’t happen.

This is how I feel about AGI too, and I also include self-driving cars. I don't think those are just around the corner either.

In general I don't think our current approach to AI is all that clever. It brute forces algorithms which no human has any comprehension of or ability to modify. All a human can do is modify the input data set and hope a better algorithm (which they also don't understand) arises from the neural network.

It's like a very permissive compiler which produces a binary full of runtime errors. You have to find bugs at runtime and fiddle with the input until the runtime error goes away. Was it a bug in your input? Or a bug in the compiler? Who knows. Change whichever you think of first. It's barely science and it's barely a debug workflow.

What pushed me all the way over the edge was when adversarial techniques started to be applied to self-driving cars. That white paper made them look like death machines. This entire development process I am criticising assumes we get to live in the happy path, and we're not. The same dark forces that infosec can barely keep at bay on the internet, and has completely failed to stop on IoT, will now be able to target your car as well.
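For context on what those adversarial techniques look like, here is a minimal FGSM-style sketch against a toy linear classifier (the model, the data, and the "stop sign" framing are entirely made up; attacks on real perception stacks are far more involved, but the principle is the same):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=100)                    # toy linear classifier weights
    x = 0.1 * w + 0.05 * rng.normal(size=100)   # an input it gets right

    def score(v):
        return float(w @ v)                     # > 0 means "stop sign detected"

    # For a linear model the gradient with respect to the input is just w,
    # so nudging every feature a little against the gradient's sign is
    # enough to flip the decision while barely changing the input.
    epsilon = 0.2
    x_adv = x - epsilon * np.sign(w)

    print("clean score      :", score(x))       # comfortably positive
    print("adversarial score:", score(x_adv))   # pushed negative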

Worst thing is all our otherwise brilliant humans like Carmack are gonna be the guinea pigs in the cars as they head off toward their next runtime crash.


AI or ML? I think AI is when it starts reasoning, like HAL 9000; we are far away from that. That's why cars cannot self-drive yet.

Typically, from what I've seen, computers go through a long period of being hopeless at a task, then move from slightly subhuman to par to superhuman very quickly. It seems to me that, as far as algorithms and Hz are concerned, human intelligence lives in a very narrow window, and once computers get close to it they tend to jump over it.

I'd judge self driving to be slightly subhuman right now - there are definitely worse drivers on the road (typically impaired - drunk, near-blind or high). I'd expect superhuman performance this decade just based on that and the rate of improvement in anything AI related right now.


I've been watching Lex Fridman's youtube podcast and there is a recent interview with Jim Keller [1]. Keller is a chip designer famous for his involvement in multiple chips at Intel, AMD, and Apple, and he was a co-author of the x86-64 instruction set. He also worked for Tesla.

There is a point in the conversation where Lex and Jim clearly disagree about how "easy" self-driving AI should be. Lex is clearly pessimistic and Jim is clearly optimistic. I have to admit I was more swayed by Lex's points than by Jim's, but it is hard to discount someone so clearly (extraordinarily) expert and working directly in the field.

1. https://www.youtube.com/watch?v=Nb2tebYAaOA


And this is why I'm much less pessimistic than most about robotaxis.

Waymo has a working robotaxi in a limited area, and they got there with a fleet of 600 cars and mere millions of miles of driving data.

Now imagine they trained on 100x the cars, i.e. 60k cars, and billions of miles of driving data.

Guess what, Tesla already has FSD running, under human supervision, in 60k cars and that fleet is driving billions of miles.

They are collecting 100x the data as I write this.

We also continue to significantly improve hardware for both NN inference (Nvidia Drive, Tesla FSD chip) and training (Nvidia GPUs, Tesla Dojo, Google TPU and 26 other startups working on AI Hardware https://www.ai-startups.org/top/hardware/)

If the bitter lesson extends to the problem of self-driving, we're doing everything right to solve it.

It's just a matter of time to collect enough training data, have enough compute to train the neural network and enough compute to run the network in the car.
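One way to make "a matter of time" concrete is a toy learning-curve calculation (the power-law form and the exponent are assumptions, not anything measured on a real fleet): if error falls as a power of the miles collected, 100x the data buys a predictable but diminishing improvement.

    # Toy power-law learning curve; alpha is an assumed exponent and the
    # absolute numbers mean nothing on their own.
    alpha = 0.5

    def relative_error(miles, base_miles=1e6):
        return (miles / base_miles) ** (-alpha)

    for miles in (1e6, 1e8, 1e9):
        print(f"{miles:>14,.0f} miles -> relative error {relative_error(miles):.3f}")

Under that assumption, 100x the miles cuts the error rate by about 10x, which is exactly the kind of bet the comment above describes; whether the exponent stays favourable all the way down to human-level error rates is the open question.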


The distance between "sort of works" and "works" for AI is considerable. Not infinite.

Look at self-driving cars. The first tries were in the late 1950s, with GM's Firebird III, guided by wires in the road. By the 1980s, the first self-driving vehicles were moving around CMU, very slowly. By the early 1990s, experimental highway driving had been demoed. In the early 2000s, we had the DARPA Grand Challenge, which got off-road driving on empty roads working. Then there were a few experimental self-driving cars that sort of worked on general roads. Many startups, most went bust.

Today you can take a driverless cab in San Francisco. 64 years since GM's Firebird III. (Which still exists, in driveable condition, in GM's in-house collection.)

It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.


That's the feeling I got from the video too. Maybe he tried too hard to make it appear as 'this is not as hard as the big corps say it is!', but it also felt like 'hey, ML + basic CAN controls = self-driving!', and there I disagree. I want a computer with some general knowledge of physics plus ML, not just abstracted driver patterns learned from self-play.

Pretty low. The recent boom in rapid AI advancement has been in LLMs, which don't seem directly applicable to autonomous vehicles.

This focus on the hardware is silly. Assume for a second that their new hardware is 50x faster than their last hardware.

That does not mean that their cars can self drive today.

That does not mean that their cars can self drive three years from now.

It's 100% not proven or obvious how self-driving skill and error rates scale with compute, but it's surely not linear.


Or fusion power. Or natural language conversations with computers (as opposed to largely rote voice recognition).

Deep learning/machine learning have made remarkable advances in recent years--primarily because of both computational (esp. GPU) and storage/data advances.

However, in spite of a lot of money and talent expended on understanding organic brains and human-level cognition over the decades, progress has been slow and there's a general belief among scientists who work in AI spanning CS and neuroscience that there are aspects to human learning and reasoning that we just don't really understand yet.

And that more deep learning, data, and programmed rules won't get you to autonomous vehicles outside of some limited domains. (Which is valuable by itself; it just doesn't get you to robotaxis.)

