
So true. I've always thought that fully autonomous driving is a slippery slope to AGI. Imagine Google having to collect an INFINITE amount of driving data (the perfect manifold) before they realize they've sunk billions into an intractable problem?!

Problems in AI can very easily turn into a black hole of money and data. I hope the large number of AI startups appreciate this.




The reality is that life is full of edge cases. For fully autonomous driving we probably need full AGI.

This article is kind of a wrapper for the linked Bloomberg article, which is more interesting IMO.

Driving is a massive cooperative game with high stakes, and autonomous cars essentially need AGI to play it with other humans more safely than a human does. Fully autonomous cars would be sick, but IMO you'd need massive infrastructure changes (realistically restricted to cities/urbanized areas) if you want autonomous cars to work with anything less than AGI. Until companies start pursuing that, they are unknowingly spending all that money pushing toward AGI, and unsurprisingly coming up short, because they don't even understand what they are trying to do.


My gut feeling is that we'll solve AGI before we solve the niche case of fully autonomous vehicles.

AGI is a generalization and superset of self-driving cars, and has a far wider impact and a larger set of people researching it.

The two problems seem equivalently hard in that fully autonomous vehicles must be five nines (?) reliably safe. That's a ridiculously hard problem.
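
To put rough numbers on the five-nines framing, here's a back-of-the-envelope sketch in Python. Every figure in it (the human fatality baseline, the trip length, reading "failure" per trip) is an assumption chosen for illustration, not sourced data:

    # Back-of-the-envelope sketch of what "five nines" would mean in
    # driving terms. Every figure is an assumption for illustration,
    # not sourced data.

    human_fatal_rate_per_mile = 1 / 100_000_000  # assumed ballpark: ~1 fatal
                                                 # crash per 100M vehicle miles
    five_nines_failure_rate = 1 - 0.99999        # 99.999% success ~= 1e-5 failures
    trip_miles = 10                              # assumed average trip length

    # Reading "five nines" as per-trip reliability, the per-mile budget is:
    av_failure_budget_per_mile = five_nines_failure_rate / trip_miles

    print(f"human fatality baseline: {human_fatal_rate_per_mile:.1e} per mile")
    print(f"five-nines trip budget:  {av_failure_budget_per_mile:.1e} per mile")
    # The human fatality baseline (1e-8/mile) is already two orders of
    # magnitude below a five-nines-per-trip budget (1e-6/mile): matching
    # humans on worst-case outcomes demands far more than five nines.

Read either way, the bar is brutal: humans are already far more reliable on the worst outcomes than a naive five-nines budget would allow.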


Driving on roads with human drivers, bikers, pedestrians, animals, freak weather events, maintenance and construction workers, and all manner of situations is - in reality - a problem of unbounded complexity. Unless we create infrastructure that radically reduces the complexity, like invisible “tracks” and banning as many variables as possible (other humans) then nothing short of AGI will get us close to the current below-average human driver.

Self-driving cars are Silicon Valley hubris and hype at their finest. I genuinely feel proud of the computer scientists who have wrangled millions of dollars from investors; truly excellent work, I wish all engineers could get that kind of money. Hopefully the billions keep flowing.


It's becoming very clear that we'll have to solve general purpose AI to solve self driving completely.

Yeah, that's the problem. They're never gonna get there at that rate.

>In the view of Levandowski and many of the brightest minds in AI, the underlying technology isn’t just a few years’ worth of refinements away from a resolution. Autonomous driving, they say, needs a fundamental breakthrough that allows computers to quickly use humanlike intuition rather than learning solely by rote. That is to say, Google engineers might spend the rest of their lives puttering around San Francisco and Phoenix without showing that their technology is safer than driving the old-fashioned way.

The view that something fundamental is missing has been prevalent amongst Google's best self-driving engineers since sometime in 2015. However, at that time they still entertained optimism that the AI field was moving fast enough that the fundamental missing thing could be discovered.

When ex-Google self-driving project director Chris Urmson, Drew Bagnell, and Sterling Anderson founded Aurora in 2017, they touted the mantra that "you can't build a ladder to the moon."

Meaning that, while the self-driving industry can make continual, incremental progress with the technology it has, it will not achieve its objective within a reasonable time frame without a fundamental change.


I don't think that you can do autonomous driving without a pile of cash. The problem is too complex.

It's amazing that after all the billions they've thrown into autonomous driving, it still isn't a solved problem.

Must be an extremely hard problem.


That's hard for humans too. I think we need to give up on the idea that fully autonomous driving will be perfect.

Yeah it's been really odd to see the take that self-driving must require strong AI. It needs to be done carefully, but it's clearly a manageable engineering problem if you have good sensors.

Self-driving, I think, is where we got it backwards. To drive correctly we need AGI, because 'uncommon' (but not 'rare') events are avoided by drivers all the time, and in a manner that requires understanding how the world works and how people think.

LLMs, especially the multi-modal ones, go much further toward making self-driving possible, that is, if we ever get to the point where they can incorporate and act on data fast enough. We may still have a hardware problem there for some time.
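
To make the "act on data fast enough" point concrete, here's a quick sketch of how far a car travels during one inference pass. The latency figures are assumptions for illustration, not measurements of any real system:

    # Sketch of the latency problem: how far does a car travel while a
    # model is still computing? The latencies are assumptions for
    # illustration, not measurements of any real system.

    def meters_travelled(speed_mph: float, latency_ms: float) -> float:
        """Distance covered (in meters) during one inference pass."""
        speed_mps = speed_mph * 1609.34 / 3600  # mph -> meters per second
        return speed_mps * latency_ms / 1000

    # ~10 ms: classic perception stack; ~1000 ms: large multi-modal model.
    for latency_ms in (10, 100, 1000):
        d = meters_travelled(speed_mph=70, latency_ms=latency_ms)
        print(f"{latency_ms:>5} ms at 70 mph -> {d:6.2f} m travelled blind")

At around a second of end-to-end latency, which doesn't seem crazy for today's large multi-modal models, the car covers roughly 31 m before the output even exists.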

---

But on your second point I am with you. There is nothing obvious about the level at which emergent behavior occurs in these models. Just keep adding parameters and new things keep popping out, with no real means to determine where and when; that seems kinda sketchy as we push these things past the exa scale.


Lex Fridman's team at MIT seems to think that fully autonomous driving is a long way off: there are just too many edge cases to deal with.

https://arxiv.org/abs/1711.06976

From the abstract: "Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving".

Now there's room to be optimistic or pessimistic about how much an AI can help, but the academic consensus is that level five will need many more research breakthroughs. There are some good video lectures that give an overview of the field (but not that much depth) here:

https://selfdrivingcars.mit.edu/


Right, so you're saying that actual reality is too hard for self-driving systems to deal with, so we should change reality in order to coddle the self-driving cars so that they can handle it.

...which reinforces the point the grandparent post and I were making: you can't solve self-driving without AGI.


I'm wondering how this is going to be different from self-driving, where we can get 90% of the way there, but the last 10% is notoriously difficult, with not nearly enough edge cases represented in the data.
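
A toy simulation of that last-10% intuition: draw driving "scenarios" from a heavy-tailed (Zipf-like) distribution and check how many distinct scenarios ever show up in the logs. The scenario count and the distribution itself are assumptions for illustration only:

    import random

    random.seed(0)
    NUM_SCENARIOS = 100_000  # hypothetical universe of distinct driving situations

    # Zipf-style weights: scenario k occurs with probability proportional to 1/k.
    weights = [1 / k for k in range(1, NUM_SCENARIOS + 1)]

    for n_samples in (1_000, 10_000, 100_000):
        seen = set(random.choices(range(NUM_SCENARIOS), weights=weights, k=n_samples))
        print(f"{n_samples:>7,} logged drives -> "
              f"{len(seen) / NUM_SCENARIOS:5.1%} of scenarios observed at least once")

Even with as many logged drives as there are scenario types, most of the tail has never appeared in the data even once.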

Really? You think fully autonomous driving that's better than human drivers is not AGI? Perhaps time will tell if your opinion is right.

The thing is, I see no way to have full self driving without AGI. And I don't think humanity is anywhere close to developing AGI. Without AGI you can have level 4++ self driving, maybe, but not level 5.

While I agree that AI in current forms can be very useful, I believe that the problem of e.g. driverless taxis requires understanding of other humans and empathizing with their intentions to be truly viable. Driving is a social activity, and the current self-driving tech is about as convincing as trying to carry on a conversation with Alexa at a party. I do believe that we need AGI before self-driving will be more than a better cruise control.

I think the better cruise control is very useful and I love to see it, but Tesla’s marketing of it as “full self-driving” is disingenuous at best, and industry-chilling + deadly (as we’ve seen) at worst.


You’re contradicting yourself. You say there are fully autonomous vehicles, but they aren’t capable of autonomously driving themselves fully, i.e., they aren’t fully autonomous. So there aren’t any fully autonomous vehicles.

Ok so now you’re saying that AGI is impossible. That’s a completely different argument. And it’s wrong to reject regulation based on that argument because it can’t be proven. So you can’t reject regulation that way and you can’t reject it with your idea about humans having intrinsic value. So you are forced to support regulation of AI.


Yeah, the infinitely long tail of exceptional circumstances means that automated driving is an AGI-complete problem. These exceptions are individually rare, but once there are many driverless vehicles on the road they’ll be encountered far more often. Vehicles will need to learn and problem-solve to keep up.
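
The "rare becomes common at fleet scale" point is one line of arithmetic. All the figures below are assumptions picked for illustration:

    # Every figure below is an assumption chosen for illustration.
    per_mile_event_rate = 1 / 10_000_000  # assumed: a 1-in-10-million-miles oddity
    fleet_size = 1_000_000                # assumed: one million driverless cars
    miles_per_car_per_day = 30            # assumed daily mileage per car

    fleet_miles_per_day = fleet_size * miles_per_car_per_day
    events_per_day = fleet_miles_per_day * per_mile_event_rate

    print(f"fleet miles per day: {fleet_miles_per_day:,}")
    print(f"expected 1-in-10M-mile events per day: {events_per_day:.0f}")
    # 30,000,000 miles/day x 1e-7 events/mile = 3 such events, every single day.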
