
Maybe I'm ignorant (very likely) but I've never understood the difference between full autonomous driving and a fully intelligent AI.

If a car can drive in any environment and react to any circumstances surely said AI could also do anything else, no?

EDIT: Thanks for the responses everyone. Though, to clarify, when I think of "full autonomous" driving, I'm thinking of a car that can go from A to B regardless of the context. Meaning, if some of the route is off-road it'll handle that; if there's traffic, that'll be handled. If there's something wrong with the car itself, it'll be introspective and call for assistance, sending its location, etc. Not just following marks on a road and pulling over and giving up if it can't get there.

Also, I do not think a "fully intelligent AI" can necessarily solve any problem, but it is capable of learning such that it could. For the purposes of this discussion I'd equate it to maybe a 5-year-old.




Driving is just a tiny subset of what a general AI could do. In the grand scheme of things with which an AI could occupy itself, it's an absolutely trivial undertaking.

Nope. Because they're not building something that has decision making or consciousness. What they're building is a very complicated and nuanced math equation that reads along the lines of: when a camera sees a line of X whiteness at Y distance, and we are travelling at speed Z, in this geographic region, turn the wheel left E percent. What these companies are doing is trying to calibrate such an equation so well that it can successfully navigate roads on its own.
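
To make that concrete, here's a toy sketch of what such an "equation" might look like. Every gain, scale factor, and parameter name below is made up purely for illustration; it isn't anything a real self-driving stack actually uses:

    # Toy sketch of steering as a calibrated function of what the camera sees.
    # All gains and names are invented for illustration only.

    def steering_command(lane_offset_m, speed_mps, k_offset=0.8, k_speed=0.02):
        """Return a steering angle (degrees) from a perceived lane offset.

        lane_offset_m: how far the camera thinks we are from lane center
                       (positive = drifting right).
        speed_mps:     current speed; higher speed -> gentler corrections.
        """
        gain = k_offset / (1.0 + k_speed * speed_mps)
        return -gain * lane_offset_m * 15.0  # scaled to a plausible wheel angle

    print(steering_command(lane_offset_m=0.3, speed_mps=20.0))  # small left correction

Tuning constants like those against huge amounts of driving data is the "calibration" part; nothing in there decides anything in any conscious sense.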

So true. I've always thought that fully autonomous driving is a slippery slope to AGI. Imagine Google having to collect an INFINITE amount of driving data (the perfect manifold) before they realize they've sunk billions into an intractable problem?!

Problems in AI can very easily turn into a black hole of money and data. I hope the large number of AI startups appreciate this.


The difference is that the same algorithm that drives a car can't be used to do other tasks. The self-driving car algorithm can't just be plugged in to recommend a movie or summarize an article. Many of the same underlying components can be re-used, but they have to be trained on data for the new task.
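
For example (and this is only a rough sketch with placeholder names, not any particular company's code), "re-using components" usually means sharing something like a feature-extracting backbone while training a new, task-specific head on the new task's data:

    # Rough sketch of re-using components across tasks: a shared backbone,
    # but a task-specific head that must be trained on the new task's data.
    # Class and variable names here are placeholders.

    import torch.nn as nn

    class TaskModel(nn.Module):
        def __init__(self, backbone, feature_dim, num_outputs):
            super().__init__()
            self.backbone = backbone                         # re-usable component
            self.head = nn.Linear(feature_dim, num_outputs)  # task-specific part

        def forward(self, x):
            return self.head(self.backbone(x))

    # A driving model and a movie recommender could share this structure,
    # but each head (and usually the backbone too) needs its own training data.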

AI is kind of a silly term, imo.


I do not think that a 'fully intelligent AI' is necessary for full autonomous driving: if a universal communication protocol for driving were to be introduced, cars could communicate with one another as well as announce themselves when entering intersections, for example, thereby mitigating the need for an 'AI'.

"I am driving at 45mph at [GPS Coordinates] [UTC Timestamp]".

"I'm braking [GPS Coordinates] [UTC Timestamp]".


And then an unequipped vehicle comes along; or worse, one broadcasting with malicious intent.

Can you imagine writing a computer program that can drive a car in a video game, but not provide relationship advice to two freshly married people, or think of a fair and just and enforceable law to reduce discrimination at the workplace?

Driving is quite a bit short of "same intelligence as humans".


What if there are two obstacles that the car can't brake fast enough to avoid, so it has to decide which to hit? One is a person pushing a baby in a baby carriage, the other is a person pushing someone in a wheelchair.

Do you think we will tolerate autonomous cars operating at speeds that create those situations?

If someone steps out in front of your car going 30 MPH, can you stop in time? I mean they walk right into your immediate path. Do you think this kind of thing can't happen?

That isn't the problem you posed. Now there is no maneuver that can save the pedestrian, for any driver.

But for the most part, if there is low visibility and pedestrian proximity, 30 MPH is too fast. The streets here don't have much obstructing visibility other than parked cars; there are good sidewalks set well back from the streets, and the speed limit is still 25 MPH.


The exact same thing that happens now: someone dies, usually selected arbitrarily based upon the driver's selfish desire to minimize harm to themselves. Because that situation doesn't allow enough time to make anything except instinctual choices.

The driver is found at fault for that death if they were not driving legally. Otherwise, and oftentimes regardless, the whole thing is written off as a tragic accident (millions happen every year, just in the USA), and the insurance people do their thing.

Ethical dilemmas are a non-problem for self-driving cars.


Millions???

You are off by two orders of magnitude. It was 35,000 two years ago.

Still a lot of people, but not millions.

Put in context, it's around one death per 100 million miles driven.
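
Back-of-the-envelope check (the ~3.2 trillion annual US vehicle-miles figure below is my own rough assumption, not a quoted statistic):

    # Rough arithmetic behind "around one death per 100 million miles driven".
    # The annual vehicle-miles figure is an approximation.

    us_road_deaths_per_year = 35_000
    us_vehicle_miles_per_year = 3.2e12   # roughly 3.2 trillion miles (assumed)

    deaths_per_100m_miles = us_road_deaths_per_year / (us_vehicle_miles_per_year / 1e8)
    print(round(deaths_per_100m_miles, 2))  # ~1.09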


Sorry, I mixed up the global and national numbers.

Thanks.


I will loosely paraphrase something I heard Sebastian Thrun remark on this situation (commonly known in philosophy as "the trolley problem"):

1) such a situation happening is extremely rare (more so in the classic example of the trolley problem)

and

2) regardless of its possibility, if we are able to reduce the number of fatal and injurious accidents caused by automobiles by half, in the United States alone, such tragedies should not stop us from doing so

For some reason, people always let the impossible attainment of perfection be the enemy of good enough (as the old saying goes).

We will never get a perfect system. We don't have a perfect system right now. What we can get (indeed, I'd argue we're already there in the case of self-driving vehicles) is close enough that the difference is fairly negligible. That is, such systems can be made vastly better than the average human driver, and generally will come close or exceed that of professional drivers.

We are arguing that we'd rather have a professional taxi driver at the wheel of a vehicle transporting us, one who is 98% competent at his job (that's being generous, actually), than a machine that is 99.95% competent at the same job (note that neither percentage is based in reality - I pulled both from my nether regions for this example - but they probably aren't too far off the mark, either; again, I'm being generous with the taxi driver).

It's a purely irrational and emotional response not based on actual statistics and knowledge about accident rates. We'd rather continue letting drivers get in accidents, injuring or killing themselves and others, at a very high daily rate, than implement a technology which would rapidly make that number drop to very low levels over time.

Part of it I think is that we want to be able to blame somebody. We can't blame a machine, for some reason, or its manufacturer - especially in the heat of the moment (provided we survive). We can instead blame ourselves, or more generally "the other guy" for being a bad driver. We have this innate problem with being able to say both "I don't know" and "bad things can happen for no good reason", and instead must find something or someone to blame, and if it isn't ourselves, even better (hence a lot of religious expression not based on reality).

There is so much benefit this technology can bring, even today. I don't personally think it is ready for consumer adoption quite yet, but I can see it being ready in less than 10 years, maybe less than 5. The problem for its adoption is that we'll likely never be ready, even if it attained a five-nines level of reliability, simply because if it failed, we'd have no one to blame but ourselves for trusting in it. For some reason, that simply will not do. We'd rather continue with the status quo and continue to rack up the injuries and deaths, because at least then, we can blame the other guy instead of ourselves.


Depends: does the latter couple need relationship advice?

Not fully intelligent. More like animal intelligence, and a subset of that. No off-roading (which animals have no problem with).

I have to account for off-roading animals while I drive/bike, because deer love to sit on the edge of a road and inexplicably leap in front of me at the last second. If I see some crossing the road ahead, I know I have to slow down, just in case there are more deer in the pack behind them that are following, or the deer changes its mind and goes back where it came from because it doesn't want me to be between it and its herd. That can involve it scaling what appears to be a ten-foot, almost vertical wall of rock. Fawns will often wait until you get right next to them before moving, because their defense mechanism is sitting still and hoping not to get noticed. So automated cars need to know what deer look like and how they behave, since they are so common. Very occasionally I've had to avoid mountain lions and bobcats, but based on the road kill I've seen, most people just drive over bobcats.

As others have said, the scope of 'Anything that can happen on a road', while large, is a lot smaller than 'Anything that can happen or be imagined to happen'. Moreover, if a pilot AI finds itself in a situation it can't deal with, the failsafe of "hazards on, slow the car and pull over" is nearly always available. There's not really an equivalent for a general AI.
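
That failsafe is roughly a rule like the toy sketch below (the confidence score and threshold are invented just to illustrate the idea):

    # Toy illustration of the "hazards on, slow down, pull over" failsafe.
    # The confidence score and threshold are invented for illustration.

    def plan_next_action(situation_confidence, threshold=0.6):
        """Fall back to a safe stop when the driving policy is unsure."""
        if situation_confidence >= threshold:
            return ["continue_driving"]
        return ["hazards_on", "reduce_speed", "pull_over", "request_assistance"]

    print(plan_next_action(0.9))   # normal driving
    print(plan_next_action(0.3))   # unrecognized situation -> safe stop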

Think about the difference between a specialized AI that can beat a grandmaster at chess (using heuristics) versus a generalized AI that can learn how to play chess and then figure out how to beat a grandmaster on its own.

The comparison to animal intelligence is apt.

Animals are very good at responding to specific situations with fixed action patterns.

Humans are very good at adapting to new situations by training new action patterns.

Self-driving cars tread far more in the territory of machine learning than AI. AI involves ML, but ML doesn't necessarily involve proper AI. ML is essentially building a framework from existing data that the computer can apply to similar situations. This is why you hear terminology like "training on a dataset" in ML -- you are just telling it to act on new situations in as similar a way as possible to previous situations.
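
A minimal sketch of that framing (the data below is made up; it just shows "fit on existing examples, then act on similar new ones"):

    # Minimal sketch: learn from existing (speed, distance) examples, then
    # act on a new-but-similar situation. The data is made up.

    from sklearn.linear_model import LogisticRegression

    # Existing data: (speed_mph, distance_to_obstacle_m) -> brake (1) or not (0)
    X_train = [[30, 5], [30, 50], [60, 10], [10, 2], [60, 100]]
    y_train = [1, 0, 1, 1, 0]

    model = LogisticRegression().fit(X_train, y_train)

    # A similar new situation: the model can only respond as much like its
    # training examples as the learned boundary allows.
    print(model.predict([[45, 8]]))   # likely predicts "brake"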

AI to me is a very different thing, in that it doesn't require structured inputs and outputs. Even as chaotic as autonomous driving is, it's still a structured system that takes inputs about road rules and surrounding objects and aligns the car with an outcome where it follows road rules and doesn't hit anything.

True AI is a machine that can reason for itself in unstructured situations, which would likely involve ML that is very good at making inferences about how existing data applies to tangentially related situations. I'm sure at some point there will be a distinction between AI that aims to create outcomes as similar as possible to human decision making (perhaps it could be trained by you to make decisions the way you would), and AI that aims to emulate human thought process literally down to the neurotransmitter level.

