
> the point of level 4 everywhere under all conditions

That isn't level 4, that is level 5.

> you are going to have build systems which safely handle that challenge of inconsistent attention

Level 4 is geofenced, but able to perform safely without any level of attention from any human driver. If a human is required to be in the driver's seat to take over on demand, then you are describing level 3, not level 4.

> attempting to switch between full self-driving and human oversight frequently would be a safety hazard

It shouldn't be. If a self-driving system can't safely reach a place to park in a region, with no human interaction, it is not level 4 in that region.
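
To make that concrete, here is a rough sketch of the fallback logic I mean (a minimal sketch in Python; all names are mine, not any vendor's actual API):

    # Rough sketch of the L4 fallback obligation: if the system can't
    # continue, it must reach a minimal-risk condition (e.g., pull over
    # and park) entirely on its own. All names here are hypothetical.
    from enum import Enum, auto

    class Fallback(Enum):
        CONTINUE = auto()           # nominal driving
        MINIMAL_RISK_STOP = auto()  # pull over / park with no human help
        HANDOVER = auto()           # ask a human to take over

    def l4_fallback(system_healthy: bool, inside_odd: bool) -> Fallback:
        """A true L4 system never returns HANDOVER inside its domain."""
        if system_healthy and inside_odd:
            return Fallback.CONTINUE
        # Degrading or leaving the operating domain still has to end in
        # a safe stop with zero human interaction -- otherwise it's L3.
        return Fallback.MINIMAL_RISK_STOP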




> level 4 really is right around the corner

I disagree. Level 4 does not require driver attention. That is still a long way off.

From Wikipedia [1]:

Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.

Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive and make its own decision.

[1] https://en.wikipedia.org/wiki/Autonomous_car
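
Condensed into a rough table (my own summary of the definitions above, not SAE's official wording):

    # Rough summary of how the SAE levels relate to driver attention.
    # My own condensation of the Wikipedia definitions quoted above.
    SAE_LEVELS = {
        2: {"who_drives": "human",  "attention_required": True,
            "notes": "system assists with steering/speed; driver monitors"},
        3: {"who_drives": "system", "attention_required": True,
            "notes": "driver must take over on request"},
        4: {"who_drives": "system", "attention_required": False,
            "notes": "unattended, but only inside a limited domain"},
        5: {"who_drives": "system", "attention_required": False,
            "notes": "anywhere it is legal for a human to drive"},
    }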


> One of the world's top self driving car engineers reckons we might never get Level 5.

I have some minor-level experience with self-driving car software, algorithms, etc. The majority of this is from various MOOCs I have participated in over the years, as well as research papers, books, and other things I have read and consumed. In short, I am not an expert, but I am not unfamiliar with the technology, either.

I personally think we'll never see actual widespread usage of self-driving vehicles outside of a very few narrow and carefully controlled (and regulated) cases. At least not, I suspect, within my lifetime (and honestly, I'll be lucky to get another 30 years or so).

My reasoning is that people will only trust perfection when it comes to riding in a self-driving vehicle. They will only be willing to use one if they can be assured that it will Never Crash, or be crashed into. They have no problem driving a car themselves, or being surrounded by other people driving cars. They have no problem with those crashing and even killing people - maybe even themselves (though they tell themselves fairy tales of it-will-never-happen-to-me to soothe over the reality). But introduce a machine into the equation...

...and that machine has to be Perfect. It cannot make any mistakes. It must avoid issues and be Safe 100% of the time, no exceptions.

In other words, people want the impossible from a machine, but will give utmost allowances to themselves and others as "humans".

I think a lot of this has to do with assignment of blame. When they crash or are crashed into, there is someone to assign blame to: themselves, the other driver, etc. Someone they can yell at, figuratively or literally.

A self-driving car? No one to yell at. No one to assign blame. Nothing that will feel bad for its error or failure to avoid something.

People can't handle that. They don't want a self-driving vehicle that has a safety factor of say, "seven 9s" - it has to be 100% safe or nothing. Because even if it makes a mistake only once out of a million miles of driving, that is still not safe enough. They want the unobtainable - a perfect machine, a machine that will never fail. Nothing like that can or will ever exist (basic laws of thermodynamics prevent it, for one thing).

Even though they themselves - or even the most professional of professional drivers - can't come close to approaching this level. It is both madness and understandable at the same time.
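
To put numbers on that (a quick back-of-envelope, assuming "seven 9s" means per mile driven; the fleet mileage is a rough public figure, not a precise one):

    # Back-of-envelope: what "seven 9s" per mile would actually mean.
    p_fail_per_mile = 1 - 0.9999999           # seven 9s reliability
    miles_between_failures = 1 / p_fail_per_mile
    print(f"{miles_between_failures:,.0f} miles per failure")  # ~10,000,000

    # US vehicles cover roughly 3e12 miles/year, so even at seven 9s
    # that is on the order of 300,000 failure events per year fleet-wide.
    fleet_miles_per_year = 3e12
    print(f"{fleet_miles_per_year * p_fail_per_mile:,.0f} failures/year")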


>Why are we still talking about Level 5 autonomous driving when we can't even get Level 4 working properly? I believe this is sending the wrong message.

I think it is a capability gap between tech companies and car companies. Car companies can't even get cameras all around the body to avoid accidental scratches during parking. Tech companies, meanwhile, could easily do a lot of car tech but have no business case for doing anything less than Level 5 - Level 5 is a tech platform, whereas anything less is just an advanced car, and the tech companies are in the platform business, not the car business.


> Their decision to limit the initial problem domain is genius

I believe it's called "Level 4" autonomous driving. It's meant to drive only on safe routes for which the carmakers have already tested their technology.

In theory, all self-driving cars today should be classified as Level 4 at most, because I don't think any of them is good enough to be classified as Level 5.


>I don't believe that you can have a car engaged in auto-drive mode and remain attentive

I've been saying this for a while and it's interesting to see more people evolve to this point of view. There was a time when this idea was unpopular here--owing mostly to people claiming that autonomous cars are still safer than humans, so the risks were acceptable. I think there are philosophical and moral reasons why this is not good enough, but that goes off-topic a bit.

In any case, some automakers have now embraced the Level-5 only approach and I sincerely believe that goal will not be achieved until either:

1. We achieve AGI or

2. Our roads are inspected and standards are set to certify them for autonomous vehicles (e.g. lane marking requirements, temporary construction changes, etc.)

Otherwise, I don't believe we can simply unleash autonomous vehicles on any road in any conditions and expect them to perform perfectly. I also believe it's impossible to test for every scenario. The recent dashcam videos have convinced me further of this [0].

The fact that there are "known unknowns" in the absence of complete testability is one major reason that "better than humans" is not an ethical standard. We simply can't release vehicles onto the open roads when we know there are any situations in which a human would outperform them in potentially life-saving ways.

[0] https://reddit.com/r/teslamotors/comments/8a0jfh/autopilot_b...


> You can’t program your way out of the problem with Level 2; in fact, the better the Level 2 system seems to be, the worse the problem gets. That problem is that human beings are simply no good at monitoring systems that do most of the work of a task and remaining ready to take over that task with minimal to no warning.

Well, you could have extremely visible/audible signals for the driver to take over when the driving system is failing (internal lighting becoming red and blaring alarms), but that wouldn't be very popular I guess, especially with half-assed self-driving systems.
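
Something like this escalation ladder, roughly (a sketch only; the timings and actions are invented for illustration):

    # Sketch of an escalating takeover alert, as described above.
    # Timings and actions are invented for illustration only.
    import time
    from typing import Callable

    ESCALATION = [
        (0.0, "chime + dashboard message"),
        (2.0, "cabin lighting turns red"),
        (4.0, "loud continuous alarm"),
        (6.0, "begin minimal-risk stop (slow down, hazards on)"),
    ]

    def takeover_alert(driver_responded: Callable[[], bool]) -> None:
        """Escalate until the driver responds; else fall back to a stop."""
        start = time.monotonic()
        for delay, action in ESCALATION:
            while time.monotonic() - start < delay:
                if driver_responded():
                    return  # driver took over; stop escalating
                time.sleep(0.05)
            print(action)  # stand-in for actual vehicle actuation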


>Highway self driving has been around for decades[1]

Driver-assisted highway driving has been around for years... Level 5 driving requires no human attention. Huge difference.

I think what is wanted by many is level 5 on the highways. I want to sleep, watch a movie, whatever. That is much, much "better" than what we have now. Like many others, I would be most interested in full level 5 on highways, with me driving in the city. That is also much easier to implement and test. The scope of the problem is greatly reduced. I think Tesla and others are wasting tremendous resources trying to get the in-city stuff working. It makes cool videos, but being able to do other activities during a 5 hour highway drive has much more value (to me at least) than riding inside a high risk video game on the way to work.

(edit) I get that I am misusing the definition of "level 5" a bit, but I think my meaning is clear. Rated for no human attention for the long highway portion of a trip.


>> Initially, all fully self-driving vehicles will be Level 4—that is, they have to be in geographically constrained areas, and will only operate in good weather, as does Waymo’s fleet of self-driving vans that it is testing in Phoenix. Truly autonomous, aka Level 5, cars are still science fiction.

Nobody has actually created a level 4 system yet, not even a prototype, let alone one ready for production. So level 4, too, is still science fiction. The same goes for level 3, actually. It's science fiction. And so are claims like the following:

>> Researchers at Cleveland State University estimate that only 10 to 30 percent of all vehicles will be fully self driving by 2030.

2030 is ten years from now. In ten years, we'll reach "full self-driving"? Waymo was founded ten years ago and its cars are still at level 2 (allegedly trying to "jump over" level 3 and go straight to level 4). How are we going to be suddenly, magically transported from level 2 to level 5 in the next ten years, when we haven't budged from level 2 in the last ten?


> to say you can be a full Level 5 with just cameras and radars, is not physically possible.

Of course it’s possible. Humans manage to drive just fine without even radar. Optical recognition is clearly sufficient.

With that said, we clearly have not solved self-driving cars and it’s very possible that better sensors will improve the tech faster than smarter computing.


>The welder, the courier, the truck driver, the cab driver, the pilot

Nobody involved closely with self driving cars thinks we're anywhere close to a level 5 system. Why do you think we are?


>I understand ‘Level 5’ and I am an active AI researcher.

Then frankly your posts make even less sense, unless you've gotten so deep into AI research that you've lost sight of what humans can accomplish without technology.

>Do you understand what kind of technology is required to get to Level 5?

Utterly irrelevant to the discussion at hand, beyond the fact that, by definition, L5 means zero human involvement is required.

>Do you think level 5 can be achieved without understanding how humans behave in a vehicle?

Yeah, absolutely. If how humans behave in the vehicle mattered, it wouldn't be L5. Active sabotage is out of scope, since there's no difference there from a human driver. If you are a passenger and suddenly start hitting your driver in the head on the interstate, what could possibly happen? Oh right, an accident where everyone is badly injured or dies.

>What if someone sleepwalking or rather sleep kicking and break the windows open then leave a leg or an arm hanging out of it?

Then they'll have shattered glass and bad cuts all over their leg or arm, which would likely wake them up and also hurt a lot. They should probably tell their L5 car to go to the ER. And also, are you fucking serious? It's not as if humans sleeping in vehicles, including ones with glass windows, is some new thing, and somehow an epidemic of people kicking through windows has never come to my attention. Feel free to share your stats on that one.

>Do you think a Level 5 car should not monitor how the passengers behave at all?

No. On the contrary, self driving cars monitoring their passengers at all times sounds terrifyingly dystopian. Exactly what country/agency do you do AI research for?

>Do you have real information/references rather than speculation that Waymo and GM have plans to launch only after achieving Level 5 rather than Level 4 which most observers agree on?

Amongst others: "GM Says Car With No Steering Wheel Or Pedals Ready For Streets In 2019", https://www.npr.org/sections/thetwo-way/2018/01/12/577688125.... A lot of mass publication stuff doesn't use "level 5" specifically because it's not terminology everyone will understand, but if it has zero manual controls and human involvement beyond high level orders ("Go here", "stop") that's what it is.

>Do you have a cost/benefit calculation of why the driver detection system is “not worth it” since it is better to wait for Level 5 before launching anyway?

Why would I need to? You're the one making the assertion that this is something Waymo should do, so presumably you have a cost/benefit analysis, right? Waymo apparently doesn't though, and I'm just observing why that would be the case. There are plenty of actual for-real jobs being done all the time worldwide that are of the "99% boredom, 1% HOLY SHIT WE'RE ABOUT TO DIE" sort; it's a tough but well-known problem that can be worked on in a restricted setting with training, checklists, scheduling and support, no brand new tech required. You're asserting that they should develop a whole new untested technology, even if a less complex one, that will immediately become worthless (or even a negative, for those worried about mass surveillance) in deployment. Why do you think this is better than just following decent professional best practices for R&D testing?

>When do you expect Level 5 can be achieved? If your answer is less than 10 years, you appear to know more than most of the top AI researchers I have talked with or read their prognostications.

Irrelevant, and you should really rethink your logic here if you think otherwise. Whether it's 1 year, 5 years, 10 years, or 30 years, it's not the public's problem; it's Waymo's and their investors'. They can't deploy L5 until it's ready; the timeline for "ready" is just their issue. If they're satisfied with how their development is going using just human testers and training, and the results are what the public wants, then how they get from A to B is up to them.


>> When enabled, driver attention is not required.

People need to stop saying this. It doesn't matter if Level 4 specifies this, and someone claims a certain car successfully passes the test. No car in AI infancy is going to be 100%. Drivers must absolutely be prepared to take over in under half a second at all times. That's not likely to change for 30-50 years.
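
For perspective on what half a second means at speed (simple arithmetic, nothing vendor-specific):

    # Distance covered during a half-second handover window.
    MPH_TO_MS = 0.44704                   # exact mph -> m/s conversion

    for mph in (30, 70):
        v = mph * MPH_TO_MS               # speed in m/s
        print(f"{mph} mph: {v * 0.5:.1f} m covered in 0.5 s")
    # 30 mph: ~6.7 m; 70 mph: ~15.6 m -- several car lengths gone
    # before the human even has control, on top of their own
    # reaction time.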

I found it quite socially irresponsible for him to mention during the keynote that it would be fine for his 80+ year old mother to continue driving when she becomes no longer capable of managing the vehicle on her own. It might be perfectly fine in a few decades. Not by 2020 though.


> It would be like if a car manufacturer offered a package called "Full Self Driving" that wasn't actually level 5 self driving.

You mean like something called ‘autopilot’ when you still have to pay attention all the time?


> This is the definition of autonomy levels from SAE

It's still domain dependent, right? If the rules say we're flattening precipitation and visibility within normal bounds, fine, but sometimes you have abnormal weather and badly-maintained roads. It would be useful to be able to compare hypothetical cars that autonomously navigate conditions no human would dare.

EDIT: Never mind. The point of "Level 4" is it is competent in all reasonable operating domains.
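
Concretely, the "domain" part can be thought of as a gate like this (a sketch; the bounds are invented, not from the SAE document):

    # Sketch of an operational design domain (ODD) gate: the L4 claim
    # only holds while conditions stay inside declared bounds.
    # The bounds below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Conditions:
        visibility_m: float
        precipitation_mm_h: float
        inside_geofence: bool

    def within_odd(c: Conditions) -> bool:
        """True while the system may operate unattended."""
        return (c.inside_geofence
                and c.visibility_m >= 150.0
                and c.precipitation_mm_h <= 5.0)

    # Outside the ODD the system must refuse to engage (or execute a
    # minimal-risk stop if already driving) rather than ask the
    # passenger for attention.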


> A human and Autopilot working together is safer than just a human driving

This is not my understanding from colleagues who studied the human factors of what is now called level 2/3 automation many years ago. Partial automation fell into an "uncanny valley" in which the autopilot was good enough most of the time that it lulled most human participants into a false sense of security and caused more (often simulated) accidents than a human driving alone.

Since then I've seen some evidence [1] that with enough experience using an L2 system, operators can increase situational awareness. But overall I wouldn't be surprised if humans with level 2+/3 systems end up causing more fatalities than human operators would alone. That's why I'm relieved to see automakers committing [2] to skipping level 3 entirely.

[1] https://www.iihs.org/api/datastoredocument/bibliography/2220

[2] https://driverless.wonderhowto.com/news/waymo-was-right-why-...


> Cars with Level 2 automation can perform key driving tasks like steering, acceleration and braking to keep a set distance from the car ahead, center the car in its lane and stay at a certain speed. Yet they can't handle every situation unmonitored and need the driver to pay attention to the road ahead in case they need to take over the wheel.

Designing a system like that but blaming PEBKAC whenever something goes wrong is going to kill a lot of pedestrians. If you set impossible expectations and people don't meet them, there is a problem with your expectations.

/edit

I'm not against automated cars in general, but if Level 2 means a decrease in fender benders but an increase in kids running out into the middle of the road getting killed because the driver is zoned out, then I think it should be skipped.


> Suppose this is a task that, short of overhauling all our road infrastructure, ultimately requires GAI?

I would be surprised if level 5 self-driving requires an AI smarter than a mouse or an eagle - animals that already handle a freeform environment, arbitrary weather, hostile others, camouflage(!), and long-range navigation.

The rest of your points I broadly agree with though.


> If you're an above average driver, a below average driver will ram your car unexpectedly. And there will be nothing you can do, because your reaction times are human.

That doesn't require level 5 self driving cars, only brake assistants.

> Also, you're likely not as above average as you think yourself to be.

That doesn't mean that there aren't above-average drivers. I'm not assuming I'm among them, but for them, riding in a self-driving car would make matters worse.


>All of which would disappear with level 5 self driving.

I think the post was about managing the risk that occurs before level 5 is reached. Assuming that it's either on the immediate horizon or a foregone conclusion seems dismissive of those nascent risks.

