It's the curse of SAE Level 3. The driver is responsible for monitoring a system that works fine 99% of the time, so they start to feel a false sense of security, and when it does fail they're not paying as much attention as they need to be.
This is also why SAE autonomy levels are a poor measure of a vehicle's capabilities, but that's another rant.
> level II is probably the most dangerous automation
I'll go further and say that level II is worse than useless. It's "autonomy theatre" to borrow a phrase from the security world. It gives the appearance that the car is safely driving itself when in fact it is not, but this isn't evident until you crash.
N.B., this policy is mainly concerned with Highly Automated Vehicles (HAVs), which are defined as SAE Level 3 ("capable of monitoring the driving environment") and above.
edit: as to SAE Level 2, it has this (and more) to say:
> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).
also,
> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
You seem to be hung up on SAE definitions and missing my point, which I'm going to reiterate, maybe from a different angle.
Moving from supervising the car to not supervising the car isn't a binary flip where suddenly, in one software revision, you can take a nap when yesterday you couldn't. It's a spectrum, and that's why the SAE levels between 2 and 3 describe it so poorly (not to mention how poorly level 2 itself is defined: it covers a huge range of functionality, from something a bunch of cheap sensors can achieve with graduate-level CS knowledge, to something Tesla's AI team has achieved with FSD Beta, which has required custom computers, millions of miles of driving data, and some of the biggest brains in the AI world).
Since it's a spectrum, the only variable changing is how likely it is that you, the supervisor, need to take over control *because* your car made a mistake or was about to. That's all it reduces to: how often does your car make a mistake. That's all level 3 and up is. All the descriptions and charts do nothing but obscure this, which is unfortunate. Once you have a car that makes very few mistakes, you don't need to supervise it, because its probability of making a mistake is lower than yours as a human driver, at which point it's a better driver than you anyway (see the sketch at the end of this comment).
You can of course argue that the reduction in mistakes is itself functionality. Well, I make a distinction between continuous minor refinements and major enabling technology, like vector reconstruction of 3D space from camera images, or AI-based route planning that plans better the more data it gets.
As far as I understand Tesla's progress, they merely need to cover ever more corner cases to go up the levels.
And on the topic of supervision: whether you have to keep your hands on the wheel or otherwise supervise the car has a lot to do with policy and regulation. You could have a car today that is safe enough to drive while you're asleep, and good luck trying to sell it without telling customers they must stay alert. This basically makes level 3 as defined by SAE subject not only to the actual capabilities of the car but to the regulatory environment in which it's sold.
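As a minimal sketch of the argument, here's the "mistake rate" framing in a few lines of Python. Both per-mile rates are invented purely for illustration, not real measurements:

    # Hypothetical per-mile mistake rates, for illustration only.
    human_mistakes_per_mile = 1 / 500_000   # assumed human baseline
    car_mistakes_per_mile   = 1 / 50_000    # assumed system rate

    def still_needs_supervision(car_rate, human_rate):
        # Per the argument: once the car errs less often than the human,
        # human supervision no longer buys you safety.
        return car_rate > human_rate

    print(still_needs_supervision(car_mistakes_per_mile, human_mistakes_per_mile))  # True

The levels chart doesn't appear anywhere in that comparison; the only thing that moves is the system's rate relative to the human's.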
> According to the SAE levels of driving automation, a Level 3-capable vehicle can take over certain driving tasks, but a driver is still required to be present and ready to take control of the car at all times when prompted to intervene.
The first part of the article mentions the driver being able to take a meeting or watch a movie. That’s complete horseshit if they need to be ready to take control of the car at all times.
I think Lvl3 is one of the most dangerous levels a car can be at, based on the experience Waymo (back when it was called Chauffeur) had: humans are even worse at monitoring than they are at driving. It's incredibly boring. Their test drivers (who got extensive training in what to do and what not to do) started to do anything but pay attention, including taking naps, before Waymo pulled the plug on the incremental approach.
And we see this partially with Tesla now. Some drivers start to trust the system so much that they do anything but pay attention, some even bypassing the pesky systems that remind them they are required to take over at any moment.
That’s the trade-off between level of responsibility and capability at our current level of technological progress. Level 2 systems operate in much broader conditions but have high enough rates of failure that a driver must be attentive at all times. The jump to level 3 is mostly about responsibility. That higher level of responsibility means that the car is not going to drive itself unless it’s absolutely sure it can do it by itself.
> Levels 3-5 are considered Automated Driving Systems (ADSs) in which the driver does not need to pay attention to the road.
I'm going to say that's patently wrong.
As the article points out, at level 3 the control system is liable to drop out and punt hard decisions over to the driver at any time. This is exceptionally dangerous because there is almost never enough time to react properly (unless the driver was already actively paying attention).
At level 3 you really still need to pay attention.
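For a rough sense of scale, here's a quick calculation of how far a car travels while a distracted driver re-engages. The takeover times are assumptions for illustration, not measured values:

    speed_mph = 70
    speed_m_per_s = speed_mph * 1609.344 / 3600   # ~31.3 m/s at highway speed

    for takeover_s in (1.5, 5, 10):               # hypothetical re-engagement times
        distance_m = speed_m_per_s * takeover_s
        print(f"{takeover_s:>4}s to take over -> {distance_m:.0f} m travelled before the driver is back in the loop")

Even a few seconds of re-engagement means hundreds of metres covered with nobody effectively in control.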
I think this shows why level 3 is a slippery concept: it looks like it could be useful in this case, if you think only of the times when it works properly, but the only reason for a system to be level 3 is that it cannot qualify for level 4, meaning explicitly that it cannot be considered safe enough without an alert human to fall back on.
I took a look through the level descriptions for a level where the human is driving but is being monitored by the system, but they all seem to be versions of having the system drive while being monitored by a human. Maybe we can do more in the way of collision avoidance as the technology progresses towards true autonomy.
Because it's a Level 2 system, and it doesn't actually know how to drive a car.
Similar to how cruise control (Level 1) doesn't know how to drive a car. Yet people are willing to accept that it causes some accidents.
From the regulations: "Fall back strategies should take into account that—despite laws and regulations to the contrary—human drivers may be inattentive, under the influence of alcohol or other substances, drowsy, or physically impaired in some other manner."
NHTSA, which, after all, studies crashes, is being very realistic.
Here's the "we're looking at you, Tesla" moment:
"Guidance for Lower Levels of Automated Vehicle Systems"
"Furthermore, manufacturers and other entities should place significant emphasis on
assessing the risk of driver complacency and misuse of Level 2 systems, and develop
effective countermeasures to assist drivers in properly using the system as the
manufacturer expects. Complacency has been defined as, “... [when an operator] over-
relies on and excessively trusts the automation, and subsequently fails to exercise his or
her vigilance and/or supervisory duties” (Parasuraman, 1997). SAE Level 2 systems differ
from HAV systems in that the driver is expected to remain continuously involved in the
driving task, primarily to monitor appropriate operation of the system and to take over
immediate control when necessary, with or without warning from the system. However,
like HAV systems, SAE Level 2 systems perform sustained longitudinal and lateral
control simultaneously within their intended design domain. Manufacturers and other
entities should assume that the technical distinction between the levels of automation
(e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
And, systems’ expectations of drivers and those drivers’ actual understanding of the
critical importance of their “supervisory” role may be materially different."
There's more clarity here on levels of automation. For NHTSA Level 1 (typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the driver is responsible, and the vehicle manufacturer is responsible for keeping the driver actively involved. For NHTSA Level 3 (Google's current state), 4 (auto driving under almost all conditions) and 5 (no manual controls at all), the vehicle manufacturer is responsible and the driver is not required to pay constant attention. NHTSA is making a big distinction between 1-2 and 3-5.
This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.
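That bright line is simple enough to write down as a lookup. This just restates the summary above of NHTSA's categories, not the policy text itself:

    # Who bears responsibility at each NHTSA level, per the summary above.
    RESPONSIBLE_PARTY = {
        1: "driver",        # typically auto-brake only
        2: "driver",        # auto-brake + lane keeping; manufacturer must keep the driver engaged
        3: "manufacturer",  # driver not required to pay constant attention
        4: "manufacturer",  # automated driving under almost all conditions
        5: "manufacturer",  # no manual controls at all
    }

    def driver_must_pay_attention(level):
        return RESPONSIBLE_PARTY[level] == "driver"

    print(driver_must_pay_attention(2))  # True
    print(driver_must_pay_attention(3))  # False

There is no intermediate value: responsibility flips entirely from driver to manufacturer between 2 and 3.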
It’s not about insurance, it’s about the definition of level 3 automation. It requires the vehicle to have very high confidence that it can drive without mistakes, because a human is not monitoring the driving.
Level 2 systems can often work in more conditions because they have a lower requirement for reliability. They are permitted to make mistakes without realizing it, because the human driver is required to monitor and override the system.
The problem with Level 3 is that it requires much the same abilities and sensors as Level 5 in order to determine whether conditions are safe, react to unexpected events, and deal with the driver not taking over.
That's not how human attention and focus work. There are literally decades of research on this sort of thing. Almost every other major carmaker and autonomous vehicle developer (except Uber, and some poor woman was killed because of it) has publicly abandoned plans for SAE Level 3 autonomous vehicles, because over years of testing they've found that their own engineers, test drivers, directors, etc., who are intimately familiar with the technology and the peculiarities of their own product, can't stay focused on the car and the driving environment under those conditions.
In this case the safety driver was watching their phone, which was playing a movie, so this was a bit beyond the normal attention wandering you'd expect. But even for people trying their best, paying attention to the road for hours without having to provide any input isn't something you can reasonably expect them to do. NHTSA Level 3 autonomy is just a bad idea; we need to go straight from 2 to 4.
Because Uber and possibly most other self-driving car makers aren't doing it right.
First off, Waymo, Ford, and even Volvo (whose car was in the accident) have said that they want to skip Level 3, because expecting people to actually watch the road while they're "self-driven" is not reasonable. They saw from initial testing that the drivers were dozing off.
Second, the way to at least attempt to do it right (like Tesla is doing now, though it didn't in the beginning either) is to force drivers to keep their hands on the wheel at all times.
Also, Uber could have been watching its drivers with the cameras and fired those who didn't stick with the program.
And all of this doesn't even mention how Uber's self-driving system seems to be terrible. Lidar wasn't working, the car didn't brake, and they've had one intervention per 13 miles while Waymo has one per 5600 miles.
So, my point is, Uber, the company, still seems to share most of the blame here, for all of those different reasons.
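For scale, here's the same gap as back-of-the-envelope arithmetic, using the intervention rates quoted above; the trip length is an arbitrary example:

    trip_miles = 100                            # arbitrary example trip

    uber_interventions  = trip_miles / 13       # rate quoted above: 1 per 13 miles
    waymo_interventions = trip_miles / 5600     # rate quoted above: 1 per 5600 miles

    print(f"Uber:  ~{uber_interventions:.1f} interventions per {trip_miles} miles")
    print(f"Waymo: ~{waymo_interventions:.2f} interventions per {trip_miles} miles")

That's roughly eight takeovers per hundred miles versus one every few thousand miles; these are not systems at comparable stages of maturity.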
NHTSA/SAE/UNECE refused to listen to Google and decreed that the technology is to advance through their Level system, which allows progressively slower response times for the driver/operator.
Level 2 requires instantaneous takeover should the system fail, a Level 3 system must tolerate a ~30 s delay, and so on, in feature-checkbox style.
Automakers are fully committed to the Level system, but it doesn't match the reality that algorithms don't fail gradually the way a turbine engine would, so Level 3 is just going nowhere.
Your confusion around Level 3 comes from the fact that Level 3 does not exist. So no worries.