Arizona Uber crash driver was 'watching TV' (www.bbc.co.uk)
131 points by room271 | 2018-06-22 | 198 comments




> The Tempe police report said the crash was "entirely avoidable" if the Uber operator, Rafaela Vasquez, had been watching the road while the car was operating autonomously.

And thus should end the bewildering “but you couldn’t see her until the last minute” argument from people who don’t seem to understand how bad cameras are in low light.


Yes I don't understand why so much focus has been put on the failure of the technology rather than the failure of the back up driver. We all know that self-driving technology does not exist yet; that is why the back up driver is there. If you have driven at night you will know that a human can see much further than what is shown on the video footage of this incident, and so it should be clear that the human back up driver was to blame. Unfortunately for her, the fact that she was not only not watching the road but distracted by a TV show makes any claim that she was not negligent very hard to defend.

As a general rule, all of the failure-of-the-technology complaints that have registered in my head from HN have to do with Uber intentionally turning off safety features and doing things in such a way that the driver might have assumed things were safer than they actually were, and then have decided, idiotically as humans do (humans, such as myself I assure you, being idiots), that it was totally OK to do something stupid like text on a phone or watch TV.

Just because one party is negligent does not mean other parties are cleared of their negligence.


> Yes I don't understand why so much focus has been put on the failure of the technology rather than the failure of the back up driver.

Waymo specifically monitors the driver, plus checks their attention span before hiring them. There are various things that went wrong for this to happen. It's not just one cause.


It’s because this is a really well understood problem with human attention. If you put people in a position where they’re supposed to watch for something which rarely happens, they’ll get bored and distracted. People who build systems for industrial production, monitoring, security, etc. have been dealing with this for a long time, and it’s one of the selling points for self-driving technology, since inattention causes many accidents per year.

Responsible engineers deal with this in a number of ways, such as the test images TSA screeners see throughout the day, Waymo’s reported careful monitoring of the drivers, or redesigning the process so the human is continuously engaged. Uber engineers apparently say YOLO! and disable the car manufacturer’s own safety systems.


>Yes I don't understand why so much focus has been put on the failure of the technology rather than the failure of the back up driver.

It was well known all along that humans are poorly equipped to remain actively engaged in situations that require split-second reactions, while exerting zero effort or control for hours on end.

Air traffic is as safe as it is because the accident analysis always pushes past "pilot error" as the root cause. If one pilot can make the error, why won't another? Did the system set them up to fail? Even if it didn't, could the system be enhanced such that the failure (or class of failures) could be prevented?

In the case of the human in Uber's self-driving car, they were set up for failure. And in such a well known way that makes any claim Uber was not negligent very hard to defend.


As far as Uber being negligent for putting an easily-distracted human in charge of a car that ostensibly (even if they claim otherwise) needs little attention, I totally agree. The reason is that I believe it's unreasonable to expect a human driver to pay 100% attention to the road while in a car that only requires 5% attention to the road. However, I also believe that, as well as being told to "pay attention to the road 100% of the time", she would have been told "Do not use your phone while driving", which I believe is a totally reasonable request and expectation. With this assumption, and if the blame has to fall on only a single party, then I think this news puts it squarely on the driver.

Aren't they required to use their phones to record diagnostic data? It's a small step from there to messing around on the phone.

What? Why don't they have their hands on the fucking steering wheel?

They didn't want to pay to have two people in the car anymore.

No, of course not. They have to pay attention as a driver. The car does all the diagnostic recordings.

That's false though. http://theconversation.com/preliminary-report-on-ubers-drive...

> According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.


> However, I also believe as well as telling the driver "pay attention to the road 100% of the time" she would have been told "Do not use your phone while driving" which I believe is a totally reasonable request and expectation.

I think if you are going to pay someone near minimum wage and let them hold onto their phone while in the car, you know they are likely to use it. Even if the request is reasonable, you know minimum wage workers are going to do that sort of thing if not monitored via an in-car camera directed at the driver's seat.


You could pay me an absurd amount of money; I'm still going to get distracted.

I believe all other driverless tests work the same way, with sensors but requiring a human driver at all times. Would you also consider Waymo, Cruise, et al, negligent?

The technologies are not ready. They are being tested. Aircraft got to today's level of safety only after many iterations and improvements, in both technology and process.

A terrible tragedy happened in this case. Hopefully it will help save a lot of lives in future.


> Would you also consider Waymo, Cruise, et al, negligent?

Absolutely. We know from centuries of evidence that humans are profoundly bad at monitoring 'mostly autonomous' systems. Waymo/Uber know this as well, they just employ the driver as a convenient way to offload legal liability while they test unproven and unsafe technology on the public without consent.


There is a difference between putting technology out there that is "not ready" and putting technology on the road that fundamentally doesn't do the basic things that a car must do, such as brake for pedestrians. It's an unfortunate but avoidable tragedy if a self-driving system fails due to a computer bug. Here, Uber disabled the self-driving system's emergency braking function: http://www.latimes.com/business/autos/la-fi-uber-arizona-nts.... As far as I know, Waymo's system, at least, is theoretically capable of braking for a pedestrian in autonomous mode.

The overwhelming majority of vehicles on the road today don't brake for pedestrians. Just a few models of cars sold in the past few years have features to automatically engage braking.

https://www.consumerreports.org/car-safety/where-automakers-...


If these technologies, as you say, are not ready (I think you are right), what are they doing on public roadways?

The main issue is that one can't really consider the technology ready for real world testing until said real world testing is done.

That's obviously not true. Real-world testing should be the last phase, after the car has succeeded at simulations. They're building a mock urban environment near my home for just this purpose.

Tests by definition can be failed. If you couldn't fail, then what's the point?

You can pass all the prerequisites, but none of that guarantees that it doesn't fail at the real world testing phase.

All that said, in this specific case, Uber absolutely hadn't passed all the prerequisites and shouldn't have been given any permission to be on real streets.


Well, what you're calling "passing the prerequisites" is no different than "being ready." It's the same as medicine testing; they don't go straight to giving the drug to patients.

Waymo seem to have gone to quite some effort to make the failure modes of their tech as benign as possible, preferring to stop the car and make the safety driver take control rather than risking it and hoping they'll step in. They may also have safety drivers that are simply a lot more engaged in actively monitoring what the car is doing even when it's not doing anything that requires them to intervene, based on the reporting I've seen lately.

As far as I am concerned, the issue has always been, and remains, that passive alertness, the assumption underlying the claim that partial autonomy is viable, does not work. All arguments about what drivers ought to do are irrelevant in the face of what they actually do.

This is not an either-or situation. Both the driver and Uber are fully culpable for this vehicular homicide. Both of them have to be aware that any mistake on their part may result in somebody's death and act accordingly.

Probably PR spin/lobbying as the backup driver is just an excuse to beta test dangerous technology.

> If you have driven at night you will know that a human can see much further than what is shown on the video footage

This is honestly what bothers me most. Firstly, I expect self-driving cars to have massively more information at hand than humans do. I'm talking long-range, 360-degree sight. I expect multiple sensors to be recording at all times, analyzing. And finally, I expect a "black box" to be able to visualize all of this for cases like this.

Visualizing is something I think needs to be standardized at the black-box level. It's what we need to hold them accountable, I think. We're dealing with proprietary implementations, so humans should be able to review black box "footage"[1] and decide whether the scenario was handled better or worse than a human would have handled it.

There's going to be times where this system fails, and I'm not expecting perfection. Yet, humans in non-private positions need to be able to regulate how Uber and friends are behaving on the road.

A grainy, short-range video that looks like it was shot in 1990 is not cutting it. Fully unacceptable in my view.

[1]: I say footage loosely, because I imagine a lot of the scanned data is just that, data. So it needs to be rendered in a human friendly, visual manner, fully understandable by lawmakers.
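
To make that concrete, here is a minimal sketch of what a standardized, reviewable "black box" record per perception frame might look like. This is purely illustrative Python; every field name and value here is a hypothetical choice and does not reflect Uber's or anyone else's actual logging format.

    # Hypothetical per-frame "black box" record: what the car believed it saw
    # and intended to do, so reviewers can replay it independent of camera quality.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class DetectedObject:
        kind: str           # e.g. "pedestrian", "bicycle", "vehicle" (made-up labels)
        distance_m: float   # range from the vehicle
        bearing_deg: float  # angle relative to the vehicle's heading
        speed_mps: float    # estimated object speed
        confidence: float   # classifier confidence, 0..1

    @dataclass
    class BlackBoxFrame:
        timestamp_s: float
        ego_speed_mps: float
        planned_action: str  # e.g. "maintain", "brake", "swerve"
        objects: list

        def to_json(self) -> str:
            return json.dumps(asdict(self))

    # A reviewer's tool could render a sequence of these on a top-down map or timeline.
    frame = BlackBoxFrame(
        timestamp_s=1521500000.0,
        ego_speed_mps=17.0,
        planned_action="maintain",
        objects=[DetectedObject("bicycle", 25.0, -3.0, 1.2, 0.64)],
    )
    print(frame.to_json())

A record like this (however it actually ends up being specified) is what would let investigators see what the system detected and decided, rather than arguing over grainy dashcam footage.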


Because people have been saying for years that the idea of a backup driver is absurd and dangerous: nobody can effectively pay attention for hours on end.

We have a scapegoat, move on now.

Why manslaughter though? Wasn't the victim the real offender, crossing the road at an unmarked spot without paying attention to oncoming traffic? It's a safety failure of the self-driving car tech, surely, but the offense came from the victim. Especially given how some human drivers can't avoid similar situations either (see some videos of Chinese driving)...

As driver of the car you need to be able to stop the car at any time for any reason, otherwise you're not in control of your vehicle. By not being in control of your vehicle you accepted the fact that you might seriously harm someone.

This is why it's manslaughter.


> Wasn't the victim the real offender, crossing a road on unmarked spot without paying attention to oncoming traffic?

If someone makes a mistake while in traffic, I'm much happier if they do not immediately die as a result. There have been a few instances where such a thing could've happened to me.


An attentive driver could have at the very least mitigated what happened. Maybe even slowed down enough in order to not cause lethal harm.

As a driver you're sent to driving school and you're operating a 2 ton death machine. If you're not in control of your vehicle for whatever reason (alcohol, drugs, phone) you're already at fault, even when you're not the only person to blame (Uber, the driver and the cyclist).


Many drivers get into a state of shock when SHTF, i.e. they are going into an accident and suddenly are paralyzed or doing some boneheaded moves (like bikers turning more which has the opposite effect). You can see it on many videos from China when drivers just can't believe what has just happened and continue, likely in a state of a deep shock. Or when you are face-to-face with a deer on the road, lot of space on each side but you still hit the deer because you were mesmerized by it. You should always predict what could happen even if you are a pedestrian or a biker (oh, cars surely won't open doors while I am on a bike path, right?). Casually strolling through a road with traffic without caring about approaching cars is simply irresponsible.

Ofc Uber driver was distracted, it was clear from the video. But I would put 80% blame on the victim.


"Ofc Uber driver was distracted, it was clear from the video. But I would put 80% blame on the victim."

For what it's worth I didn't downvote you, but think you're completely wrong.

When I cross a road and see a car approaching, it's not unreasonable to believe that the driver will react. Especially at a well-lit place (which this apparently is, and of which the video gives a very wrong impression).

In my book the company, as well as the "safety driver" who's apparently so stupid that she watches TV when her responsibility is to supervise a deadly machine, are criminally culpable, and if I had any say she, as well as the responsible head of Uber's "self driving" unit, should go to jail for that.


It's pretty normal behavior to rely on other participants (here cars) to use their eyes and common sense. Even in cases where e.g. crossing the road is not allowed.

Manslaughter should be the fault of the Uber programmers.

A backup driver should be just that -- someone there to take over in case of software/hardware error. Such a driver was deemed necessary, and they failed to do their job, resulting in the crash.

If the system was supposed to be entirely driverless and autonomous, then sure -- perhaps more blame would lie with those in charge of whatever kind of "certification" procedures take place (rather than the developers). But IMO, in this instance the blame lies with the backup driver, and Uber for not enforcing more training or monitoring of the backup driver to prevent this kind of horrendous incident from happening in the first place.


There was no software or hardware error. It didn't break, it worked exactly as it was programmed. Which is the problem, and one of the many reasons Uber is at fault.

I don't want to argue over the semantics of this being a software or hardware error. Clearly the car didn't perform as intended. A vast majority of software issues occur even though they are functioning "as programmed", it doesn't mean that they aren't bugs.

I'm not saying Uber aren't at fault -- an Uber employee failed to do their job by watching TV instead of concentrating on the road, and Uber failed to do sufficient training/screening or failed to put enough safeguards in place to stop this from happening.

My point is that they KNEW that the automated system side of things wasn't perfect, which is why they employed someone to look after the car in the first place. Yes, the car failed to stop or react to the pedestrian -- but the safeguard (i.e. the human in the car) failed to do their job, which was to step in under such circumstances.


The driver was responsible for the car, and carries most of the responsibility for this accident. There may be grounds for a civil, or even criminal, action against Uber on the basis of creating a dangerous situation, but that would be a bad executive decision, not a programming error.

What about Uber's policy, which used to have a human passenger recording events and anomalies into a device for later review? I say "used to", because they axed that for cost reasons, deciding they could have the driver do that. While driving. Or at least being ostensibly able to drive at a fraction of a second's notice...

I am no fan of Uber, and I agree that eliminating the recorder was an irresponsible move, but as the driver was not distracted by these responsibilities at the time, they did not have any role in the accident. In the court of human opinion, the cost-cutting is evidence of Uber not taking safety as seriously as it should have, but I do not know if it would be allowed as evidence in a court of law.

Maybe it's different where you live, but where I live the only place where a pedestrian doesn't have right of way is a motorway.

Unfortunately, it seems it's not the case in Arizona: "Pedestrians must yield the right-of-way to vehicles when crossing outside of a marked crosswalk or an unmarked crosswalk at an intersection."

http://www.ncsl.org/research/transportation/pedestrian-cross...


It is also almost always the law that drivers must take any reasonable actions possible to avoid an accident even if someone else broke the law in a way that makes an accident more likely. So even if the pedestrian did fail to yield right-of-way, the driver is still required to stop to prevent an accident if they are able to.

One of the most jarring things about visiting Niagara Falls in Ontario was a big sign at the crosswalk warning that vehicles, rather than pedestrians, have the right of way. I don't see any good justification for making things this way except to basically give a free pass to anyone who kills a pedestrian.

It's always made perfect sense to me that vehicles should have right of way on roads. They're intended for vehicles. Giving pedestrians the right of way on the road makes about as much sense as giving cars the right of way on the sidewalk.

The crosswalk is made for vehicles? How is anyone ever to cross the street? Why shouldn't the person operating the dangerous machine requiring a license be subject to greater responsibility, for that matter?

http://www.ncsl.org/research/transportation/pedestrian-cross...

"Arizona: Vehicles must yield the right-of-way to pedestrians within a crosswalk that are in the same half of the roadway as the vehicle or when a pedestrian is approaching closely enough from the opposite side of the roadway to constitute a danger. Pedestrians may not suddenly leave the curb and enter a crosswalk into the path of a moving vehicle that is so close the vehicle is unable to yield. Pedestrians must yield the right-of-way to vehicles when crossing outside of a marked crosswalk or an unmarked crosswalk at an intersection. Where traffic control devices are in operation, pedestrians may only cross between two adjacent intersections in a marked crosswalk."


This particular incident concerns Ontario.

They should use a light system, stop sign, or yield sign, just like everywhere else that a car driving straight on a road is expected to yield the right of way.

Well, they don't. They have a crosswalk and they have a sign informing you that if a car hits you you will be found at fault.

Of course a lot of our congested places are categorically not designed to accommodate cars. And this has the effect of dominating the public realm and obstructing other users. It's like building an open sewer that overflows twice a day.

A road network that destroys access for other users is not functional. And sometimes the only way to restore that access is to reduce the volume of cars in city centres, lower speed limits, and have traffic calming measures. This helps align demand with what the roads are actually designed for. Pedestrianisation and pedestrian priority are just one logical extension of this.


> Wasn't the victim the real offender,

No. How can you write something like this without realising what you're saying?


Why do you think I didn't think when I wrote it? You can't expect to start crossing the road with approaching cars, hoping they will accommodate you instantly. What if a driver sneezed while you were doing it, unable to react for a second or two, or there was some sudden mechanical or other failure? A friend of mine was driving 55mph when an older man suddenly decided to quickly cross the road, and he couldn't avoid hitting him, so he is now scarred for life from ending somebody else's life and seeing it all in real time. It's obvious the Uber driver didn't pay attention; that, however, doesn't absolve the victim from being the primary offender. Think about it before you go emotional.

You're the one coming up with outlandish scenarios in response to a simple suggestion that failure to yield shouldn't be a death sentence. Who's getting emotional?

I'm sorry to hear about your friend but it's not quite the same situation if the driver had six seconds to react.

The point is that it's ridiculous to think of crossing the road as an offence against the driver, when it turns out so badly for the victim. It's more of a tragic error.

What if the driver had seen the pedestrian and swerved, flipped the car and died? Would that be an offense against the driver? The only reason we consider the pedestrian to be the victim here is because in this specific instance, that is who died, but there are no lack of cases where a driver tries to avoid a collision and ends up dying as a result.

That would also be a tragic error on the driver's part. Although I think it's a much rarer kind.

(I would also question that, except maybe in rural areas where there are hedges right up to the road, if you are at risk of killing yourself if something unexpected appears, perhaps your speed is not appropriate for the road, visibility, and lighting conditions? This applies to human and inhuman drivers)


There are risks associated with driving. You could hit a deer, or slip on black ice. Unintentional behaviour by pedestrians is exactly the same. It is just unreasonable to expect pedestrians to be completely attentive to safety at all times. That would require an unpleasantly authoritarian set of rules. Do you really want public spaces to be so strictly regulated?

But here you have two legal persons (or more), driver and pedestrian; it's not like inanimate subjects (environment) or legal non-persons (deer). If a pedestrian does what the poster implied, resulting in killing the driver (+ perhaps the whole family, including small kids), why would you place the guilt/blame on the driver? Where is the justice in that?

I am sure you can find videos on the Internet where pedestrians by their actions caused fatalities when drivers tried to avoid & save them.


As long as the act was accidental I don't see any need to assign blame or guilt to anyone.

This need for regulation simply doesn't apply to pedestrians, and strict rules just seem extreme and unnecessary. There simply isn't the evidence that pedestrian behaviour is a cause of death and injury among drivers. So why bother inflicting unpleasant rules and laws on people? Especially when the pedestrian themselves is likely to be the only person injured or killed.

Of course some places do have jay walking laws, and I would agree that people should follow the law. But those laws exist solely for the benefit of motorists and not the safety of pedestrians. I just don't see any strong moral case for those sort of rules to exist.

I think that cars create a uniquely weird set of circumstances. We impose a set of rules and restrictions on people that are more aggressive than would normally be acceptable. They are completely against free will and liberty. They are a total drag. And people reactively assume that the same rules should be applied to other road users. But that is categorically unfair and unnecessary.

Should drunk walking be banned? Should kids cycling to school be banned? Should we have a law that punishes careless walking? No, that is ridiculous. But we have to have those laws for drivers.


If you're crossing a road way outside the crosswalk, yes, it's completely reasonable to expect complete attentiveness comparable to the attentiveness of the driver.

Some people are unable to show that level of attentiveness. They may be disabled, kids, short-sighted, drunk, etc. Should those people be morally and legally banned from the road? As a driver you just have to accept this kind of stuff when you drive in people's communities. Unregulated humans are not always predictable. Sometimes accidents happen.

How about this situation where a pedestrian caused a driver to swerve and kill another pedestrian:

https://www.fresnobee.com/news/local/article120372008.html

Let's assume that the pedestrian was jaywalking.


If jaywalking is illegal then that person should be prosecuted for that. The rest is just an unfortunate accident (assuming that the driver was at the correct speed). It is horrible, but hardly evidence that strict regulation needs to be applied to pedestrians. This is a ridiculously unusual situation after all.

Well, to answer your question, no, the victim is not "the real offender." Some states (I don't know about AZ) recognize a "comparative negligence" standard where one party may be partially at fault but the other is majority at-fault and adjudicate civil judgments accordingly; others are all-or-nothing. This is a criminal case, so that doesn't enter into the equation, but I think it's a useful way of thinking about it. It may be that the pedestrian could have been more careful and not have been killed, but the operator of the vehicle failed to exercise a bare minimum level of care and is the more culpable party.

> The Tempe police report said the crash was "entirely avoidable" if the Uber operator, Rafaela Vasquez, had been watching the road while the car was operating autonomously.

Absolutely no surprises there. Everyone who has ever taken a picture in the dark should have known this after seeing the footage from the car.


The videos shot in the same location by another camera give another version of how well-lit those roads are:

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...

Visibility looks considerably better than what the Uber video portrays.


I live there, that road is well-lit. The Uber video is misleading.

I'd suspect that it's harder to take over and avoid a collision in a self-driving car than in a car that you're continuously in control of. You first have to recognize that the system is failing or about to fail. And that has to happen well in advance for you to take appropriate action. Doesn't seem like a reliable failover procedure, even if the person behind the wheel is paying attention.

> You first have to recognize that the system is failing or about to fail

For that to work, I'd like cars that have features like this to show you what they're seeing. Tesla does this a bit by showing all the cars it has detected on the dashboard. But it should go further, and other car brands should do the same.

Without that, you're right, you can not be sure if the car is going to react to something on its own.


I'm legitimately surprised nobody has a HUD that simply displays all detected objects/potential obstacles. They have this data, just pipe it up to a HUD.
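
As a toy illustration of "just pipe it up to a HUD": whatever the perception stack reports could be reduced to a few human-readable lines. The Detection tuple, the time-to-collision threshold and the formatting below are all made up for the sketch; no real vendor API is implied.

    from typing import NamedTuple, List

    class Detection(NamedTuple):
        kind: str           # "pedestrian", "cyclist", "car", ...
        distance_m: float   # range to the object
        closing_mps: float  # positive means the gap is shrinking

    def hud_lines(detections: List[Detection], warn_ttc_s: float = 4.0) -> List[str]:
        """Turn detections into short HUD warnings, flagging low time-to-collision."""
        lines = []
        for d in detections:
            ttc = d.distance_m / d.closing_mps if d.closing_mps > 0 else float("inf")
            flag = "!!" if ttc < warn_ttc_s else "  "
            lines.append(f"{flag} {d.kind:<10} {d.distance_m:5.1f} m  TTC {ttc:5.1f} s")
        return lines

    print("\n".join(hud_lines([Detection("cyclist", 30.0, 12.0),
                               Detection("car", 80.0, 0.0)])))

Whether drivers would actually keep watching such a display is another question, but at least it would expose what the system does and doesn't see.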

Nobody does it because it would clearly point out to everyone that the self-driving emperor has no clothes.

If the driver is going to be watching a very complex display, why not present it in a format they will understand without training, such as a processed video image? Here [1], [2] are some (very optimistic) mock-ups. There was another one released in the last few days but I can't find the link. Showing anything like the raw data with rapidly changing risk annotation would get tiring very quickly, especially as it masks the view the human is actually able to understand, the raw view.

[1] https://futurism.com/self-driving-car-video/

[2] https://next.reality.news/news/augmented-reality-cars-compan...


I honestly don't know where Tesla's head is. I'm surprised that when I turn on the turn signal to change lanes, and the car has detected something in my blind spot via its sonar sensors, it doesn't give me a tone or something. Let alone showing me video from one or both of that side's cameras.

Tesla, in putting its targets on self driving, has omitted a lot of low-hanging fruit that would make a significant day-to-day difference to drivers right now.


Interesting. My 2013 BMW vibrates the steering wheel if it thinks I'm going to move sideways into something (and that includes fast-moving cars coming from behind), or it beeps loudly if it thinks I'm going to hit something in front of me based on my speed and the distance of the object in front. And it's highly accurate IMHO. My take on self-driving is that someone needs to hook up these signals to controls on the car - the "hard part" does a reliable job.

Similar, my 2015 Audi will rapidly blink "inside" lights on either side as soon as I turn on my indicator if there's something in a blind spot. It's also at least somewhat adaptive to their speed too (i.e. if someone is behind me but at the same speed, it won't warn, but if they're a few car lengths behind at a notably higher speed, it will).

The closest you can possibly come is to have your hands on the wheel and feet on the pedals and mime each action you think the car SHOULD be taking... but I suspect that would lead to a shitload more 'intervention events' being recorded for things the car was just doing differently to the particularities of the particular human driver.

Agreed it's too difficult.


This is the problem; once you have to pay attention at that level (and I agree that you do to avoid accidents), what's the point of the automation? The taxing part of driving is not the physical act of slightly turning a steering wheel or brake pedal but the mental processing power it takes to know when.

> This is the problem; once you have to pay attention at that level (and I agree that you do to avoid accidents), what's the point of the automation? The taxing part of driving is not the physical act of slightly turning a steering wheel or brake pedal but the mental processing power it takes to know when.

We are in between the technologies. That's literally beta testing. Are you saying that when you test your software you expect that everything that comes out of it is gospel, or do you VERIFY and make sure it is?

This is beta software; this is the testing, this is making sure everything is alright. You have to pay attention at that level BECAUSE it's a test, just like with the software you wrote.

Once it's good enough (and we are still pretty far from it), then you won't have to pay that much attention.


Yes. Also, the whole "not paying attention" thing is encouraged by the task. It's completely predictable and these semi-autonomous systems are worse than nothing.

I crashed a drone this weekend. I was looking at the controller and not at the drone, which was about to crash. When I finally saw it almost crashing, the panic kicked in and I was unable to stop it.

And this was just a drone...


Proposed solution: it should be an offence for the manufacturer to describe a car as "self-driving" or "autonomous" if it is not capable of doing so entirely by itself. Systems which rely on the car driving 99% of the time and then throwing up its hands in order to make the human responsible for the crash are a ridiculous abdication of responsibility.

This system would have to be described as "driver assist".


Do you actually think a single line of description is going to change human behaviour?

YES!! Unfortunately people take marketing buzzwords at face value. Since the industry is not yet ready to release a true "autopilot", they should not be using these words. People die so that Uber, Tesla and others can ab-use these words.

Someone died that day because that "safety operator" was watching TV? Imagine how 'extra happy' the cyclist's family is right now.


We already have plenty of evidence that this is true, too: people have died using the Uber and Tesla systems while nobody has taken the Volvo safety system Uber disabled as absolving them of responsibility for driving, and those systems have been in the hands of drivers for a number of years by now.

Not calling it an “autopilot” is a huge psychological cue that it's an emergency safety measure rather than something you should rely on.


It wouldn't have helped in this case: being a test driver for the technology, she should have been paying full attention, and the same goes for all of these development drivers. But for the general populace, when they release a product, I'd say it would.

"Autonomous" and "self-driving" (and "autopilot") are incredibly loaded terms, and manufacturers know it, and really do change the context of the interaction the customer can expect to have with the car.

"Driving Assistants" help you drive, "self-driving" means, well, it drives itself, and no matter how many caveats you apply to it, you've just primed your audience with your headline.

If the manufacturer expects you to be in control, then I really think they shouldn't knowingly sell it as something else.

I like a good analogy and I've been struggling for one, but let's make a tenuous link to the World Cup: if I as a football team buy an "autonomous" or "self-saving" or "autogoalkeeper" goalkeeper, my expectation would be what I bought would stop balls going into the net by itself. My expectation would not be to have to position a back-up defender next to it at all times because it jumps in the wrong direction when presented with an attacker coming at a certain angle.


Any vehicle that is not the highest level of autonomy (ie has no steering wheel, or no need for one) should have systems in place to verify that a human driver is alert, in charge of the vehicle, and able to respond immediately - eye focus cameras, steering wheel sensors, confirmation prompts, etc.

Is there a valid reason this should be law?


Then what's the point? Might as well just drive a car the way we always have then, and avoid all this complicated crap.

The point is to improve the vehicle until it is at the highest level of autonomy.

Well fine, but I'm gonna run old vehicles that haven't been infected by this until then.

Having to babysit the self-driving car is a non-starter.


Serious question, would you back-port the law to fully non-autonomous (subservient? dependent? allonomous?) vehicles?

For the counter, why do you think the false positive/negative rate on the attention-sensing system would be better than the false positive/negative rate on the eg emergency brakes?


Uber should get charged with manslaughter. I don't say that lightly. This isn't an example of programmer error, where an alpha-version self-driving system failed. That would've been unfortunate, but not grossly negligent. Here, Uber put a "driver" in control of the car that by design wasn't able to avoid collisions with pedestrians. According to the NTSB, Uber disabled Uber's own emergency braking system (not just the built-in Volvo one) “to reduce potential for erratic behavior.”[1] That rises to gross, criminal negligence.

It is no defense to say that Uber also told a human driver to be present. If Uber had directed a five year old to drive the SUV, with a back-up driver ready to take control, I think everyone would agree that qualifies as gross negligence notwithstanding the presence of the back-up driver. (To be clear, I think some risk in the name of progress is acceptable. But it's one thing to use testing on public roads to work out the kinks in the software. It's another to put vehicles on the road that you know cannot perform the basic, essential functions of driving.)

[1] http://www.latimes.com/business/autos/la-fi-uber-arizona-nts... ("However, Uber also disabled its own emergency braking function whenever the test car was under driverless computer control, 'to reduce potential for erratic behavior.'").


I agree. The system was clearly not prepared for a real-world test. "Human jaywalks with bicycle" is not a one-in-a-million event.

I'm not sure if this is legally possible, but frankly I think the humans who made the decision to let it on the streets should be personally charged as well.


Yes, starting with the governor, who was very interested in bringing "self-driving" cars to the state without all the pesky regulation and testing. (There are plenty of examples of excessive regulation, but if there's one place where there should be some, it's when 2-3 ton autonomous vehicles are traveling down the road.)

If the car was expected to have software that was foolproof, why hire and pay a driver at all?

The unsafe behavior of the pedestrian is a red herring here.

The system was not prepared for the presence of people with bicycles on the road who are not crossing the road, but rather using it, and doing so properly.

There is no reason to believe that this car would not have struck a properly behaving cyclist.

The system did identify the cyclist as such but didn't react to the hazard.


I agree. We have this notion of creating corporations so that shareholders are not liable for anything. Maybe shareholders should be liable when they break laws and kill people.

The corporation can be charged as an entity itself, independent of any shareholder or executive: https://en.wikipedia.org/wiki/List_of_companies_convicted_of...

I understand that. In those cases, the only negative impact to shareholders is potentially losing money. Maybe shareholders should have more responsibility.

This destroys capitalism; you can't move the criminal risk to the shareholders. Increased monetary risk, sure: like making corporations convicted of crimes give money in restitution before making shareholders whole, or even going the next step and making major shareholders liable for covering that restitution (if it exceeds assets on hand) in addition to eating the loss of stock. But for capitalism to function, investing must only lead to monetary penalties (though we could certainly make them stronger).

If you have to, move it to executives, it could actually justify the insane salaries and bonuses.


I am not convinced that investors having more skin in the game would destroy capitalism. Maybe it would destroy our local interpretation of capitalism. Maybe that is a good thing.

An objective debate over the future of capitalism is something our society needs.

No, this is the system we used to have in the robber baron era and it functioned. It just leads to the concentration of control of large enterprises in the hands of individuals and small groups.

I'm amused by the downvotes for a controversial question. Check out http://paulgraham.com/say.html

It's not that it's controversial, it's just that it's not a workable idea, or at least you haven't proposed any way to make it workable. That article, if I recall correctly, is about cultural taboos. I don't think this is one of those.

Stepping up the level of fines and corporate penalties to be greater than the expected gains from bad behavior is a much more workable solution. Companies were much more careful to not act too monopolistically when the government was in full trust busting mode.


You don't think it is a cultural taboo here on HN to question the wisdom of unfettered libertarian, free market entrepreneurship?

Yes, the fines should probably be increased and enforced. But if I, as an investor, decide to put $10 into Uber, I only risk my $10 if Uber kills more people and breaks more laws. It seems to me I should have more responsibility than that for encouraging such behavior.


Should the pensioners whose pensions are invested in index funds be sent to jail for Uber's actions? Where would this culpability end?

I don't know. I have wondered that as well. We currently have a system where the most wealthy benefit and have no skin in the game. That seems wrong to me. I welcome any solution that makes people think about the entities they support.

Great, so do those shareholders all get criminal records?

Maybe they should. The executives of Uber have the tacit approval of the investors.

Corporation, n. An ingenious device for obtaining individual profit without individual responsibility.

Until it isn't. If you can show willful negligence on the part of specific individuals, then a suit against them directly can go through.

Limited liability. If you can show that specific shareholders were willfully negligent then you can go after them directly.

That is not how it appears to function in practice. The investors of Uber are well aware that Uber is breaking laws. They are making a perfectly rational decision that it will pay off.

> investors of Uber are well aware

That's an extraordinarily huge presumption to make about someone, especially when, in practice, an "investor" isn't even a natural person.

Moreover, the "person" who owns the shares may not be the one who made the purchase decision, such as with venture capital, hedge funds, and mutual funds [1].

Even an individual, natural person may buy a particular stock based purely on financials, or even just price (such as day traders). No awareness of business practices is necessary.

Then there's short-selling. Who owns those shares, at which point in time?

Most of these questions (ownership, not purchase decision responsibility) do need to have an answer for the purpose of corporate voting, but only occasionally and not to determine criminal liability for an arbitrary point in time. The latter would be remarkably impractical, if not unfair.

[1] Assuming one pierces that "veil" of ownership


At the bottom of those chains of ownership are individual humans. Our systems of law and commerce should serve people, not the other way around.

My understanding is that Uber is privately held, mostly by venture capitalists who expect it to increase in value once it monopolizes the market. We are not talking about a publicly held, well-governed organization.


> shareholders should be liable when they break laws and kill people

Shareholders shouldn't (unless there's clear mens rea, such as when they set up the corporation in order to break laws) but officers of the corporation should, and probably already are.


You can’t imprison a corporation.

Corporations cannot act, as they do not have hands. Only individuals can break the law.


You could revoke its articles of incorporation couldn’t you? Or put them under a consent decree if they don’t want that. Or perhaps force a regulator inside the company.

Just a high enough fine (and I mean HIGH) would still do something.

As it is there is no hit to Uber except their stock price for killing someone through gross negligence.


Criminal negligence seems to me to be more appropriate than manslaughter. They disabled safety features and replaced them with their own, which failed to function correctly. The whole testing program seems to have been rushed and corners were cut.

Manslaughter is what you charge someone with when their willful negligence leads to a death.

The devil is in the details, so they say, but what you describe is not negligence. They didn't disable a safety feature and do nothing, they replaced a safety feature with something else. You'd need to make the case that this something else was somehow deficient in some material way and management shipped this feature knowing it was materially deficient for this to be negligence.

Why don't we instead start with the individual who was supposed to be monitoring the road and the vehicle's systems? It's well known at this point that they in fact were negligent in their duty.


they disabled both the built-in emergency braking system and their replacement emergency braking system. they absolutely did disable a safety feature and did not have anything in its place.

They still had a person in the car who was supposed to be operating, in effect, as an emergency braking system.

As you acknowledge, the human was only the secondary driver. That meant that Uber put a primary driver in the car that was designed to not stop for pedestrians.

I think we'll see subpoenas for e-mails/slack messages and people digging through documents trying to figure out which manager/dev made this decision.

One small mistake, on something so critical, can ruin many peoples' lives.


Negligence that results in death is manslaughter by gross negligence.

The Texas version requires recklessness, and I think that's met here.


The problem is that legally speaking these systems aren't treated as the primary driver; they're treated as an optional feature, basically just an enhanced form of cruise control, etc. Where they might get into legal trouble is if the driver was not properly informed of their duties. But I'm assuming it was pretty clear they weren't supposed to be watching TV; they were supposed to be watching for the car to make a mistake and then correct it.

> Uber disabled Uber's own emergency braking system (not just the built-in Volvo one) “to reduce potential for erratic behavior.”[1]

I thought the emergency system was only disabled for the computer so that it doesn't accidentally activate it. I think the driver was still able to activate the emergency system.


I have a feeling in the software dev stack, they were like "Let's test out our computer vision. We can disable x and y. It's okay because we have a human driver."

Any assistance features reduce driver awareness. Take the Tesla car that crashed into a barrier. Even if the driver had hands on the wheel, the fact that the car was auto-lane-adjusting well for so long can relax the driver's cognitive functions. You start thinking about other things, and don't even realize your lane-assist feature, improperly named "auto-pilot", has lost its tracking and is now driving you into a barrier. You're dead.

Uber shouldn't have had these cars on the road, and they certainly shouldn't have disabled any safety features because they thought they could rely on drivers to stay alert after eight+ hours a day in these things.


Just out of curiosity, what if a road had concave-shaped lanes such that the steering was always biased toward the center of the lane? Then at some point the surface changes. Would that have a similar effect of causing drivers to potentially crash at that point?

If not, then any driver assisted implementation should emulate the feel of a concave road (or road ruts) as much as possible. Make it so that the car is more comfortable to drive, but just subtle enough so that the driver's attention doesn't drift too much.


Honda has its own “Honda Sensing” system which enables adaptive cruise control, lane keep assist and emergency braking. If you are about to leave the road/lane, it vibrates the steering wheel to get your attention. If it thinks you are not holding the wheel it also warns you and after some time disengages.
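
The warn-then-disengage behaviour described above is essentially a small escalation policy. A rough sketch of that logic, with entirely invented thresholds and action names (not Honda's actual values or API):

    def attention_watchdog(hands_off_s: float,
                           warn_after_s: float = 10.0,
                           escalate_after_s: float = 20.0,
                           disengage_after_s: float = 30.0) -> str:
        """Return the assist system's action for the current hands-off duration."""
        if hands_off_s >= disengage_after_s:
            return "disengage_assist_and_slow_down"
        if hands_off_s >= escalate_after_s:
            return "audible_alarm"
        if hands_off_s >= warn_after_s:
            return "visual_warning_and_wheel_vibration"
        return "none"

    for t in (5, 12, 25, 35):
        print(t, attention_watchdog(t))

The point of the escalation is that the driver is never silently left responsible: if they stop engaging, the system makes noise and ultimately stops assisting rather than carrying on.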

The human driver is also at fault, and maybe even more so than the company. I don't know what her instructions were, but probably the main reason to have her in the driver's seat was exactly to avoid this kind of situation. She should have been paying attention to the road, ready to take over, not staring at her phone, completely oblivious to what was going on.

This situation in a way is very similar to another story that was on HN yesterday about a guy who got "fired by a machine". We are putting a huge amount of trust on automatic systems, sometimes without a failsafe mechanism. This inevitably leads to bad consequences.


Yes, and the individuals fail in that context (i.e., the driver is at fault), but the greater failure is the management decision to put the system in the wild and create serious risk to other unsuspecting and, more importantly, un-consenting people.

Anything less is scapegoating and letting the managers get away with manslaughter (in this case).

The whole lot needs to be prosecuted, not only the driver (who also should not get let off)


The problem is that no human can actually be that attentive to such a system for hours on end day after day. Uber set an impossible task for these drivers. We shouldn't let them claim the existence of these drivers as legal protection from their own obvious negligence.

> The problem is that no human can actually be that attentive to such a system for hours on end day after day. Uber set an impossible task for these drivers. We shouldn't let them claim the existence of these drivers as legal protection from their own obvious negligence.

Did the driver get a driving license? What does that driving license tell them to do in a car? That is FAR more important than what your boss tells you; it's the freaking law.

It's like saying that if you kill someone in your car after a long day of work, you aren't responsible because your boss gave you the impossible task of driving your car after such long hours. You are still freaking responsible for your driving. Can't do it? Then don't do it, it's that simple.

Put as much blame as you want on Uber, it doesn't change the fact that she drove that car and killed that person.


I think you are missing the point a little, as that is not an apt comparison. The "driver" was not operating the vehicle. They were tasked with monitoring its function, to step in and take control if something went wrong, but humans are notoriously bad at this exact task. Now I definitely believe the person is culpable here, but it's also murky based on how Uber instructed them to operate the vehicle and how much they said you can trust the autonomous driving features.

My understanding so far was that erratic behavior was the norm rather than sporadic exceptions and that it was common that a driver had to take over. I mean, we keep hearing about Tesla "self driving" cars that are really anything but, and I wouldn't be very surprised if Uber's SDV had similarly poor levels of autonomy. Also, it has been pointed out time and again that the dashcam dynamic range didn't reflect what a human would have been able to see if they were just looking at the road like they were supposed to.

> Here, Uber put a "driver" in control of the car that by design wasn't able to avoid collisions with pedestrians

To play devil's advocate, doesn't this describe any profession involving regular vehicles (mail delivery, taxi drivers, etc)? If a taxi driver gets into an accident from dozing off and relying on auto-braking features that aren't there (either because he has them in his personal car but not in his work car, or a company technician forgot to reconnect them after a repair or whatever), whose fault should that be?


people do get charged if they run down pedestrians due to negligence.

Sure, but which people? The parent is saying Uber (the company) should be charged, but given the circumstances, I'm not sure if that makes sense.

>If Uber had directed a five year old to drive the SUV, with a back-up driver ready to take control, I think everyone would agree that qualifies as gross negligence notwithstanding the presence of the back-up driver.

This is a bad analogy. First, Uber's software is much better than a 5 year old. A 5 year old would have hit something well before a jaywalking pedestrian. Second, the backup driver was fully capable of stopping the vehicle, or navigating around the jaywalker. The person they hired is responsible for safe operation of the vehicle. They failed.


"Uber" should be charged with manslaughter -- but who specifically? The driver appears to be the most logical party if they were the one in operation of the vehicle.

Want results? The CEO of Uber should get charged with manslaughter.

A related BBC article[1] states:

"A toxicology test carried out on Ms Herzberg after the accident returned positive results for methamphetamine and marijuana.

She did not look before crossing the road in a poorly lit area and was wearing dark clothes, the NTSB report says. And the reflectors and lights on her bike were at right-angles to the Uber car's path."

Even though the self-driving software failed to recognise her, and also totally not excusing the lack of attention of the Uber driver, you cannot rule out that the pedestrian would have been hit anyway under normal circumstances with a non-automated car.

Although this is a sad event, the pedestrian does carry a certain amount of blame here. It also shows that the biggest blocker to effective self driving vehicles is people, not technology.

---

[1] https://www.bbc.co.uk/news/technology-44243118


What does it matter what drugs the woman hit by the car was on? It changes nothing about the case.

Looking at the video, it's clear that at least some evasive action could have been taken by an attentive driver; at the least it should have been possible to hit the pedestrian at a lower speed. This article claims the car had six seconds to react (although I'm not sure what that's based on) and that the crash was avoidable if the operator had been paying attention, according to police.


Because drugs have side effects, which may have contributed to why she put herself in the position to be hit by a car that she probably should have seen.

But in this case we don't have to wonder about the circumstances; we know where the car and the pedestrian were and exactly what happened. Thus going into the toxicology report serves no real purpose other than unfairly casting aspersions on the person killed.

All of these reports take into account the intoxication of any parties involved. Not sure why it’s unfair to point out that the person illegally in the road was in a state that contributed to the accident. It’s not like she was just standing on the sidewalk minding her own business.

How did her "state" materially contribute to the accident?

Why are we treating the jaywalking here with such a light touch? If the pedestrian had crossed at a crosswalk, we wouldn't be having this discussion at all.

Jaywalking shouldn't carry a death sentence.

If the car can't handle a jaywalker, it can't handle a toddler running into the street. That's a problem that shouldn't be hand-waved away.


100% agree. Whether we like it or not though, sometimes it does mean death--even with normal cars. That's why crosswalks are there in the first place.

Can a human driver handle those things?

I had a near-collision when I was headed to work one day. I was going the speed limit on a major four lane road. I noticed ahead of me that a car was stopped in a neighboring lane. It wasn't clear if it had broken down or what, so I started slowing down. Cue a group of kids jaywalking, and running in front of my car causing me to slam on my brakes.

Had I not seen or reacted to the already stopped car (or had it simply not been there), I probably could not have stopped in time. Not that I wouldn't have absolutely tried my best, but physics dictates that a compact SUV going at 45 mph doesn't come to a stop immediately.

That doesn't mean we shouldn't evaluate whether Uber seriously messed up and should face dramatic consequences - it certainly seems like they disabled a critical safety system that would have saved the pedestrian's life.

Rather, we shouldn't make grandiose statements like "Jaywalking shouldn't carry a death sentence" which don't really touch on the facts of the case. We should instead ask, "Did Uber's negligence cause this to result in a death when it shouldn't have been one?"


> Can a human driver handle those things?

I'll point out that in this case the crash was "entirely avoidable" according to the official police report.


One of the scenarios that sticks in mind from learning to drive in the UK is the bouncing ball. Drive through residential area, ball bounces out in front, you immediately go for the brakes because there's a good chance there's a kid chasing it.

Does Uber do that? Do any of the more advanced systems (lidar, radar etc) do that?


If the car didn't detect and stop for a jaywalking pedestrian, how can we be sure it would have detected a pedestrian in a crosswalk under the same conditions?

Because this is an entirely possible scenario in a country that self driving cars might operate in where jaywalking is not an offence.

Have you seen the overhead picture of the area in question? There's a path through the median that she was crossing from. If they didn't want people to cross there, the median should be designed differently.

"A toxicology test carried out on Ms Herzberg after the accident returned positive results for methamphetamine and marijuana."

You know. So what?

I always think it's especially galling when, for example, a cop shoots a black teenager and the police department's spinmeisters immediately start to smear the victim.

As if smoking marijuana is a character fault and makes the victim implicitly a crook with low morals.


People on drugs tend to behave in erratic ways.

In the case of a jaywalker, this might be as simple as crossing with no warning when a sober person might not cross. Any driver (human or machine) should of course do their absolute best to avoid a collision, but whether or not that collision is avoidable is determined in part by the object you collide with.

In the case of violence by the police, somebody who is holding a weapon and behaving erratically is inherently more of a threat than somebody holding a weapon and complying.

There's nothing per se "evil" about drugs, but somebody with meth in their system is a hell of a lot harder to predict than somebody sober, whether that's in terms of jaywalking or a violent confrontation.

When we evaluate situations, especially emotionally involving situations, we have to try and gather the totality of evidence.


"In the case of violence by the police, somebody who is holding a weapon and behaving erratically is inherently more of a threat then somebody holding a weapon and complying."

Except that in a lot of cases the victims were unarmed.


I recently took up martial arts, and after sparring a few times, I've learned that you do not need to be armed to be a threat. If somebody drives an elbow to the back of your head with force, you're going to the hospital. If somebody manages to get you in a clinch, you are likely going to pass out. If somebody manages to get you on the ground while they're standing - you better really hope they're content to walk away at that point because they can permanently mess you up for the rest of your life.

Any case of self defense needs to be evaluated by a court. I'm not suggesting police should get a free pass, but that somebody is unarmed does not mean they are not a lethal threat.


I wouldn't claim that all cop shootings are in bad faith by the cops. Sometimes violence is necessary (alas, it seems to happen in the US far more often than in other developed countries).

But if you have a gun and the person you are targeting doesn't, then threats like the ones you describe usually don't come into play.

It especially doesn't explain cops shooting people in the back, when they very clearly pose no threat.

And they often would have got away with it were it not for video.

And still victims are smeared.


You have a pistol pointed at someone who you perceive to be a threat. The person charges you from 30 feet away. Do you shoot them? Does that change if they have a knife? What if it's dark and they're holding something that seems like a knife but you're not sure?

Holding a gun doesn't make you immune to being injured or killed. Pointing a gun at someone doesn't always make them comply. A gun is a deadly weapon, but it's not a token of immunity and compliance. Shooting someone doesn't magically stop them or cause them to become a statue or a corpse.

> cops shooting people in the back

You're a police officer. You're called to investigate a violent rape by a suspect armed with a knife. The suspect runs and is quickly getting away. Do you:

1. Shoot them in the back

2. Let them escape and possibly rape someone else, hoping you catch them in the future

Neither of those is a good outcome and, just to make things more confusing, different states and departments have different rules in those situations.

> still victims are smeared

You can't know who the victim is until after the dust has settled. If someone gets shot by a police officer, they might have been an innocent person doing nothing wrong, or they might have been a violent predator who sexually assaults and murders kids for fun. When we're trying to assess that situation, you need all the evidence to come out - whether that hurts the cop (How many times have they used their weapons before? Do they have previous misconduct charges? Have they made racially charged statements previously?) or the person on the other side (Do they have a history of violence and crime? Are they a known felon? Were they on drugs?)

There are a few YouTube videos of use-of-force activists undergoing police training (https://www.youtube.com/watch?v=yfi3Ndh3n-g) and one of the things that stands out to me is that they are very, very fast to pull the trigger compared to most police officers. That isn't to say their points are invalid, but it's easy to criticize from a sofa and it's a different thing to be in that fight-or-flight life-or-death moment of decision making.


Option 2. There is no other choice. Using lethal force against someone who does not pose an immediate threat is murder. It is never justifiable to shoot a fleeing suspect in the back.


> Under U.S. law the fleeing felon rule was limited in 1985 to non-lethal force in most cases by Tennessee v. Garner, 471 U.S. 1. The justices held that deadly force "may not be used unless necessary to prevent the escape and the officer has probable cause to believe that the suspect poses a significant threat of death or serious bodily harm to the officer or others."[2]

> People on drugs tend to behave in erratic ways.

This, however, is part of the problem.

She had traces of drugs _in her system_. Without knowing the amounts, you cannot say she was _under the influence_.

You can detect marijuana in the system several days after smoking a single joint. You're certainly not high at that point.

I'm not sure what the detection timeframe is for meth.

Certainly, if someone is _under the influence_ of a drug or drugs, that's one thing.

I also work for Medic One in my area, training new EMTs. One of the perennial discussions is about being "altered" and consent, because we discuss implied consent in the context of intoxication. It has to be explained that someone having had one drink does not impair them or their ability to consent, and that the relevant tests revolve around the ability to comprehend the consequences of decisions and actions. Simply put, I ask them, "You had a beer. Are you allowed to drive home?" "Yes, of course!" "So why would you not be allowed to make your own medical decisions?" "Ohhhh..."


Walking into traffic without paying attention and getting hit by a car is incredibly stupid. A sober person would be far less likely to do that.

Have you ever done anything stupid?

Sure, but fortunately nothing nearly stupid enough to get me killed.

The driver is at fault.

Yes, I’ve said the same as well. But to pretend like you don’t know why it’s significant that the victim was on meth is ridiculous. You have to be pretty fucked up to just walk in front of a car that even a blind person would know is coming

D.A.R.V.O.

Deny

Accuse

Reverse Victim and Offender

It’s a playbook that’s as old as people, and sadly it works, especially when there’s a significant power asymmetry at play.

That didn't happen.

And if it did, it wasn't that bad.

And if it was, that's not a big deal.

And if it is, that's not my fault.

And if it was, I didn't mean it.

And if I did...

You deserved it.

All of this is usually applied to narcissists, psychopaths and sociopaths, but it works equally well for immoral corporations.


Other videos show the area was not "poorly lit"; it only looks that way in the potato-cam footage made available by Uber personnel.

These were normal conditions.

1. The road was NOT poorly lit.

2. The self-driving software recognized the obstacle just fine, but was programmed to DO NOTHING.


So they'll drop it on the driver?! The whole setup was an accident waiting to happen. Uber executives should be held responsible or this will happen again and again.

Right, Uber's position will be / is, simultaneously, one - "it's not our fault, that's why we had the human driver there", and two - "in order to cut costs, we used to have a human passenger there to record anomalies while driving - we axed that and require the human 'driver' to do so, whilst being responsible for the safety of the vehicle".

Paying a "safety driver" to sit in the car seems like a small price to pay if it means this (minimum wage?) person takes the manslaughter charge instead of Uber.

>> minimum wage?

Lol. You can get a real non-robot driver for that price. The whole point is to get some sort of contractor non-employee to work below the minimum wage.


Another facet of this is how quickly people adopt new technologies - whether they're proven to be safe (or good) or not. Personally, I've evolved into not trusting technology, not trusting the people behind most of the technology being produced today.

In contrast, this woman was so quickly at ease (I wonder what she was told beforehand, during her training) that she felt comfortable enough to watch TV. I also wonder, as this kind of tech progresses, how it will be sold to the public. Perhaps the same: "we take your security seriously"...


The NTSB preliminary report directly contradicts this article.

https://www.ntsb.gov/news/press-releases/Pages/NR20180524.as...

On scene police reports are often unreliable. This is why the NTSB does not speculate before the investigation is completed.


Can we put this issue to bed once and for all? Humans are not sufficiently equipped to act as a 'backup driver' in emergency situations and any system which relies on such a thing for safety is inherently unsafe.

Doesn't matter if you glue our hands to the steering wheel and hold our eyes open, if we're not doing anything 99% of the time we won't be ready to react with split second timing to recover from some failure.


Contrary to many here, I feel that it's perfectly reasonable to have a backup human driver if they are trained to act accordingly.

The Japanese pointing-and-calling technique comes to mind as a good example of keeping drivers engaged: they would have to continuously, actively point at dangers and at the car's appropriate response.

https://www.youtube.com/watch?v=9LmdUz3rOQU (quite fascinating to watch)

Combine this with short sessions (not driving around for hours with nothing to do), and I think the driver would have had a reasonable chance of preventing this accident.


> I feel that it's perfectly reasonable to have a backup human driver if they are trained to act accordingly.

Agreed. I think the right analogy is an airplane pilot. Operating an airplane and an autonomous car require special training and professional operators. Replace car crash with plane crash. My judgement is predicated on - did Uber train their drivers? Did Uber evaluate their drivers before allowing them to operate on their own? What policies did Uber have in place for drivers? What punishment did drivers typically receive for violating policies? Did Uber occasionally review footage of the driver to verify driver compliance?

It's hard to expect a layperson to pay attention for hours on end without input, but somebody appropriately trained should be able to do so.


This incident reminds me of a post Nicholas Carr wrote. We are offloading critical activities to automation, but at critical times human expertise is needed to resolve dangerous situations. The driver put too much confidence in the automation and watched a TV show on her phone.

Not really surprising. You can't be kind of driving. We really should not allow L2 and L3.

It should be just L1, and then L4 and L5, where no driver is needed.


I had an Uber driver who was watching an extremely graphic and violent movie on his phone, mounted directly in his field of view. I had him drop me off early and reported it to Uber. Talking to friends, it's apparently becoming more and more common.
