Remember the dead Uber pedestrian: the AI dismissed the data as faulty and did not slow down at all.

I think every human would slow down on seeing something they cannot explain. An AI will not.

It's basically the same problem as when an image recognition AI is 99% sure that the photo of your cat shows guacamole.

Current AIs do not have a concept of absolute confidence; they only produce an estimate relative to the other possibilities that were available during training. That's why fully novel situations produce completely random results.
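To make that concrete, here is a minimal sketch (Python; the class names and logit values are made up for illustration) of why a softmax classifier's confidence is only relative:

    import numpy as np

    def softmax(logits):
        # Renormalize raw scores into probabilities that sum to 1.
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    # Hypothetical classifier trained on {cat, dog, guacamole}: a novel
    # input yields meaningless logits, but softmax still reports a
    # confident winner -- there is no "none of the above" class.
    logits = np.array([0.1, 0.2, 4.0])
    print(softmax(logits))  # -> [0.019 0.021 0.959], i.e. "96% guacamole"

The probabilities always sum to 1 over the known classes, so even garbage input gets a confident-looking answer.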



> dead Uber pedestrian

Elaine Herzberg was wearing dark clothing and crossing a dark street well away from any crosswalk or street lighting. Would a human driver have performed better? From the footage I saw, she was nearly invisible; I would have hit her too.

https://www.youtube.com/watch?v=ufNNuafuU7M

This was not a hard fail for the AI.


The exposure time and the dynamic range of the sensor affect the visibility of the person in that video - it is very likely that a non-distracted human would have performed better.

The vehicle was equipped with both radar and lidar. The victim was detected as an unknown object six seconds prior to impact, and at 1.3 seconds prior to impact, when the victim entered the road, the system determined that emergency braking by the driver was required; however, the (distracted) driver was not alerted to this.


> the system determined that emergency braking by the driver was required; however, the (distracted) driver was not alerted to this.

Why would the system notify the driver that emergency braking was required instead of simply braking?


"the nascent computerized braking technology was disabled the day of the crash"

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg


The footage shows how the car's cameras saw the accident. Human eyes have much greater dynamic range than that, so I wouldn't assume that a human driver would not have performed better. Something appearing pitch black in this footage could be well recognisable to you. This car's lidar also failed to recognise the pedestrian, so if this isn't an AI fail then I don't know what is.

This is absolutely a fail; a woman died. The question is whether this incident is an example of "fails harder than most average drivers" or a hard fail.

With the sensor array available to it, the car should have done better, no question.

But to make the claim "fails harder" I would be looking for a clear-cut situation where a human would almost certainly have outperformed the AI.

Human eyes do have miraculous dynamic range, so we would likely see more. Can we say with 90% certainty that a human would have saved the situation?


"There's something out in front of me, an unclear shape right in my path, relatively far, 6 seconds out. I will drive straight through it instead of slowing down, because...[fill in your Voight-Kampff test response]". Well? Is that at least 90% human?

Moreover, try this dashcam video: https://youtu.be/typj1asf1EM It's 10 seconds long and makes the pedestrian look almost invisible except for the soles.

However, when I took that video, both the crossing pedestrians were clearly visible, not vague shapes that you only notice when you're told they exist. So much for video feed fidelity.


> Can we say with 90% certainty that a human would have saved the situation?

Yes, given the misleading nature of the dashcam video, I think we can. This was not a pitch-dark road lit only by headlights where an obstacle "appeared out of nowhere". This was a well-lit main street with good visibility in all directions. An ordinary human driver would have had no problem identifying Elaine as a hazard and taking the appropriate avoiding action, which was simply to slow down enough to allow her to cross the road.

The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.


Based on the evidence you have put forth, your conclusion is logical and reasonable. You have convinced me that my statement was in error.

> The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.

"According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review"

She was looking at a device, yes, but not her phone.

Uber put one person on a two-person job, with predictable results.


After the crash, police obtained search warrants for Vasquez's cellphones as well as records from the video streaming services Netflix, YouTube, and Hulu. The investigation concluded that because the data showed she was streaming The Voice over Hulu at the time of the collision, and the driver-facing camera in the Volvo showed "her face appears to react and show a smirk or laugh at various points during the time she is looking down", Vasquez may have been distracted from her primary job of monitoring road and vehicle conditions. Tempe police concluded the crash was "entirely avoidable" and faulted Vasquez for her "disregard for assigned job function to intervene in a hazardous situation".

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg#Distr...


Fair enough, you're right, she was likely looking at her phone.

The rest of my point still stands though.


This isn't one of the actual driving cameras; it's a shitty dashcam with variable exposure, and the footage was then heavily compressed. It is not used at all for self-driving.

Just adding this in case you think we should somehow give Uber the benefit of the doubt here. They released footage from a pinhole dashcam sensor that is not used by the system, knowing full well it would look pitch black and send the ignorant masses into a "she came out of nowhere!" chant.


LIDAR would have picked that up dead easily.

Just like LIDAR would have picked up https://www.extremetech.com/extreme/297901-ntsb-autopilot-de...

And just like LIDAR would have picked up https://youtu.be/-2ml6sjk_8c?t=17

And just like LIDAR would have picked up https://youtu.be/fKyUqZDYwrU

And just like LIDAR would have picked up https://www.bbc.co.uk/news/technology-50713716

These accidents are 100% due to the decision to use a janky vision system rather than spend $2,000 on lidar, and to that janky vision system failing.


"Brad Templeton, who provided consulting for autonomous driving competitor Waymo, noted the car was equipped with advanced sensors, including radar and LiDAR, which would not have been affected by the darkness."

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

The car had LIDAR.


It had lidar and ignored an unknown object it was tracking. If that's not damning for the whole field, I don't know what is.

Yep, and it detected Herzberg in the roadway with plenty of time to spare.

"the car detected the pedestrian as early as 6 seconds before the crash" [...] "Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”" [1]

'Oh well, it was very dark' is not a factor in the crash that killed Herzberg.

[1] https://techcrunch.com/2018/05/24/uber-in-fatal-crash-detect...


It’s $75,000 for the LIDAR sensor, a far cry from $2,000.

I think that was the price of Velodyne's 64-laser LIDAR. They've discontinued it, and the top of the line is now the Alpha Prime VLS-128, which has 128 lasers and costs ~$100K.

There are many other cheaper LIDARs, even in the Velodyne lineup, but they are less capable.


The car had LIDAR. It wasn't an equipment failure; it was a failure on the part of the programmers. They had programmed in a hard delay to prevent erratic braking, and the system was programmed to dump data whenever an object was re-categorized. The system detected the person in the road as an unknown obstruction and properly detected that it was moving into the road, but it re-categorized that obstruction 4 times before correctly identifying it as a person. By that point, the velocity and LIDAR data had been dumped because of the re-classifications, and the car only had <2 seconds to stop.
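As a hypothetical illustration of that failure mode (Python; the class and field names below are invented for illustration, not taken from Uber's actual code): if a tracker throws away an object's motion history every time its classification changes, the velocity estimate, and with it any prediction that the object will cross the car's path, is lost on each reclassification.

    class Track:
        # Toy tracker illustrating the reported bug: motion history
        # is discarded whenever the object's classification changes.
        def __init__(self, position, category):
            self.category = category
            self.positions = [position]

        def update(self, position, category):
            if category != self.category:
                # Reclassified (vehicle -> bicycle -> unknown ...):
                # start a fresh track, dumping the motion history.
                self.category = category
                self.positions = [position]
            else:
                self.positions.append(position)

        def velocity(self):
            # Fewer than two samples means no velocity estimate,
            # hence no predicted path and no reason to brake.
            if len(self.positions) < 2:
                return None
            return self.positions[-1] - self.positions[-2]

    # Each reclassification wipes the history, so the object never
    # appears to be moving:
    t = Track(position=10.0, category="vehicle")
    t.update(9.0, "bicycle")  # history reset
    t.update(8.0, "unknown")  # history reset again
    print(t.velocity())       # None -> no braking decision possible

Under a scheme like this, an object re-categorized four times in six seconds never accumulates enough history to look like a crossing pedestrian until far too late.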

Seriously, that is hopeless. It is worse than I'd expect from a university project.

Nice story... if only it matched the evidence. The algorithm detected the person 6 seconds before the crash, but as it didn't match any of the object classes conclusively, it took no action. You read that right: "there's something at 12 o'clock. Bike? Person? Unknown? Who cares, proceed through it!" If that's not a hard fail, IDK what is.

That's not what happened. The AI was not programmed to drive through anything. It was incorrectly programmed to dump previous data when the categorization changed. It correctly identified at each point that it was meant to slow down and/or stop, but by the time it determined what the obstacle was, the previous data had been thrown out and it didn't have enough time to stop properly. In your example, it was more like "There's something at 12 o'clock. Bike? Person? Unknown? Stop!!" just before actually hitting the person.

The car did "see" an obstacle for over 6 seconds and did not brake for it; now someone is dead. You are haggling over semantics to make it look like this did not happen and/or that this is not a bug. Atrocious.

(Or, more charitably, "oops, somebody forgot that object persistence is a thing" does not excuse the result)


What? That's not at all what I'm doing and you're being extremely disingenuous to suggest that. I'm simply correcting misinformation. The car wasn't programmed to drive through anything. It was programmed to throw away information. Either way, it's an atrocious mistake and I've even said, elsewhere in these comments, that the people responsible for that code should be held liable for criminal negligence. There's no need to lie about my point or my position to defend yourself. That's just silly.

I have misunderstood you then, and I apologize.

Then I forgive you, and I'm glad we see eye-to-eye on this. Everyone should be appalled at Uber's role in this, at their response, and at the lack of repercussions for them.

The video released by Uber was extremely misleading. Here is a video on YouTube of the same stretch of road taken by someone with an ordinary phone camera shortly after Elaine’s death: https://www.youtube.com/watch?v=CRW0q8i3u6E

It’s clear that a) the road is well lit and b) visibility is far, far better than the Uber video would suggest.

An ordinary human driver would have seen Elaine & taken evasive action. This death is absolutely Uber’s responsibility.


> An ordinary human driver would have seen Elaine & taken evasive action.

Looks like this was a hard fail for the AI then. We can say with better than 90% certainty that a human would have saved the situation, probably would have stopped or avoided easily. My mistake.


"May have seen" is more appropriate as every day pedestrian get killed on well lit road by human drivers.

The assumption was for an ordinary driver: the expectation is that, given sufficient lighting, the vast majority of drivers would see and avoid a pedestrian. Most of the millions of daily pedestrian-vehicle interactions go by without incident, with one or the other party giving way, so this would be the normal expectation for an ordinary driver.

We can reasonably assume that pja is aware of the existence of abysmal drivers and fatal crashes that should not have happened. I doubt their intent was for "would" to be interpreted as "100%".


Which is also true. This is perhaps the underlying issue: we expect cars to be safe while also expecting to drive fast in inherently unsafe conditions. In other words, the actual driving risk appetite is atrocious, but nobody's willing to admit it when human drivers are in the equation. SDVs are disruptive to this open secret.

Some humans drive blackout drunk. You overestimate our competence.
