> We're a bit further away from "Cameras working just as well as your vision" than I think you're anticipating.
Well surely that's what we should be testing. Is a self-driving car system safer than a human, or not? You're correct that self-driving car systems lack a "general learning and intelligence model," but they also have several obvious advantages over humans, like potentially better-positioned cameras, a more reliable "attention" system, faster reaction times, potentially more "experience" with different road conditions and scenarios, etc.
> I’ve seen analysis that suggests the overall safety numbers for self-driving mode are better than human drivers, though I’ve not dug deeply enough to be confident in them.
All similar analysis I've seen compared all-road all-vehicle driving stats against autopilot stats. Of course, the correct comparison would be against luxury cars (with similar safety features) and only on the same kinds of roads that autopilot gets used on (i.e. primarily easier highway driving). As far as I can tell we are extremely far away from the hypothetical you raise where self-driving is better than human.
> Would you still want one if the data showed it was more dangerous than driving yourself around? We don't yet know if it will be
While we might not know yet whether self-driving cars are /already/ safer than human drivers, it is virtually impossible that they won't become much safer very quickly. Bugs in software can (and will) be fixed: each car gets an update, and the accident in question won't happen again, ever. Humans, on the other hand, cannot be updated easily, if at all.
I've had conversations where people told me that self-driving cars will need to be 100% perfect before they should be used. Ironically, one of those people was an ex-girlfriend of mine who caused two car accidents because she was putting on makeup while driving.
Anyway, based on Google's extensive test results, I'm pretty sure self-driving cars are already advanced enough to be less of a risk than humans. Right now, the sensors seem to be the limiting factor.
> I am not sure if an autonomous car will be safer. I mean, can they even do panic braking right now (for example, if someone or something jumps in front of the car)? You know, safety is often not about having the fastest reaction time, but also about having good anticipation that any decent human driver will develop in a short time...
I think any hopes that cars will be better than humans at it are quite naive...
Even if we assume that cars won't be as good at stopping for people jumping in front of them, I would bet the number of accidents that happen from something like that is significantly smaller than the number from human error like texting while driving, being drunk, or just not paying attention due to fatigue or whatever else.
Not to mention, how can you really even tell the difference between someone who's walking towards your car but will stop and someone who won't? Not even humans can do that, since we can't read minds.
> we may already be hovering around or exceeding equivalent human metrics.
Okay, bearing in mind your second paragraph, what are the conditions under which they're safer? I've been in a few self-driving cars and I'd struggle to see how they would ever get to an acceptable standard - like, pass UK driving test kind of standard.
> I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
This assumes that the self-driving tech will continue to increase in competence and will at some point surpass humans. I find that extremely optimistic, bordering on naive.
Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it cannot come anywhere close to a human in terms of reliability. And I am talking about stuff that can be trained endlessly without any sort of risk. Still, it does not show ever-increasing capability.
Now, machine learning and AI are only part of the picture. The other part is the sensors. These, again, are nowhere near the sensors a human is equipped with.
What we have seen in the tech industry in recent years is that people's trust in a technology, even that of intelligent people such as those investing in it, is not based on logic (Theranos, uBeam, etc.). I think such a climate is exactly what is enabling tests such as these. But unlike the others, these tests are actually putting unsuspecting lives on the line. And that should not be allowed.
> the proponents of self-driving cars tout the fact that computer driven cars are way safer than human drivers. And they're not.
It sounds like you're conflating two claims: safety(computer car) == 100% and safety(computer car) > safety(human driver). Refuting the former (which is easy) does not refute the latter, though the converse would be true, of course. But is it actually true that the human driver is safer, or equally safe?
> because if they can brake 100ms faster than a human would, in practice it makes no difference
That claim would be stronger with better substantiation. I think it makes sense that faster reaction time and better focus lead to a better safety record. That's why we prosecute people for DUI: we have reason to believe that slower reactions and more distracted driving under the influence lead to worse safety. Consequently, faster reactions and less distracted driving would lead to better safety. The system will still be imperfect, but likely less imperfect than a human driver, with a lower failure rate.
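To make that concrete, here is a back-of-the-envelope sketch (my own illustrative numbers, not from any study) of how reaction time feeds into total stopping distance via d = v * t_react + v^2 / (2 * a):

    # Illustrative stopping distances; decel of ~7 m/s^2 is a rough dry-road figure.
    def stopping_distance(speed_kph, reaction_s, decel_mps2=7.0):
        """Meters needed to stop: reaction distance plus braking distance."""
        v = speed_kph / 3.6  # km/h -> m/s
        return v * reaction_s + v ** 2 / (2 * decel_mps2)

    for label, t in [("alert human", 1.0), ("distracted human", 2.0), ("computer", 0.1)]:
        print(f"{label:>16}: {stopping_distance(100, t):.1f} m from 100 km/h")

An extra second of reaction time at 100 km/h adds almost 28 meters of travel before the brakes even engage, which is the whole DUI argument in one number.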
> What would make a difference would be to be fully aware of anything that might happen,
You make an impossible demand: no mechanism or system could be ready for "anything that might happen", and nobody - at least nobody without supernatural clairvoyance - could be aware of absolutely everything everywhere. There will always be weird coincidences and hidden things; the question is how fast the system can adapt and find a safe(r) solution.
> Self-driving cars don't have three things that humans have: instinct, experience and contextual awareness
They certainly have a system analogous to "instinct", and they can learn (even if not in the same way as humans, but who says that's the only way?). It is true that they don't really understand context, but I wonder how important that is compared to speed of reaction, lack of distraction, and ability to control the vehicle? I would estimate most accidents happen because the driver either didn't notice something, didn't react in time, or reacted suboptimally (like slamming the brakes on an icy road, or swerving too hard and losing control). A computerized driver would be much better on those dimensions. It would be worse at deciphering complex and unusual scenarios, but I don't think most accidents are of that nature.
> Even if the self-driving system is relatively good, is the human+self-driving system really better?
Say for example, the designers never trained the car's system to recognize someone on a skateboard as an obstacle.
Obviously a lidar based system would spot the skater, but a camera+ai system depends on being trained to identify each type of obstacle as an obstacle and avoid it.
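A toy sketch of that failure mode (hypothetical names and thresholds, not any vendor's actual pipeline): a geometric sensor treats any solid return in the path as an obstacle, while a classifier-driven pipeline only reacts to objects it was trained to label.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        distance_m: float
        label: Optional[str]  # classifier output; None if no trained class matched

    TRAINED_CLASSES = {"car", "pedestrian", "cyclist"}  # note: no "skateboarder"

    def lidar_sees_obstacle(d: Detection) -> bool:
        # Geometry only: anything solid and close is an obstacle, named or not.
        return d.distance_m < 50.0

    def camera_ai_sees_obstacle(d: Detection) -> bool:
        # Class-dependent: unrecognized objects effectively don't exist.
        return d.distance_m < 50.0 and d.label in TRAINED_CLASSES

    skater = Detection(distance_m=12.0, label=None)  # never seen in training
    print(lidar_sees_obstacle(skater))      # True  -> brake
    print(camera_ai_sees_obstacle(skater))  # False -> drive on

The point is which way each system fails: the geometric one fails toward braking for harmless objects, the classifier one fails toward ignoring real ones.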
> The fundamental problem of self-driving cars in cities is that they are expected to be 100% safe. They could be 10 times as safe as regular cars, but it wouldn't save them from headlines of "robot car kills an innocent person".
Well, yes, because all the proponents of self-driving cars tout the fact that computer-driven cars are way safer than human drivers. And they're not. That "10 times safer than humans" has no actual meaning in practice, because if they can brake 100ms faster than a human would, in practice it makes no difference. What would make a difference would be to be fully aware of anything that might happen, have solutions for every possibility, and react accordingly in, I don't know, say 20-30 milliseconds. So when a self-driving car hits and kills a pedestrian, it is a failure of the self-driving system (assuming the pedestrian doesn't just jump in front of the car out of nowhere).
Self-driving cars don't have three things that humans have: instinct, experience and contextual awareness. No matter how many miles they drive, they can't learn as humans do.
> Most people think they're a better driver than average
If he doesn't do any high-risk driving, the relevant risk comparison is much different from the general population's.
Driving skill is not relevant to his point (you could assume he's an average driver who doesn't engage in high-risk driving). If he's under 50 and healthy, that's probably a considerable risk decrease (reflexes/vision, and less chance of a medical condition while driving).
So even if self driving cars were safer than human drivers in total, it could be significantly more dangerous for him.
> Whatever its flaws will be, they will be consistent.
> But what if self-driving cars were 10x safer than human-driven cars?
But there is no evidence of this, or that they're even as safe as human drivers. In fact, Teslas are involved in more accidents than other luxury cars.
> What if I drive 5x better and safer than the "average driver"? Doesn't that mean that these cars would now put me in more danger than I would normally be in with my own manual car?
Because you are unlikely to always be that.
Are you tired? Do you have a cold? Are you hungry? Are you playing with the radio? Are you having a conversation with a passenger?
The huge advantage that self-driving cars will have is integrated systemic control from multiple sensor sources and multiple transducer outputs--and they will never get distracted.
Human drivers have mostly one input: vision.
Human drivers have mostly two outputs: hands and feet.
The lag in that system from input to output is >100ms even if you are perfect (I will grant you an exception if your name is Michael Schumacher).
Automated systems will have far more inputs. Multiple vision sources. Multiple radars. Temperature readings. RPM readings. etc.
Automated systems have far more outputs. Steering is the same. Brakes on individual tires is the big starter. With electric, individual suspension becomes feasible. Transmissions change form and become electrically controlled.
And the lag in the system is probably <10ms. That's huge. That's the difference in reacting in roughly 3 meters (2.7 meters) vs 1/3 of a meter at 100kph (9 feet vs 1 foot roughly for 60mph).
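For the skeptical, those parenthetical figures are just distance = speed x lag, which you can verify in a few lines (illustrative numbers only):

    # Distance travelled during the control lag, before any response begins.
    def lag_distance_m(speed_kph, lag_ms):
        return (speed_kph / 3.6) * (lag_ms / 1000.0)

    print(lag_distance_m(100, 100))  # ~2.78 m for a ~100 ms human-scale lag
    print(lag_distance_m(100, 10))   # ~0.28 m for a ~10 ms automated lag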
Finally, automated systems will take subtle preventative actions all the time. They will "see" the child on bicycle on the sidewalk (the idiot detector as Google called it) and "know" that they need to find him again when they attempt to turn right or slow down significantly. They will see the dude on a skateboard and know that they need to give him an extra couple feet if they can in order to deal with him falling. I know very few drivers that are that diligent.
> they'll be objectively better at X than humans, without any evidence to backup the assertion.
Google's road-tested self-driving car is already safer[1] than a human driver. Suggesting that a computer will be a more reliable processor of data and computer of maths than a human is not something which needs data to back it up. The ability of drivers is so variable, and in aggregate there are so many of them, that it's almost self-evident that a self-driving car which crosses a very low threshold for entry ("being road legal") will be better than a human.
> They can't possibly account for every situation or scenario - but people hand-wave and say it magically will.
Nobody is saying that they will any more than people argue that autopilot on a plane will. It's very plain to see that right now, as of this second, there is a self-driving car which is safer than a human driver. It is not yet legal to buy, but it doesn't change the fact that it's safer. It may be that a bug crops up which kills a few people. But that doesn't make it less safe, it makes the cause of death for some of the users different to "human error".
> Most of us are going to expect nothing short of perfection from these machines to really trust them.
I hope not. I'd like them as safe as possible, but I already expect that they'll be better than human drivers, and that ought to be sufficient to allow them.
However, there's another issue in play here: in addition to the possibility of holding self-driving cars to a higher standard than humans, people like the feeling of control, and will feel "better" about an accident where they see someone to blame.
> People used to say this about every single thing that computers can do better than people.
But computers can't do it better than people! That's what drives me nuts about this debate -- it's just accepted as a premise that either the self-driving cars are much safer than human drivers, or the path to getting them there is very close and no serious obstacles remain. Neither is true and it's not clear they will be. https://blog.piekniewski.info/2017/05/11/a-car-safety-myths-...
> When Tesla or any other company can prove to me that their cars are safer than human-driven vehicles, I won't oppose them on my roads.
How can they be expected to prove their safety record to you without driving hundreds of millions of miles on public roads? They're already provably safer than humans in artificial "test course" scenarios.