> Instead, the consensus has become that we’re at least 10 years away from self-driving cars.
I'm going to assume the founder of a self-driving truck company knows what he's talking about.
But at the same time, I have a hard time reconciling that with the fact that I sat in a car that drove itself all around San Francisco, dealing with a ton of edge cases.
Maybe we won't get to a 100% drive-anywhere-you-want car in 10 years, but to be fair, a lot of humans aren't capable of driving a car anywhere either.
There are a lot of LA drivers who can't drive in snow, for example. I was one of them, until I got practice, and even then, I'm not that safe at it.
I think as long as we set the bar at "drive anywhere a car can go with 100% safety" we will never reach that bar.
But if the bar is at "drive as well as a human in most of the places humans drive", well, I've already seen a car that can do that.
If that were the case, self-driving cars wouldn't be on the road. I don't think we should aim for perfection; perfection will come. We should be looking for cars that make fewer errors on average than humans. Once you have that, you can start putting cars on the road and use data from the fleet to correct the remaining errors.
> On the subject of danger. Literally everything we do is dangerous to some degree.
I don't disagree with your take and I'm a self driving car proponent, but I'm worried about what process we take to get there.
One thing I've taken away from the pandemic is that people seem to have no problem imposing their tolerance for risk on others. Seems like we are on a path to play this dynamic out again in how self-driving cars come to market unless that safety profile is really well controlled and understandable.
Even if at a population-level self-driving is slightly safer statistically than person-driving, there are enough edge cases to give me pause right now, and at the individual-level it may raise my risk either as a pedestrian or driver and certainly changes what is predictable behavior [1].
> although stuff like Toyota's unintended acceleration does not inspire confidence
Another thing that worries me is that these same companies are also working on self-driving cars.
The worst of it is that everyone pretty much jumped into the race after Google. It also doesn't look like the existing solutions run as real-time systems. I'm a bit worried about being hit by a car because it was running a garbage collection pass and didn't react quickly enough.
>I'm convinced the self-driving cars are still a ways off as well.
Technology-wise, absolutely they are. The problem is that, deployment-wise, they aren't: companies will continue to push as hard as they can for as wide a launch as they can, while governments (and any kind of sorely-needed oversight) will be ages behind.
> So how do we decide when the automated car is 'good enough'?
This is actually a really interesting point. I don't think people appreciate how far accident rates have actually dropped for modern cars without self-driving. Even at a million-cars-per-year sales rate, you would need years of data to prove that a single self-driving software+hardware combo is better than humans with high statistical confidence. Your development cycles would be decades long, like in aviation, if you want to be sure you're actually improving.
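To put rough numbers on that (a back-of-envelope sketch, not anyone's official analysis; the ~1.1-fatalities-per-100M-miles baseline and the assumed 20% improvement are purely illustrative):

```python
# Back-of-envelope sketch: how many miles before a lower fatal-crash rate
# is statistically distinguishable from the human baseline? All numbers
# below are assumptions for illustration.
human_rate = 1.1e-8          # assumed fatal crashes per mile, human drivers
av_rate = 0.8 * human_rate   # assumed: the self-driving fleet is 20% safer
z_alpha, z_beta = 1.96, 0.84 # 95% confidence, 80% power

# Normal-approximation sample size for comparing two Poisson rates
# observed over the same number of miles m.
m = ((z_alpha + z_beta) ** 2 * (human_rate + av_rate)) / (human_rate - av_rate) ** 2

print(f"miles needed per fleet: {m:.2e}")                   # ~3.2e10 miles
print(f"years at 1M cars x 10k miles/yr: {m / 1e10:.1f}")   # ~3 years
```

And that's for detecting a 20% improvement in the fatality rate; a smaller edge or a rarer event class pushes the required mileage up fast, and every significant software or hardware change arguably restarts the clock.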
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.
That's a really low bar.
Take a look at Google Maps in San Diego when it rains. Accidents everywhere. A self-driving car won't get into stupid accidents because it's following too closely, driving too fast, hydroplaning, etc.
Toss in the people who are distracted, tired, or drugged out on medication, and your numbers are even worse.
We're looking at the end of humans driving cars within 10 years.
I've had conversations where people have told me that self-driving cars will need to be 100% perfect before they should be used. Ironically, one of those people was an ex-girlfriend of mine who caused two car accidents because she was putting on makeup while driving.
Anyway, based on Google's extensive test results, I'm pretty sure self-driving cars are already advanced enough to be less of a risk than humans. Right now, the sensors seem to be the limiting factor.
> it seems to me that all the lessons that we have learned have been chucked out the window with self-driving cars.
I think it’s unfair to lump all self driving car manufacturers together.
The traditional car companies have been doing research for decades (see for example https://en.wikipedia.org/wiki/VaMP), but have only slowly brought self-driving features to market, partly because they are aware of the human factors involved. That’s why there are decades of research on ways to keep drivers paying attention and/or detecting when they don’t.
“Move fast and break things” isn’t their way of working.
> The fundamental problem of self-driving cars in cities is that they are expected to be 100% safe.
I feel like that might be built atop two deeper concerns:
1. People worry that ways and times they are unsafe (separate from overall rates) will be unusual, less-predictable, or involve a novel risk-profile.
2. If it's autonomous, then accidents kinda weird-out our sense of blame and justice. When it fails, is it always the owner's fault and liability--even though the workings are impenetrable to the average person--or does the manufacturer have some blame? Do we each imagine that outcome would be fair if we were the one on the hook for our car doing something weird we didn't even intend?
Using regular cars as contrast, #1 is something predictable--or at least we delude ourselves into thinking it's predictable--and #2 has less disturbing ambiguity.
> Let's say I own a self-driving car that's able to reliably park itself. I pay close attention the first few months to make sure that it's safe and reliable, and once it's proved itself I start trusting it. I've been as careful as can reasonably be expected, but one day the manufacturer pushes a mandatory, automatic over-the-air update that contains a critical bug, and my car suddenly starts misbehaving and I get ticketed.
But this is very easy. The self-driving is a component, same as brakes or lights. If your brakes stop working because of a manufacturing issue and you crash into something, it's the manufacturer's fault. Nothing new here.
> But must the standard for safety be higher than existing human drivers?
> We're net ahead already with auto-driving.
We don't really have enough data to know that. Really, we haven't even scratched the surface of figuring out if that is true. E.g. we have no idea what kind of emergent phenomena will arise when roads full of self-driving cars interact with each other.
History is littered with examples of engineers thinking they were 90% of the way there when they were actually like 10% of the way there.
> I am not sure if an autonomous car will be safer. I mean, can they even do panic braking right now (for example, if someone or something jumps in front of the car)? You know, safety is often not about having the fastest reaction time, but also about having good anticipation, which any decent human driver will develop in a short time...
I think any hope that cars will be better than humans at it is quite naive...
Even if we assume that cars won't be as good at stopping when people jump in front of them, I would bet the number of accidents that happen from something like that is significantly smaller than the number of accidents from human error like texting while driving, being drunk, or just not paying attention due to fatigue or whatever else.
Not to mention, how can you really even tell the difference between someone who's walking towards your car but will stop and someone who won't? Not even humans can do that, since we can't read minds.
>>> I've seen actual self-driving cars on actual roads.
No, you haven't. You've seen supervised cars with pilots ready to take over when needed. No manufacturer has yet dared release a truly hands-and-eyes-free vehicle onto public roads.
> but there's just no way you can drive a car around private property or something without a license.
Yup, sure can. Farmers have been doing this for decades.
> HUDs are distracting and I would certainly be a worse driver personally if I had that in my car.
It's all in the implementation; consider this one: If a car is detected that is at a radically different speed from me, a red box starts to flash around it. I would certainly end up being safer, since it would let me change lanes/slow down long before my eyes and brain could detect such a speed differential. Avoidance becomes a normal maneuver, instead of an emergency maneuver.
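As a rough sketch of that rule (the TrackedCar type, the thresholds, and the assumption that a perception stack already hands you tracked objects with relative speeds are all mine, purely for illustration):

```python
# Minimal sketch of the HUD highlighting rule described above: flag any
# nearby car whose speed differs radically from ours.
from dataclasses import dataclass

@dataclass
class TrackedCar:
    track_id: int
    distance_m: float          # range to the other car
    relative_speed_mps: float  # positive = pulling away, negative = closing

def cars_to_highlight(tracks, speed_gap_threshold_mps=15.0, max_range_m=200.0):
    """Return the tracks the HUD should flash a red box around."""
    return [t for t in tracks
            if t.distance_m <= max_range_m
            and abs(t.relative_speed_mps) >= speed_gap_threshold_mps]

# Example: a stopped car ahead on the freeway (closing at ~30 m/s) gets
# flagged long before a driver would notice the speed differential.
tracks = [TrackedCar(1, 180.0, -30.0), TrackedCar(2, 40.0, -1.5)]
print([t.track_id for t in cars_to_highlight(tracks)])  # -> [1]
```

The point is that the HUD only draws attention when the speed differential actually matters, rather than adding constant visual noise.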
> but if you have $50 billion to spend on the project, then no way should you not try to build a self-driving car with it.
What if you can't build a self-driving car with $50 billion? With $1 trillion? Should you just walk away while dusting your hands and going "well, we tried"?
Because that's where we are. The current efforts are still ongoing, and have been for decades. Few improvements from them are being distilled into consumer vehicles; we really need more.
Make people safer now, instead of "5 years from now".
> It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.
They need to be safer than me, not the average driver. I suspect the lower 50% of drivers cause > 50% of the collisions.
> I think I get less interested in self-driving cars the closer we seem to get to them. It honestly just stresses me out.
That's probably a reasonable response. These things have the nasty property that the more sophisticated they get, the less input they require from the driver, but also the more _attention_ they require from the driver (a boring old cruise control system won't abruptly swerve into a concrete road divider, but a capable Level 2 system will). And humans just don't work like this. "Don't do anything, but be ready to take control within a second or so when it does something insane" is not something we're good at.
>> I really want to see fully autonomous vehicles but I don't really believe we'll see them on the road in the next 50 years.
I agree with you 100 percent. DAS (driver assist systems) are getting to be very common. I've worked at a couple of companies that produce them, though I did not work on those products. I rode on the highway in a car outfitted with a prototype lane-keeping system, and it was not quite as steady as I'd hoped. They also had an option that would apply a light force to keep you in the lane but could be easily overcome (and I think it went away if you used a turn signal). It felt like a sort of speed bump between lanes. It wasn't trying to be a really complex system, and it probably has some real-world value: in reasonable situations I'd like to be able to use both hands to eat some food from the drive-through while keeping my eyes on things.
I've also seen video (circa 2004) of an autonomous car driving 100 kph down a winding dirt road, staying on its side of the road and automatically stopping. But again, the engineers wanted to do so much more with it, and the rational guys in safety would not allow it.
I also competed in the AUVS autonomous ground vehicle competition in 1994. I wrote a lane follower by taking 20-30 lines of pixels off an NTSC frame grabber on an Intel 486. The core algorithm was on the order of 100 LoC, and we took second place. Super simple algorithm, hardly intelligent.
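For flavor, something in the spirit of that approach (not the original code; the scan rows, threshold, and gain here are made-up values) could be as simple as:

```python
# Sketch of a scan-line lane follower: sample a handful of rows, threshold
# them to find the painted line, and steer proportionally toward its centroid.
import numpy as np

def steering_from_frame(frame, scan_rows=None, threshold=200, gain=0.01):
    """frame: grayscale image as a 2D numpy array (rows x cols).
    Returns a steering command in arbitrary units; negative = steer left."""
    h, w = frame.shape
    if scan_rows is None:
        # roughly 25 rows from the lower half of the image, like the
        # 20-30 scan lines described above
        scan_rows = np.linspace(h // 2, h - 1, 25, dtype=int)

    offsets = []
    for r in scan_rows:
        bright = np.flatnonzero(frame[r] > threshold)  # pixels on the lane line
        if bright.size:
            offsets.append(bright.mean() - w / 2)      # centroid vs. image center
    if not offsets:
        return 0.0  # no line found; hold the wheel straight
    return gain * float(np.mean(offsets))              # proportional steering
```

Feed the returned command to the steering actuator every frame and you have a lane follower; nothing about it is intelligent, which was exactly the point.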
There is a huge difference between a PID controller maintaining position in the lane with radar-assisted cruise control and a fully autonomous vehicle fit for unaided driving on public roads. There is a whole range of system capabilities, and the public has no idea what's in any given car. Comparing highway fatalities per mile to national statistics covering all roads and conditions is bullshit, and a certain company that's recently killed some people knows it.
But hey, we all want to be able to read a book on the road or have truckers take a nap on long hauls, so let's keep deluding ourselves that this stuff will be ready for prime time in the next couple years.
>It’s concerning that people aren’t thinking of attackers (or the govt) taking over your car first on HN.
Yup.
I am quite concerned at how capable newer cars are without an equal level of security and safety consideration applied.
Some ugly things are going to happen, and then we will see a correction.
For now? There's no way I plan on owning anything networked and drive-by-wire.
And so many things like this are coming long before we get great self-driving vehicles. In fact, I think they will be prerequisites for easing the transition to them, given the reduced complexity once humans are no longer part of the equation.
"We have geofenced your car, due to erratic, incompatible, [insert irritating justification here], driving patterns."
"Your car has been speed limited for your safety..."
And yet it's literally in cars on the road.
I'm not saying you're wrong because of that. I just wonder how far from "ready" we are, and how much of a gamble manufacturers are taking, and how much risk that presents for not just their customers, but everyone else their customers may drive near.