Apple engineer killed in Tesla crash had previously complained about autopilot (www.kqed.org)
606 points by jelliclesfarm | 2020-02-11 | 919 comments




Question: if the guy thought the Autopilot was failing, why did he continue to use it?

I have used Tesla’s autopilot...it has its moments...I don’t think it can make decisions within 10-second windows before obstacles.

It’s also very dodgy at certain merges. I take it off autopilot.

On a slightly diff note: I wanted a demo of auto parallel parking and also reverse parallel parking and the summon feature after the last update. I went to the Tesla showroom. They were super nice but refused to demonstrate due to liability issues. They stood aside and a little away...gave me instructions. Worked like a charm and it was awesome...but didn’t inspire confidence.


I rented a Tesla Model S from Enterprise recently and learned they don’t enable Autopilot due to liability issues.

This is good to know. I was planning to rent one to show my family when they visit.

Autopilot or the Full Self Driving features? Autopilot itself is just lane assist and adaptive cruise control: the lane-changing features aren’t part of it.

From previous reporting on this incident in 2018:

> “The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision,” Tesla said. “The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.”

https://www.mercurynews.com/2018/03/30/tesla-autopilot-was-o...
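As a rough sanity check on the quoted figures (assuming roughly constant speed, which the statement doesn't actually say):

    # Back-of-envelope check of Tesla's quoted figures (assumes constant speed)
    distance_m = 150.0                       # unobstructed view, per Tesla
    time_s = 5.0                             # time available, per Tesla
    speed_mps = distance_m / time_s          # 30 m/s
    speed_mph = speed_mps * 3600 / 1609.344  # metres/second -> miles/hour
    print(round(speed_mph, 1))               # 67.1 -- same ballpark as the 71 mph reported elsewhere in the thread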

Also, the safety barrier that would've reduced the force of the impact had not been replaced after a recent accident just days before.

> Earlier this week, Tesla said the divider was missing a safety barrier designed to reduce the impact of a crash. The automaker provided an image of the location taken a day before the collision that appeared to show the barrier had not been replaced following a recent crash.

Reddit thread and YouTube video with a Model S driver reproducing the AP behavior due to the lane markings and uneven pavement:

https://www.reddit.com/r/teslamotors/comments/899i1w/my_mode...

https://www.youtube.com/watch?v=VVJSjeHDvfY&feature=youtu.be


It's the curse of SAE Level 3. The driver is responsible for monitoring a system that works fine 99% of the time; they start to feel a false sense of security, and when it does fail they're not paying as much attention as they need to be.

This is also why SAE autonomy levels are a poor measure of a vehicle's capabilities, but that's another rant.


It's still safer than not having it.

Not if it requires constant supervision, yet lulls the driver into a false sense of security such that they are predisposed to not provide that constant supervision.

Arguably no, it's not. This is why Waymo stopped working on level 3 and switched to level 4/5 systems. It's also why most vehicle manufacturers up until now have only supplied ADAS features which provide backup to the human driver (lane departure warnings, emergency braking assistance etc.) rather than features which drive while using the human driver as backup.

There are a few coming out soon (SuperCruise, Pro Pilot) that will offer some degree of level 2/3 operation. It'll be very interesting to see how that goes liability-wise.


I heard of a similar situation: there are times when trains almost drive themselves, so the engineers don't need to pay much attention. The problem is when things go wrong and they need to act. Sometimes the engineers are so involved in other things that they fail to act.

That's one of the theories given for the cause of the 2008 Chatsworth train crash.

https://en.wikipedia.org/wiki/2008_Chatsworth_train_collisio...


He needed to finish KrazyKupKakes level 18.

That's the real question.

The title should be "Apple engineer killed in Tesla crash had previously complained about autopilot but continued to use it and didn't have his hands on the wheel when the crash occurred."


Tesla says autopilot works but that driver should be ready to intervene. This is terrifying - like having a kid riding shotgun that might just reach out and turn the wheel while you are on the highway.

Anyone who has driven with autopilot, how quickly might it react to a perceived obstacle? Would it take a hard turn into a median faster than a person could reasonably react if it thought there was something in the road or that the road took a hard left?

Edit: Ready to intervene can mean a lot of things - ready to take over in traffic or inclement weather vs. a firm grip on the wheel to resist rogue hard turns

Actual quote from the article:

>Tesla says Autopilot is intended to be used for driver assistance and that drivers must be ready to intervene at all times.


From my experience, not a hard turn. The worst was when it was raining and the lane markings around a turn were inconsistent. I wondered if it was going to make the turn or if it was going to make it too late towards the sidewall.

It also takes very little interaction for it to pop out of self driving


> It also takes very little interaction for it to pop out of self driving

That itself could be dangerous if it isn't very obvious to the driver that autopilot is no longer engaged.


Awareness of who is in control is a skill that you develop. There are audio, visual, and tactile cues.

When it disengages itself it issues a series of loud tones and the display flashes red. A message also appears saying “TAKE OVER IMMEDIATELY”.

In cases where the driver takes over, there is audio, visual, and tactile feedback letting you know it has been successfully disengaged.


I can't help but think "ready to intervene" would require an even higher level of alertness than just driving the car yourself, as you have to always be prepared for it to do something completely unexpected on a dime, and to notice without the feedback you normally get through the wheel, pedals, etc...

Even worse, if you are taking an action the autopilot should have taken but didn't, you are axiomatically already late no matter how fast you react.


"you have to always be prepared for it to do something completely unexpected on a dime"

This is inherently absurd, even if you remove the last three words. You could write an essay, or a book, on why this is an example of a class of impossible problems. Or, inverting it, it's practically the generic framework of all catastrophes.


You are being downvoted because your reply is basically an unnecessarily long "nuh-uh!" - a dismissal without any counterargument. Wrapping that in a string of fancy words does not change the fact that you are not providing any argument at all.

As far as I can tell, the previous commenter was in agreement with the sentence he quoted, going so far as to say that the “on a dime” stipulation isn’t even necessary to consider the idea that we “must always be prepared for it to do something unexpected” an absurd reality

Hm, after reading and re-reading the post half a dozen times, I realized it can indeed be interpreted that way. Now I'm confused. :)

I agree with the sentence quoted, and wanted to amplify it by noting that when something unexpected happens, even if it happens rather slowly, it can be impossible to deal with it. Because when you see that your assumptions were at least partially wrong, you likely start to question all, or most, of your assumptions about everything. Which can paralyze you for an extended amount of time or lead to just thrashing around.

Yup, looks like I misunderstood your use of "this" at the beginning - I thought you were referring to the post you were replying to. My bad.

It reminds me of an article I read about Air France flight 447 (I think it was this: https://www.vanityfair.com/news/business/2014/10/air-france-...). The crux of the article is that when you can automate away a problem 99% of the time, you're much more likely to have trouble when the now-rare problem actually crops up.

It is a flying problem known as "The Children of Magenta"

https://www.youtube.com/watch?v=5ESJH1NLMLs


I'm expecting legislation is eventually going to mandate the driver be alert and able to take control of the vehicle at any time, but this feature is just inviting reckless behavior (1).

1. https://abc7.com/5488646/


It really doesn't though. I wish people who were commenting on this were able to try AP for an extended period of time, enough to get used to it.

It makes the driving experience 10x better at least. It's weird to describe, but you can almost "feel" the AP working. I've only had to intervene a couple of times, but when you need to, you can tell instantly, and it's quite easy to do. It's almost like you're a supervisor who is overseeing everything rather than micromanaging every little adjustment to the steering wheel.


I have autopilot on the VAST majority of the time. It does not make any sudden movements. If anything, it brakes a little heavier than I would prefer at times, but there are rarely any steering problems.

It has improved a lot in the 9 months I have had it.


>It has improved a lot in the 9 months I have had it.

A system like this should not have noticeable changes in behavior over such a short period of time. In other words, I don't think they should launch it (let alone call it "autopilot") until it's done and stable.


A system like this which isn't constantly improving is not a system that will ever ship, or ever reach truly reliable widespread use.

I agree that it should be improving. I don't think that said improvements should be so dramatic that they are easily perceived by humans, for that implies that the shortcomings are perceivable.

I think you are under-appreciating the amount of perception that goes into driving. We are accustomed to making constant micro-adjustments while driving.

Under autopilot, there are times when the "do I need to make an adjustment" instinct kicks in and you decide that you don't, but it was borderline. Things like being centered in the lane while passing a semi.


Perhaps you're right, I've not used any driver assist before so I'm not speaking from experience.

Second-guessing the system while driving just does not sound like something I'd want to deal with (as someone pointed out above [0]).

[0] https://news.ycombinator.com/item?id=22305695


It's not second-guessing. It's more real-time than that. You are actively navigating and piloting the car; you just aren't performing the mechanics of executing most maneuvers. Sometimes you provide some extra direction. Sometimes you intervene.

It is a skill like driving. But it is not driving. It was way less frustrating and draining than driving and so is a very valuable feature. But we are all in agreement that it is not autonomous driving.


I use autopilot daily. It drives me, on a twisty road, from my house to work with intervention when I get into town.

> how quickly might it react to a perceived obstacle?

I've had it head towards a curbed median in the middle of the road and within 10 feet swerve to the right to miss it (in a spot that I've driven by hundreds of times, but this time it kinda decided to head towards the curbed median). Auto pilot can react surprisingly quickly in a good way.

So far, for me, it has never swerved quickly towards anything. There are situations, like on blind curves in the mountains, where it doesn't know what to do. At this time, it will start beeping, showing red on the screen, and try its best to keep driving, even if that means quickly heading off the road. But it doesn't act aggressively in any way. This is what I've experienced so far.

I've had auto pilot take me, hands free, between freeway interchanges, merging, changing lanes, etc. and I'm totally amazed. At other times, it does seem to do dumb things and I wonder what it had seen to consider taking the action it took.

Even with all that, there is still a huge advantage to auto pilot. Driving with it on seems to remove >a lot< of the micro-adjustments that I would normally be making. I do drive with my hand on the wheel (enough pressure so I don't get the warning messages). On long drives, I really am a lot less tired: even keeping my hand on the wheel all the time.

I also have more time to scan the road for anything going on. Looking down to see my speed, for example, (basically any action that takes my focus off the road for a second or two) is less stressful.

It is utterly amazing technology and I still can't believe it is happening while I'm alive.


Thanks! I haven't read many firsthand accounts.

There are several YouTube accounts (like "tesla driver") solely for this purpose - to show off how the autopilot behaves in different situations, with hands or no hands, on long or short drives, in the city, on the highway, between the different autopilot update versions, in different weather etc. etc. If you are really interested you can already find a lot of firsthand information, with video no less, so you can see exactly for yourself what the situation was and how AP handled it.

> Anyone who has driven with autopilot, how quickly might it react to a perceived obstacle?

Faster than me, sometimes. There was one time when I was driving through southern Illinois and some idiot decided to cross the highway. I was passing a semi, who was in the right lane, and doing around 10mph faster than him.

The car starts flipping out, beeping and flashing red. I see a car that apparently appeared out of nowhere, perpendicular to me and pulling across the highway. She had made it — barely — in front of the semi, but didn’t see me.

The car saw her before I did. I think it was because of the radar. But whatever, it braked hard and I barely missed her.

Saved my ass.


> Anyone who has driven with autopilot, how quickly might it react to a perceived obstacle? Would it take a hard turn into a median faster than a person could reasonably react if it thought there was something in the road or that the road took a hard left?

The only thing my car has ever abruptly/unexpectedly done was brake, usually in response to an overpass it mistook for a stopped car. It's happened maybe 10-20 times in 10,000 miles. Not enough to cause a rear-end collision or even an angry honk. I've always had time to take over.

It's never taken a hard turn. The only times it catastrophically failed were when I knowingly put it in confusing situations: winding roads, poorly marked lanes, city driving, etc. In those cases I always have a death grip on the wheel, and the software seems to literally loosen its hold when it's feeling uncertain.


Exactly like autopilot on a plane! :o

There is a big difference between a plane (or a boat, where autopilot is also common) and a car.

A plane or a boat is roaming in a vast, mostly empty space, in a straight line. Other planes/boats are both pretty far away and also following a mostly straight line.

On the other hand, a car operates in a very packed space, full of obstacles all moving in sometimes unpredictable ways.

That makes a huge difference when it comes to autopilot.


I'm reminded of the Apple debacle in the past, when holding the phone a certain way affected the antenna negatively. But in almost every ad it's being held that way. With Tesla ads they always show the hands off the wheel.

> how quickly might it react to a perceived obstacle?

Very quickly; you have a split second to avoid whatever it decides to steer towards. The problem becomes even worse after you factor in that autopilot only works in easy-to-drive conditions (i.e. high-speed highways). To make things even worse, you get less attentive when you are passive for a long period. I.e. if the autopilot fails in your first 20 seconds of driving you may very well be able to react, but if it has been driving perfectly for 20 minutes, you're probably already lulled into a passive state and way too slow to react.


I have a model 3 with autopilot and it takes very little pressure in the opposite direction on the wheel to disengage autopilot before it gets very far.

Don't text and autopilot on level 3, which is a nascent technology. An engineer should know better. I wonder why a level 3 system is allowed to be marketed as an 'autopilot'.

> An engineer should know better

Maaaaybe? I'm not a hydraulic engineer and won't pretend to be one. So I hire plumbers when things go wrong. If they screw up, I don't tend to blame myself though. I think the same logic would apply to the deceased.


I feel that an engineer in almost any field should understand failsafe vs fail-deadly. Tesla's autopilot is definitely not a failsafe system.

While this is somewhat callous, if this engineer had taken that seriously he would still be alive.


Or if Tesla had

Nobody at Tesla claims that this is a failsafe system. In fact, they specifically tell you to be ready to take over at any time. That is not a system I would rely upon, personally.

Are companies bound to only selling failsafe products? Of course not, we'd never get anything done.

I do find it somewhat immoral that Tesla are using their customers as beta testers for this, but nobody has been forced into this. Nobody is forced to use Autopilot.


I think something about responsibility changes at scale.

If a Tesla was a bespoke machine, I'd have no issues. But they know that if they position a feature and posture just-so-and-so, a certain fraction of their customers will get it "the wrong way". What did they do to alleviate this?

Nothing, in my not so humble opinion.


In California every building has a sign affixed that warns of carcinogens within, does that mean you never go indoors? No, it means you learn to ignore safety warnings.

This is the real problem. Tesla had been aware of the "crashing into barriers" issue for a long time.

For some reason "Apple engineer" is used a lot (see title) in discussions around this on both ends (i.e. "he was an engineer and thus knew his stuff/was credible, and he voiced some concerns to friends/family, so let's agree with it being Tesla's fault" vs. "he was an engineer and should have known/anticipated the system's caveats"). The thing is you cannot have it both ways, so I think (and I think we agree) we should not appeal to the driver being an engineer at all.

Being an engineer should only add value to his comments about an engineered technology. His complaint was made from a position of software expertise and familiarity with technology.

His inability to anticipate the system's caveat that led to his death is a testament to the unpredictability (from the user's perspective) of the vehicle's behavior.


> Wonder why a level 3 is allowed to market as an 'autopilot'.

The argument I have heard (which I disagree with) is that "autopilots" in aviation (where the term comes from) do not mean "you can go to sleep" but still need constant monitoring.

Therefore calling this "Autopilot" is technically correct and therefore accurate.

My argument is that the technical definition and the definition known by the general public (which seems to be closer to the sleepy-time definition) are different enough that calling it "Autopilot" is dangerously misleading.


IMHO the fault ultimately lies with Caltrans. Poorly drawn lane markers resulted in a human driver hitting the barrier just days before the Tesla crash, destroying the arresting system. If the lanes had been painted correctly and the arresting system replaced, the Tesla driver would not have died.

Oh yeah, you should come drive on Montreal roads. Lane markers don't exist.

This is all conjecture.

You don't know that the driver would still be alive if the lanes were painted correctly etc.

Autopilot could still misidentify the barrier as we've seen it do in other situations.


Five drivers hit the same barrier the year before the Tesla accident, and a second human-piloted vehicle fatally struck the same barrier just two months after the Tesla crash.

It is absolutely a road design and maintenance issue.


How many drivers didn't hit the same barrier that year?

It just seems like an edge case that's catching inattentive drivers off guard, which IMHO is a category that includes Teslas on Autopilot. And it seems Tesla agrees here; they require that drivers pay attention when using Autopilot. If your vehicle runs into a barrier on Autopilot, you weren't paying attention.


It’s also a road design and marking issue. But poor roads and markings are normal, outside the biggest cities.

For a good portion of the year, you can’t rely on being able to see any road markings in the Midwest and Pacific Northwest.


I think an auto driving system needs to have a higher margin of safety than a human driver, not the same or worse.

People say a self-driving car is safer than the average person. I say, well, I'm above average, because I don't drink or text while driving, for starters. So people say "well you just think you're a better driver - everyone thinks that and on average they're wrong". And I say, assume I'm lying about not drinking or texting, it's still a lot cheaper and more efficient to stop doing that than to buy a new car with the latest tech.

If a self-driving car isn't safer than a sober professional driver, then wouldn't it be logical to just have people ride the bus instead of wasting resources on tech? I knew a bus driver who went a million miles without an accident.


Liability should always lie with the self driving car. In the real world weird shit happens on roads, people adapt and are expected to adapt. What good is a car that I can't trust to be at least as reliable as an overly defensive version of myself?

If human drivers are getting killed in the same spot the same way, is it still a vehicle safety problem?

How many Teslas on lane keep assist safely drove past that spot vs how many humans?

If every million human drivers have one accident and every 1000 Teslas have a similar accident then yes, it is a vehicle safety problem.

An auto driving system needs to have a higher margin of safety than a human driver, not the same or worse.
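As a sketch, plugging in those hypothetical numbers (illustrative only, not real accident statistics):

    # Hypothetical rates from the comment above -- illustrative, not real data
    human_rate = 1 / 1_000_000      # one accident per million human drivers
    tesla_rate = 1 / 1_000          # one similar accident per thousand Teslas
    print(tesla_rate / human_rate)  # 1000.0 -- three orders of magnitude worse in this hypothetical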


Yes, because responsibility is not a zero-sum game. If the system cannot cope well with sub-optimal roads, it should not be in use wherever one might encounter sub-optimal roads.

GM's Super Cruise does precisely this. It is only available on roads where GM allows it.

Responsibility being a zero-sum game is literally what a lot of these conversations and the emerging laws are about. If users are making a mistake often enough to be noticeable, then the system and its interface are encouraging that behavior. If the autopilot is being used in ill-advised situations, then the question becomes why the system wasn't more comprehensible to the user; if it was, then they wouldn't have made a bad judgement, because they would have understood the limitations of the system. This is just another instance of blaming users for bad design.

A much higher percentage of autopilot cars had fatal collisions at that location than non-autopilot cars. That’s cause for concern. And for all we know the previous driver was distracted. Being not-worse than a distracted human isn’t very confidence-inspiring.

Sounds like the main problems, besides whatever was wrong with the self-driving software, were that area of the freeway being unacceptably unsafe...

> which the NTSB determined had been damaged and repaired more frequently than any other left-exit in Caltrans' District 4, which includes all of the Bay Area.

...and CHP not doing their job:

> The California Highway Patrol responded to the March 12 crash but did not notify Caltrans of the damage as required, the NTSB said.


It was literally the most dangerous left-hand ramp in the entire Bay Area, and had already killed someone within the previous year.

> Huang's 2017 Tesla Model X was traveling at 71 mph when it crashed against the same attenuator, which the NTSB determined had been damaged and repaired more frequently than any other left-exit in Caltrans' District 4, which includes all of the Bay Area.


Tesla should not be allowed to market their cars as self driving or call the product autopilot. It’s a fancy driver assist system that will sometimes kill you if you actually take Tesla marketing bullshit at face value.

This is sad and tragic.

I remember going with a friend on a Tesla test drive. He was eager to try the Autopilot system. And within a few minutes we got freaked out with what seem to us as unpredictable action and we turned it off.

I think the engineer should have never relied on it. Especially if he had concerns with the system.

At the same time, "Autopilot" is misleading and Tesla does own responsibility here. Even if it's purely a marketing term it's designed to up sell you on a half-baked technology.

They should call it "Driver assist" or something like "cruise control plus".

It's a heartbreaking story.


GM calls theirs Super Cruise which is true hands-free.

https://www.youtube.com/watch?v=RxeK0F-D3gg


I like the idea of super cruise -- it seems like a lidar scan of the freeway would have prevented Huang's death. But, obviously the freeway changes over time so I wonder how frequently GM re-scans the freeways. Lidar is expensive of course, but it also seems so obvious that the lidar needs to be onboard the vehicle.

Not sure but apparently they detect road works and are able to update the maps remotely.

Interesting, thanks.

Tesla should put Lidar in their Semi Trucks. Since they'll be constantly driving the whole country, they would have a lot of regularly updated scan data that they could deploy to their lower price point vehicles that only have cameras.


yeah, as long as you're staring straight ahead. It has a camera that watches your face.

Otherwise they're the same. Except Tesla's keeps getting better with each update and GM stays the same until you take it in to the shop...


> GM stays the same until you take it in to the shop...

Over the air software upgrades are supported by GM.

"When the work is done, an over the air software update will turn it back on."

https://www.wired.com/story/cadillac-super-cruise-ct6-lidar-...


GM claims "over 5,000,000 miles of hands free miles driven and counting."

Current estimates of AutoPilot miles driven are ~2 billion miles. [1]

That's 400x the number of miles driven under SuperCruise. I think it mainly comes down to the number of vehicles. SuperCruise is only available on the CT6, which sells about 1,000 vehicles a quarter, and not all trims even have SuperCruise. The CT5 is supposed to be offered with SuperCruise eventually in 2020, but it just went on sale without SuperCruise, and sales figures seem to indicate it is selling extremely slowly.
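For what it's worth, here's the ratio from those two publicly quoted figures (both rough numbers):

    # Rough ratio of the two quoted mileage figures
    supercruise_miles = 5_000_000    # GM's "over 5,000,000 miles" claim
    autopilot_miles = 2_000_000_000  # rough estimate cited above
    print(autopilot_miles / supercruise_miles)  # 400.0 -- Autopilot has ~400x the fleet miles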

If you look on YouTube it's virtually impossible to find a SuperCruise video of just an owner taking it for a drive offering their thoughts. There are dealer demos, or pro car reviewer demos, but nothing remotely like the thousands of AutoPilot videos.

You'll encounter an AP-driving Tesla multiple times a day in major cities. You may never have driven alongside SuperCruise. We'll see if Cadillac can deploy the feature more widely. They promised all vehicles in 2020 will offer it as an option, but it doesn't look like that's going to happen.


> We'll see if Cadillac can deploy the feature more widely.

"According to GM president Mark Reuss, the feature will be available as an option on 22 vehicles by 2023, with seven vehicles getting the feature next year and 12 more in 2022 and 2023."

https://www.caranddriver.com/news/a30795396/gm-super-cruise-...


Super Cruise is inherently different from AP. It only works on roads that GM has driven and scanned themselves. It’s a lot more cautious and much less ambitious.

On the flip-side I feel like Tesla has done a good job distinguishing Autopilot from their Full Self Driving package.

Autopilot is a mashup of safety features (lane-assist, cruise-control, etc).

We can debate all day how you feel about the name, but that seems petty and you're even wrong about that. The name fits perfectly; look up the definition of autopilot.

Edit: no hero cape here, just a different opinion. I know, crazy! It's almost as if you guys root for the automobile/oil industry, you hate Elon so much.

The media is a powerful tool, and it's scary to see it work on HN readers. They got you guys to hate the one guy that actually is fixing the problems. It's not a crazy conspiracy theory if you consider the power the automobile and oil industries have.


Auto: short for automatic

Pilot: a person who operates the flying controls of an aircraft.

Auto. Pilot.


Autopilot is one word and the definition is very strict: https://en.wikipedia.org/wiki/Autopilot

> An autopilot is a system used to control the trajectory of an aircraft, marine craft or spacecraft without constant manual control by a human operator being required. Autopilots do not replace human operators, but instead they assist them in controlling the vehicle.

That's the definition the feature name was based off of. It doesn't matter what you feel. It's a word, get over it.

The source for people who can't use wikipedia: https://www.faa.gov/regulations_policies/handbooks_manuals/a...

Edit: to those of you using "hours" as a value. Autopilot on a Tesla can drive for hours on an interstate with no intervention needed, same as a plane on a flight path. Neither system can replace humans though. It makes sense autopilot in planes is further along, it's been around longer and being in the sky is easier than driving on roads, and it's more regulated for good reasons.


Musk, you and I know what people think auto-pilot on a car does. I really wish these were not on the road.

Pilots engage autopilot and then let the computer do the work. They are not sitting ready at the stick to jump in at the slightest issue in case the system thinks the sun is another plane or some other odd error. My understanding is that the plane flies on its own for hours. Is that incorrect? Yes, for special phases humans can take charge (e.g. taxiing), but it's clearly demarcated.


While I am not a pilot I have been on the flight deck of a 747. Can confirm that the pilots do not touch anything for hours at a time. IIRC they mostly checked the weather, fuel, engines, altitude and heading infrequently. The aircraft flew itself all the way to the Heathrow pattern. I asked what would happen if they did nothing. They claimed the aircraft would circle above the airport until it ran out of fuel.

The plane flies on its own for hours or at least 747s do.


And it'll fly happily into a mountain or other plane, if unmonitored. Now the problem is streets are a little more crowded.

There are no mountains at the cruise altitude of a 747. TCAS will provide a warning 30 seconds before collision and the Airbus A380 has an autopilot with integrated TCAS avoidance. So it will hit neither mountains (too low) nor other aircraft (automatic avoidance).

https://en.wikipedia.org/wiki/Traffic_collision_avoidance_sy...


It will circle the last entered waypoint and then crash once fuel is depleted. https://en.wikipedia.org/wiki/Helios_Airways_Flight_522

I'm well aware that there are no mountains at 40k feet.

Well airplane autopilots can avoid obstacles... about time after 100 years of development. (The first aircraft autopilot was developed by Sperry Corporation in 1912. The autopilot connected a gyroscopic heading indicator and attitude indicator to hydraulically operated elevators and rudder.)


> Well airplane autopilots can avoid obstacles

Technically speaking yes. But the obstacle does need to have a working TCAS transponder! An A380 won't be able to detect or avoid a military jet for example because they tend to not be too forthcoming about their locations. Some birds, most notably the Ruppell's griffon vulture, can fly into airliner airspace topping out at 37,000 ft.

However, by general agreement, only aircraft on IFR flight plans are allowed into the airliner zone and therefore pretty much all the obstacles (not military jets or birds) do have TCAS or ATC clearance.

Lastly, while some aircraft do have TCAS-integrated autopilot, it is uncommon. Some airlines are still operating 40-year-old aircraft.


Words, in general, mean things. "Autopilot" is a word that means things. It does not, most notably, mean "at its most generously-described best, full and complete attention the entire time lest it send you into the side of a semi at highway speeds or direct you into a concrete barrier".

Tesla does not need any heroes with capes attempting to minimize their literally-not-figuratively-dangerous marketing. They'll do fine without it. I promise.


"It does not mean "full and complete attention the entire time lest it send you into the side of a semi at highway speeds" or "direct you into a concrete barrier"."

Presumably "Autopilot" is a reference to aircraft, and they do not fly milliseconds away from concrete. (at least not the vast majority of the time)

So if you want to be literal, then the word autopilot shouldn't give you confidence you aren't going to hit things and is not false advertising. The issue is the implicit claim that you can let an autopilot drive your car safely. But it's a fine autopilot!


That is extremely overliteral.

Yeah, so how do you decide where it crosses the line?

You can interpret it as a metaphor, or very literally.

It seems unreasonable to me to interpret it exactly as literally as necessary to make your argument against Tesla the strongest.


Give me a break dude. Tesla’s marketing department knew exactly what they were doing when they called it “auto pilot”. And it wasn’t because somebody there looked up the “literal meaning” of “auto pilot” and rolled with that (though it certainly provides an easy out for their more... dedicated... fans to help defend the company)

I'm not a fan, owner, or shareholder of Tesla. I've repeatedly stated on HN that I think PHEVs are the future and not full electric cars.

What you think of Tesla has nothing to do with whether 2+2=4. If you don't want to discuss the same thing as another person, then go do something else.


Frankly, this is disingenuity to the point of mendaciousness. Tesla knew what they were doing. Well-actuallying does not and will not change that.

I'm not endorsing Tesla or their tech, and I stand behind what I wrote as demonstrating the alternative to taking "Autopilot" as a metaphor - taking it literally. If you don't like either one, what's your justification for interpreting Autopilot in a particular way?

I think the important questions are how people actually interpret it, and (if we want to judge Tesla) how they could reasonably have expected people to interpret it.

I don't know exactly how they presented it to the public, but it definitely doesn't seem like a name you'd choose if you were keen to avoid having people overestimate its capabilities.


I think it's insane to identify the problem as being the name, rather than that using the technology inherently leads to a dangerous sense of confidence (assuming it doesn't in fact scare you out of continuing). As many people have said, the fundamental issue is the tiny amount of time between appropriate operation and disaster, and the better it works at first, the less prepared a person is when it goes haywire. This is the same no matter what you call it.

Yeah I agree that that is a bigger problem. (And not just confidence, but concentration -- even if you know you can't trust the car, it must be a lot harder to stay properly alert when you have nothing to do 99% of the time.) I didn't mean to agree with anyone who thinks that the name is the problem, but I think it could plausibly make things worse by contributing to unrealistic expectations and gung-ho attitudes. It could also give us a clue about Tesla's attitude, and how they are balancing hyping the feature vs. trying to prevent driver complacency.

It is completely dishonest to suggest they called it Autopilot for any reason other than to invoke the image of the car doing everything for you. It’s funny how people love to slam other companies’ marketing as dishonest “brainwashing” but give Tesla a free pass.

Instead of trying to tear Tesla down for something you are doing yourself, why not look for the positive aspects of what is happening? The technology is getting hugely better over time, so whatever you have been reading about it is probably outdated.

Tearing down a faceless company! Heaven forfend. We're all just "haters" here, I guess.

Never mind that they killed somebody, but sure.


"they killed somebody"... ok, we're done here.

Autopilot never meant full automatic control. In a plane context, which was the most common until now, it's only used to keep a trajectory steady, and anything unusual has to be taken care of by a human pilot.

https://en.wikipedia.org/wiki/Autopilot

> Autopilots do not replace human operators, but instead they assist them in controlling the vehicle.

Names don't really mean anything, it could have been named anything. If people think it's 99% reliable and it's convenient, they will use it.


> Autopilot never meant full automatic control. In a plane context, which was the most common until now, it's only used to keep a trajectory steady, and anything unusual has to be taken care of by a human pilot.

From what I've seen, the general public has an inaccurate perspective on airplane autopilot systems. Loads of people think pilots simply push a button then sit back and relax as the plane automatically taxis, takes off, flies then lands itself.

In choosing to name their technology after another technology which the general public has misconceptions about, Tesla chose to inherit those misconceptions.

Edit: I'm surprised there is any incredulity here, I've been hearing people say "planes basically fly themselves" and "all it takes to be a pilot is pushing a button" for years. Most often this misconception is surfaced in casual conversation, but here is one example of the misconception getting published: 'A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot "just in case."' - https://www.wired.com/2012/12/ff-robots-will-take-our-jobs/

Edit 2: Re: Autoland

Modern airliners have autoland, but always take off under human control. Autonomous takeoff is not used by any airliner.

Anyway, to your point, a modern airline's autopilot system allows pilots to take their hands off the controls. Tesla's manual says that is forbidden: "Warning: Autosteer is a hands-on feature. You must keep your hands on the steering wheel at all times." (page 106: https://www.tesla.com/sites/default/files/model_x_owners_man...)

So no matter which way you slice it, Tesla has chosen misleading terminology.


> From what I've seen

Where? How? That's not a scientific statement.

> Loads of people think pilots simply push a button then sit back and relax as the plane automatically taxis, takes off, flies then lands itself.

Who thinks that? That seems like a hyperbole and you can't possibly know.

> Tesla chose to inherit those misconceptions.

Assuming all those above assumptions are true and assuming Tesla reached those same conclusions as the people that didn't know how airplanes work.

Just loads of speculating and attempted mind-reading...

> Autopilot -> Automatic Pilot

Autopilot is one word and has a specific definition in avionics. It keeps the altitude and heading.

If you rip the word apart and magically transform Auto to Automatic you have Automatic and Pilot, but there is no such thing as an automatic pilot, just pilots; that's why autopilot is one word.

Under that same logic automobiles should be automatic mobility devices that self-drive already.

You can't change the definition of a word to meet your argument. You also can't speculate that people probably might think the word may mean this.

Autopilot !== automatic pilot. That's not a thing.


I wouldn't normally use it as a source, but Urban Dictionary shows that colloquially plenty of people consider autopilot to be complete automatic control (worryingly, even to the point of blackout drunkenness).

https://www.urbandictionary.com/define.php?term=autopilot


That's referring to the human condition of "being on autopilot" (aka your higher cognitive functions aren't doing much and your brain's operating primarily on reflex). It's generally only used to describe situations where you've done something dumb because you were impaired or weren't really paying attention.

Just expand and define the words, and don't let your knowledge about how the tech works block your ability to reason.

Autopilot -> Automatic Pilot. Automatic: working by itself with little or no direct human control. Pilot: one qualified to control a vehicle. I'd argue if autopilot requires intervention it is hardly automatic and definitely not qualified to operate a vehicle.

Further, consider other automatic devices in most cars, such as the automatic transmission. How do people expect to intervene with a gear shift during normal operation? Never.


Well, I mean, my auto-mobile requires me in order to be mobile so maybe words are more than the sum of their parts.

It's like how 'act' means one thing, 'god' means one thing, but in the right context 'act of god' doesn't mean a supernatural being intervened.


Are you being intentionally caustic? The phrase "running on autopilot" is an established colloquialism for doing something without being mindful of it, and almost every major dictionary definition of "autopilot" states that it is a system used to navigate something without a human.

Demanding a rigorous scientific study on human understanding of a word is simply not necessary when there are multiple recorded videos of people asleep at the wheel with a fucking banana on a string dangling from the steering wheel to fool the system.


So do you have a problem with the name or a problem with people intentionally being stupid?

People trying to fool the system is not Tesla's fault and indicates they are aware of the warnings alerting them they need to pay attention.

If people are dangling bananas they're doing it for attention and you guys are feeding it and blaming the manufacturer.

Changing the product name will not change anything.

Humans will always be a piece of shit sometimes.


"the general public has an inaccurate perspective on airplane autopilot systems. Loads of people think pilots simply push a button then sit back and relax as the plane automatically taxis, takes off, flies then lands itself."

Is this just being pedantic? If you say the autopilot doesn't land the plane, maybe a pilot would agree for all I know, but "autoland" seems to be a thing, right? When I read about "autoland" systems, I also notice they use the phrase "autopilot-controlled landing". I'm not convinced this is about inaccuracy so much as imprecision.

https://en.wikipedia.org/wiki/Autoland


No, this is not pedantic. An autopilot is a complex system that needs full time management. It looks like this [1], and you need to correctly twiddle the dials every few minutes to get the plane to do what you want. It's not a button, it doesn't avoid traffic, and it doesn't know about the broader flight plan. Even fancy autopilots that can follow waypoints and intercept a localizer and autoland and all that - managing them looks more like programming a crappy computer than pressing a button.

[1] https://en.m.wikipedia.org/wiki/Autopilot#/media/File%3AFMS_...


Which means that both pilots (1) set the parameters (not a single button push), (2) follow closely to ensure the plane does what they would have done, and (3) are ready to take over at any moment, as the AP cannot even reliably detect that it's not on the happy path. This does take some of the cognitive load off the pilots, but it's very far from the "push button and have a coffee" narrative that's being peddled here.

In fact, it's more of "hold this heading and glideslope" than anything else - a smarter cruise control, if you will.


"In fact, it's more of "hold this heading and glideslope" than anything else - a smarter cruise control, if you will."

Sounds like Autopilot on Teslas is the same thing as an autopilot on an airplane, so why is everyone angry at them again? Somehow they're responsible for the majority of the public having misconceptions about both?


Somebody is promoting and selling "full self-driving", with fine print specifying "in some indeterminate future". Perhaps that could be...nah, not related at all.

The first automated landing on a commercial flight took place before microprocessors were invented, to give you an idea of complexity. To this day, autoland means that the aircraft follows a radio beam down to runway, pitching slightly up and reducing thrust to idle when radar altimeter reading falls under 50 ft. It's really primitive tech, essentially a line following robot.

Yes, the public holds misconceptions about how much the autopilot on a plane does. But some of their beliefs are accurate, and would be dangerous when extended to a car: like how, when the plane AP is engaged, you don’t need sub-second reaction times for possible obstacles. You do need that in a Tesla.

Fairly sure that you do need that for autoland, to engage TO/GA (for go-around) and take control. Most other cases, AP disengagement indeed means "now we think about what next".

No. Required reaction times would be around 2-3 s. Even if you took 10s, you're not going to die -- you might hit the edge lights though

At which point you're arguing semantics: "it's not the short landing that kills you, it's the impact into whatever you hit there!" You're describing Asiana Airlines 214, whose crash killed 3.

https://en.m.wikipedia.org/wiki/Asiana_Airlines_Flight_214


No I'm saying the pilot flying an aircraft correctly configured for autoland in which the autoland system encounters a fault does not require subsecond reaction times to avoid a crash.

Asiana Airlines Flight 214 is irrelevant because (1) it was not correctly configured for approach, (2) the autopilot was switched off over a minute before the crash, and (3) the autopilot did not fail.

The parent of this thread was an argument over whether an aircraft flying on autopilot needs the pilot monitoring it to have subsecond reaction times to avoid a crash. It doesn't.


> Names don't really mean anything, it could have been named anything.

Names absolutely have meaning. Why do you think Tesla named it “auto pilot” and not something else?


Maybe because it literally means what it does?

No, not really. If you're comparing to planes, they have automated systems that will almost always keep them at a completely different altitude to avoid potential collisions.

Plane pilots have to pay very little attention with autopilot on compared to Tesla drivers with Autopilot on.


> No, not really. If you're comparing to planes, they have automated systems that will almost always keep them at a completely different altitude to avoid potential collisions.

Autopilot in VNAV/LNAV modes does not do this, it follows a programmed flight path.

TCAS is the anti-collision system, but it's a secondary system that only gives information to the pilot. It's up to the pilot to make inputs to avoid a collision.

Pilots are the primary anti-collision system - by using their eyeballs to look for other aircraft when flying under visual flight rules (VFR) and also following clearances/information from controllers when flying under instrument flight rules (IFR).

> Plane pilots have to pay very little attention with autopilot on compared to Tesla drivers with Autopilot on.

This is not correct at all.


There are different autopilots for planes. They differ in capabilities. Mostly the pilot sets a heading, altitude and speed and the airplane will maintain that. The main collision avoidance system is Air Traffic Control telling the pilot where to fly when. Then there are other warning systems that only alert the pilot (terrain and other planes).

The real autopilot in planes is called the Auto Flight System and is very recent and needs to be monitored closely. It can lift off, fly, and land on its own, but basically only in good weather conditions.


“Suicide Mode” would be as successful, I’m sure.

How about "Californian Roulette"?

Every single time this is posted, I have to remind people that the video posted on Tesla's own Autopilot website [0] says this RIGHT at the beginning:

"The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself"

They are not saying "use with caution, autopilot is an assistance feature", they are saying "The car is driving itself".

[0] https://www.tesla.com/autopilot


Every time you do, I remind you that the page also makes it clear that current systems require supervision.

I have a car with Mobileye. Mobileye recently demonstrated FSD in Jerusalem. Fortunately, like everyone else who can read, I know that system is not the same system that I have installed on my car.

Tesla’s wording really isn’t that different here.


Did the car salesman show you video of that FSD demonstration while they were describing the driver-assist features of a car you were buying?

No, but they used to run this ad: https://www.youtube.com/watch?v=UTdR91G2L7c

Several interesting things are here: 1. Construction zones -- ProPilot should not be used there; it isn't reliable when lines are incorrect. 2. The wheel is moved automatically to a large degree; in the video it appears to be a larger movement than the system actually supports. 3. This commercial implies that it is actually a solution for inexperienced drivers, but it definitely is not.


I don't know when that went up there, but that's not demonstrating autopilot - that's demonstrating Full-Self-Driving (FSD) capabilities.

Less than 10 seconds in it's demonstrating functionality that Autopilot does not have - turning from one road onto another, stopping at intersections, etc.


Internet Archive suggests it's been up since late 2016, with the video, and only minor changes to copy since then.

https://web.archive.org/web/20161213042716/https://www.tesla...

> Less than 10 seconds in it's demonstrating functionality that Autopilot does not have

It's on a page called "Autopilot", as the hero video. It's not very clear that this is functionality that Autopilot does not have, and it helps add to the confusion about what Autopilot is and what it is not, something they seem keen to take advantage of.

They have had 4 years to possibly push that video down the page, perhaps add a title or a disclaimer saying that this is not Autopilot, perhaps clear up the confusion. Whether it's intentionally like this, or just plain old incompetence, either way it's not a great look for Tesla.


> Current Autopilot features require active driver supervision and do not make the vehicle autonomous.

You're conflating FSD with Autopilot.


Since autopilots on airplanes literally fly straight lines between waypoints, it's pretty clear that whatever it means in airplane contexts has zero bearing on what it means in a car context. Having an airplane autopilot fly in proximity to anything else is a recipe for swift death.

> Names don't really mean anything

This is so, so wrong. Names explicitly set people's expectations.


"And within a few minutes we got freaked out with what seem to us as unpredictable action and we turned it off."

On the one hand, I'm affected by the FUD and will not buy a Tesla, and perhaps also will have schadenfreude to the extent their CEO has misfortunes.

On the other hand, to be fair, other driver assistance features are creating problems too. I think it was Subaru I was just reading about having serious problems (and a recall) where automatic braking triggers inappropriately and may cause an accident.

People have incredible faith in software and technology, and that's fine and all, but I wonder what people like that hang around HN for.


I have a Subaru with some sort of "Eye Sight" feature. It is quite handy in maintaining a constant distance from a car in front of you in moderate traffic - but you have to always be ready for someone to cut you off. It tracks the lane markers and rumbles the steering wheel. If a car stops short ahead of you, it beeps and flashes an indicator - I'm usually on the ball enough to hit the brakes myself though, so I don't know if its automatically braking or not.

That being said, there is a place on my commute where, going around a curve in one particular spot (southbound Highway 17 in Santa Cruz, before the Glenwood Cutoff if anyone is playing at home), 9 times out of 10 the car thinks I'm about to drive into a wall, sounds the alarm, and then regains its senses.

So I haven't had it fail in a harmful way but I also don't think I'd ever give it enough rope to hang me with.


I also own a Subaru. The crash-detection assistance is very unwelcome at times. Living in Minnesota, the exhaust from a car in front of you can set it off.

This can happen when starting off at a green light and can completely decelerate the Subaru. EyeSight detects the exhaust as an obstacle, and the car behind you is at a loss as to why you are suddenly braking right off the green light. I’m not a fan of EyeSight at all.

Additionally, I have to turn off the backup sensor to leave my driveway or it will slam on the brakes when it detects flowers that may be leaning into the driveway.

Car smarts don’t seem all that smart. I get that a kid could be saved here, but driving a car that cries wolf all the time leaves a consumer very frustrated. The dealership has been zero help in both regards.


I don't like the idea of automatic braking, but arguably plenty of people had irrational fears about seat belts and air bags (although the fears about air bags seem to be coming back)...so I will seriously consider the new safety features once my car insurance offers a significant discount for them.

Good ones only engage when you don’t act fast enough. They’ll also press the brake pedal harder if they think you aren’t pressing hard enough fast enough.

Ours only engages when it is truly justified. It has yet to flip its shit incorrectly. Its adaptive cruise control, on the other hand, will sometimes freak out about cars in adjacent lanes and slow down more than it should.


Our Volvo V40 saved our butt once on the highway. Late at night, cars piling up all of a sudden in front of us. The car slammed the brakes and stopped us a few meters away from the car in front :-)

I also have a V40 and have also had an experience exactly like that on a motorway - bright red lights flashed at the bottom of the windscreen along to a warning sound and the autonomous braking kicked in and stopped me from hitting the car in front. Excellent system.

When I was a kid - this was quite a few years ago - I had some older relatives who refused to wear a seat belt. Being somewhat of a brat, I asked them why. They answered like this:

"If I get in a wreck, I don't want to be trapped in the car. I want to be thrown clear!"


This might be just me, but I don't think it's bratty to question truly idiotic behavior.

I think it's obnoxious to call people "truly idiotic" if you would not apply the phrase to everyone who takes "unnecessary" risks. I am conditioned from childhood to wear a seatbelt in a car, but I also have owned a motorcycle in the past, and currently own a pre-airbag car (which I assume also lacks more subtle features like modern crumple zones and high strength steel, etc.). What makes one risk reasonable and another not?

The Subaru doesn’t have a forward radar does it? I heard it uses two optical camera instead. I bet smoke/water vapor can trip that up...

Our Mazda gets tripped up when adaptive cruise is engaged and someone pulls into the turn lane to make a left. It also can sometimes get “spooked” on the interstate when going on turns and trucks are in adjacent lanes.

Hasn’t freaked out with collision warnings though. Every time it has it was totally justified.


My Mazda has the same issue with cars in front taking exits. I’ve also had it brake unexpectedly on the highway on Rt 95 in RI. I was coming over the crest of a small hill and in front was a stone overpass. The angle of the radar was still elevated so it thought that I was headed into a stone wall. Except the road dips down well before the overpass so in reality that isn’t a possibility.

The warning lights and alarm went off and then the car slammed on the brakes. Thankfully no one was behind me.


Subaru EyeSight uses two stereoscopic video cameras only: no radar, lidar, or ultrasonics. It was engineered to achieve the maximum possible safety improvement at minimum cost. And it basically succeeded at that, but it's very limited. Can only sense straight ahead, doesn't work at all in fog or heavy precipitation, etc.

What year is your Subaru? My understanding was that early versions of the eyesight system were pretty bad, but they fixed a lot of the kinks in recent versions. It would be disappointing to learn that wasn’t the case...

2018 Outback. Which is fairly new’ish.

My Acura’s collision alarm always beeps before coming out of the I-90 bridge tunnel on the way to Seattle. It’s not like I’m relying on auto braking to brake or anything, and it’s never actually done anything beyond warn spuriously.

I like Eyesight. It's pretty cool and I like the emergency braking feature. The warning triggers before the emergency brake quite often spuriously but it's nice. I like it.

I think the difference between me and the rest of these guys is that I see it as just an assistive technology and they're trying not to drive. I still have my eyes on the road and everything. It's just there to help, just like I still use my mirrors even when I have the blind-spot detector active. And it does a good job, especially since I can override the adaptive cruise and lane assist at any time pretty easily.


I own a 2019 Forester with EyeSight. While I don't hate it, and I like the adaptive cruise, I'm curious:

How exactly do you easily override the crash detection logic? I've had it slam on the brakes spuriously due to steam from manhole vents, and short of disabling the feature (which can't be done in a way that sticks between starts) I don't know how I would avoid this behavior, particularly easily.


Oh sorry, I meant that part to refer to the lane detection and adaptive cruise braking. Let me clarify.

I haven't had a fake brake yet and I've driven over quite a bit of steam myself so I guess I got lucky.


Read the manual. You should be able to explicitly turn these off for car wash, etc. They’ll be on again the next time you start.

Subaru EyeSight will do automatic braking under limited circumstances. It only works to reduce the risk or severity of forward collisions when the cameras can see clearly. Overall it's better than nothing and worth the minor extra cost.

I refuse to use Toyota's for the same reason. On I-95 in northern Virginia, people drive slow in the middle and left lanes all the time, so people are always weaving in and out of lanes trying to get around. This means you get cut off a lot. And every time someone cut in front, the truck would slam on the brakes well before it was necessary.

I've driven Toyotas all my life and never once been in a vehicle that does anything automatic beyond turning on the lights when it gets dark.

Maybe you just need to avoid the Toyota model you bought.


Every single new Toyota from 2017 on has adaptive cruise control and emergency braking assistance. You can disable the EBA and use reg cruise control but it’s all on by default.

> other driver assistance features are creating problems too. I think it was Subaru I was just reading about having serious problems (and a recall) where automatic braking triggers inappropriately and may cause an accident.

I had a late model C-class Mercedes for a bit and got rid of it for that exact reason. It auto braked twice in conditions that did not warrant it at all (and hard too), the first time causing the car to come very close to skidding into the support posts of a cantilevered bridge, the second time because of a traffic sign in a turn that apparently gave enough of a return to trigger e-braking. Very bad software. According to the company that sold me the car everything was as it should be.


I remember an engineering class at uni in the early 90's discussing how cruise control worked, and was borderline horrified at the childish simplicity of the design. In all this time, I've probably used it 1/2 dozen times, because my trust level is very low.

Just the other day I was a passenger in a Peugeot and it was raining very heavily. Cruise control automatically disabled when one of the tyres suddenly bogged down due to standing water on the road, causing the car to lurch.

Until these systems look at and analyse the road and traffic like we do, I see little hope for an automated system that isn't deadly by design. For me the number 1 litmus test is reading the slope of the road: if about to go uphill, more gas is required; about to hit a crest, back off and maybe allow engine braking.


Sorry, but using cruise control in heavy rain is just simply a bad idea. It's supposed to be used when conditions are ideal, not when they are as bad as can be. That is human error, in my opinion.

In my driver’s education course back in the early 2000s, they explicitly told us that Cruise Control can dangerously malfunction in rain and should only be used on dry roads.

How is it any more idiotic than maintaining that speed manually?

I'm struggling to think of any combination of hardware/software that isn't asinine on first glance (e.g. feed the cruise control off only the rear wheel speed sensors on a FWD vehicle) where cruise control would be any different than a driver manually holding the same speed by putting their foot on the gas.


I don’t think a normal human would try to maintain an unsafe speed going uphill in heavy rain.

It's been almost 30 years, you might want to revisit that decision.

Sometimes simple designs are good. Older cruise control designs are quite primitive but very effective. Nothing but a simple feedback loop. Car not fast enough? More gas. Too fast? Less gas. The smarter ones (probably ones with electronic transmissions) would even downshift to keep under the preset speed.

I bet the earliest ones were all mechanical too.

Of all the features in a vehicle, I don’t think you need to worry about “standard” cruise control.
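To make the "simple feedback loop" concrete, here is a rough sketch in Python of what a basic proportional cruise controller amounts to. It is illustrative only; the function name, gain, and clamping are made up for the example, and real systems add PI/PID terms, braking/downshift logic, and safety interlocks:

    # minimal proportional cruise control loop (illustrative sketch, not any vendor's code)
    def cruise_control_step(set_speed, current_speed, throttle, gain=0.05):
        """Return an updated throttle position in [0.0, 1.0]."""
        error = set_speed - current_speed      # too slow -> positive error -> more gas
        throttle += gain * error               # too fast -> negative error -> less gas
        return max(0.0, min(1.0, throttle))    # clamp to the physical throttle range

    # example: trying to hold 65 mph while the car slows slightly on a grade
    throttle = 0.30
    for speed in [65.0, 64.2, 63.8, 64.5, 65.1]:
        throttle = cruise_control_step(65.0, speed, throttle)
        print(round(throttle, 3))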


You kind of validate the other poster's point that it should be called "driving assistance" like the other half-baked solutions and not "autopilot".

There is no current recall for Subaru EyeSight. Perhaps you're thinking of a different manufacturer?

Subaru did have a recall several years ago due to an electrical problem which could prevent automatic braking (not trigger inappropriately).

https://www.consumerreports.org/cro/news/2015/06/subaru-reca...


I've experienced this feature. Many fewer incidents of false activation ("phantom braking") in more recent revisions of the software, which gets better and better. It is a feature, because it significantly adds to safety in cases where it's needed, and has prevented at least one serious incident for me. It acts within certain parameters that are not as extreme as described. To call it "slamming" on the brakes is overdramatizing it somewhat.

I think self driving cars are a ridiculous solution to a problem we created and doesn't need to exist. The US building out the country for cars wasn't a good way to do transportation. If we just had trains going from city to city and subways in the cities, we wouldn't even need self driving cars.

Not everyone wants to live in a dense population.

Call it speculation, but most people don’t really want all the consequences that come with the distinct lack of it in the US. Especially when you consider the economic effects of walkable areas.

Most people don't really want all the consequences that come with the distinct lack of rural areas either..

But if you have denser cities, that leaves more room for rural areas. I don’t think anyone is arguing for less rural land, but as it stands, it is legally impossible to build another city like San Francisco or Manhattan in the US, and that’s a problem for sustainability. People want to live in those places, and not just because they are old.

It's legally impossible to even build San Francisco into a place like Manhattan. Most of the city is only zoned for two stories.

LA is zoned for a lower population today than it was half a century ago, and in that time the population has increased by 50%. Enter housing crisis.

The problem is not that you can't legally build a city like SF in the middle of nowhere. There are hundreds of smaller cities that could grow into bigger ones, but their population growth just isn't there. People wouldn't come to your "New SF" either.

The real problem is:

- Many people choose to live in big hubs like SF or Manhattan because that's where the high paying jobs are

- You can't legally turn SF into Manhattan (i.e. making it denser)


While it is true that Americans rely a bit too much on their cars, and have built infrastructure based on it instead of public transportation, it's hard to envision a world where individual transportation is no longer needed.

Look at Japan: they have a great train network but still use cars a lot. Because while trains work well in cities, you still need people living in the countryside to grow your food.


There's a massive gap between “all cars should be banned”, which you seem to be replying to, and “some urban areas should be allowed to densify and operate without cars if their citizens want that”.

That's true but it seems like the people advocating for policy on both ends don't understand the difference in density and therefore transportation needs between Boston and Boise. Everyone wants policy to be a state or federal level cudgel these days and quite frankly that is stupid.

I live in rural Japan and own a car. However, I lived here without one for over 5 years. My apartment building is surrounded on 3 sides by rice fields, so it gives you an idea of just how rural it is. I can walk from one end of the town where I live to the other in 20 minutes. In that space there are 3 grocery stores, 2 hardware stores, a butcher, 2 fishmongers, 1 tofu shop, 2 flower shops, at least 5 barbers, 3 doctors, 2 optometrists, many bars, restaurants, cafes, etc, etc, etc. I moved from suburban Canada. In 20 minutes I wouldn't be able to walk out of my neighbourhood of cul-de-sacs lined with identical houses. Not even a single convenience store. It's a commercial wasteland.

In rural Japan, cars are used. In suburban Canada, cars are necessary. There are definitely more rural areas in Japan than where I live. I live in an actual town. If you are up in the mountains, or live far away from a town, a car is probably necessary. However, for the vast majority of people who live in this country, it is not.


This is true. But what I wish for is better public transport in the countryside. An automated train that could also run late at night, and more buses, would have relieved most of my woes when I was living in the countryside.

You could even make it so that you drive it yourself and keep it outside your home to use when you need.

What an innovation that would be


Would be great if you could afford the $40k and insurance to house that 4000lb hunk of precious metal you spend energy moving around everywhere you go. A bus pass is $1.75, on the other hand...

263 million cars in America.

Don't be silly, not everyone is buying $40k cars. And the bus is a great solution if it exists, but in most places on planet earth it does not.

The bus pass wouldn't be $1.75 if it serviced the rural areas. The economics just don't work with low population density.

Look at NJ Transit: in rural areas, if you have a train station you get 2-3 commuter trains in the morning on weekdays and no service on the weekend or during the day/late at night. And that's for NJ, which has high population density even in rural areas, is physically small, and is part of the Northeast Corridor.


Yeah, very clever. There are many reasons why one would like to avoid having to take a car all the time:

1. I cannot drive if I go to a friend's and drink alcohol or consume weed

2. Why would I drive if somebody/something could do it for me, and I get to do whatever I want during that time

3. Gasoline, insurance, maintenance, repairs, ... a car can be very costly, especially if used intensively

4. Cars pollute a lot; maybe I don't want to contribute to the issue of global warming


And yet there are almost as many cars as adults in the USA.

How do you think all those issues are solved presently?


True, but why not build out transit for existing dense populations which are congested?

If you visit Europe you can see this system working with a much less dense population and connections for small towns.

This is true, but there’s no rolling back history. We’re quite locked in to our current situation and the best we can do is work within the constraints that have been established.

I sometimes wonder, from a science fiction point of view, how cities would be designed if we somehow had foresight of the consequences (social, environmental, etc.) of building infrastructure centered around the usage of cars for daily transportation. We don't really hold much resentment for how things started since people just didn't know about the problems that over-reliance on cars would cause decades down the line. Would people still consider the convenience/economic opportunity the automobile afforded to outweigh the problems?

(Also, there is a body of fiction set in a future Earth where, since people several decades ago did have knowledge of the problem of climate change, they end up being collectively resented by their descendants for their inaction/ignorance in addressing it.)


Or set the clock right and invest in road diets and protected bus lanes. These are solutions that any city can implement but doesn't, because they aren't politically agreeable to local electorates. Nearly all american metros were laid out around the street car, anyway.

Those pesky local electorates. Always stopping progress.

Maybe, just maybe, the majority of people who live in those cities like the way they're arranged?


A small minority biased towards homeowners who stand to benefit economically from constrained supply and rising costs are the ones voting, not the majority of people who live in these cities.

it's becoming more and more clear. self driving technology will only put more 2000 lb machines on highways clogging up space. USA NEEDS TRAINS!

Not necessarily: a self driving car takes you to a self driving coach for the long distance, then back to sdc for the last miles. All electric. Very efficient.

Not as efficient as a freaking train with hundreds of people transporting basically only what is needed.

Trains will help highway traffic between major cities, but they don't address how spread out suburbs are from people's jobs and support infra (grocery stores, doctors, restaurants, etc). It's a chicken-and-egg problem now:

- people are spread out because they've owned cars, so the distance doesn't bother them

- If you take their cars away, they're too spread out to support themselves, because of how the towns were built.


Totally. American cities were built with cars in mind, and it's really crippled their potential. Densely packed cities with subways and a lot of vertical freedom are the best way to build. They prevent deforestation because you don't need to expand outward, and they help with pollution because you can have maybe a couple hundred metro cars instead of millions of individual cars. I'm not sure how this would be implemented now though, because like I said, American cities have been crippled by their car infrastructure.

The 'problem' doesn't even exist; non-autonomous cars work fine in America. Autonomy is a luxury product.

Aside from those massive numbers of dead pedestrians. (Edit: not that I think autonomous cars is the answer - but it's clear that regular cars are a problem)

I hear you about pedestrian deaths, but the numbers aren't actually that bad (accepting some level of accidental deaths will occur no matter what we do).

Cyclists coming in at 857 in 2019 and people on foot coming in at 6,283[1].

That's not to trivialize those deaths... but from what we've seen so far, I'm not convinced self-driving cars are going to do any better.

[1] https://www.usnews.com/news/us/articles/2019-10-22/pedestria...


lol, there is a serious car accident in America every 3 seconds. People die every day in car accidents. Is that what you consider "working fine"?

> 'Is that what you consider "working fine"?'

First off, yes. It's a big country so figures like that don't mean a whole lot. I wager I'm more likely to be killed by cheeseburgers than by a car. 50,000 cheeseburgers are eaten every three seconds in America.

But another thing: if Tesla were really trying to save lives, they'd have this software turned on for all their cars. That's not what they're doing. Rather they're selling activation of this software for thousands of dollars. They're selling a luxury product.



Owning your own car seems like the much bigger waste in the long run. Imagine only needing a fraction of the vehicles in total.

Owning your own car will become a luxury for many people. Kinda like owning a horse.

I‘d still rent one every once in a while for road trips and camping though.


I never owned a car, but going anywhere, anytime, at will is a very real thing and cars are the obvious solution. Humans are terrible at safe driving and self driving will eventually solve this. I don't see how this is not the future of transportation.

The obvious solution often isn't the right solution.

We're going to lose the global warming battle if we don't reduce personal cars in transportation. (And no, EVs don't solve it, they still have way too big CO2 footprints)

> personal cars in transportation.

the EPA[1] says 4.6 metric tons of CO2 from an average car. There were[2] 1bn cars in service globally, as of 2010.

“It has been estimated that just one of these container ships, the length of around six football pitches, can produce the same amount of pollution as 50 million cars."[3]

1bn/50m = 20 cargo ships equal the estimated world carbon footprint of automobiles.

There are 50,000 cargo ships globally. There's your problem, go nuclear or go home.

[1] https://www.epa.gov/greenvehicles/greenhouse-gas-emissions-t... [2] https://www.worldometers.info/cars/ [3] https://inews.co.uk/news/long-reads/cargo-container-shipping...
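Written out (taking the cited figures at face value; as a reply below notes, the "50 million cars" ship comparison is about bunker-fuel pollutants rather than CO2), the arithmetic is just:

    # back-of-the-envelope from the figures cited above (illustrative only)
    cars_worldwide = 1_000_000_000          # ~1bn cars in service as of 2010
    co2_per_car_tonnes = 4.6                # EPA average per car per year
    cars_per_ship_claim = 50_000_000        # "one ship = 50 million cars" claim
    cargo_ships = 50_000

    print(cars_worldwide * co2_per_car_tonnes)     # ~4.6bn tonnes of CO2 per year from cars
    print(cars_worldwide / cars_per_ship_claim)    # 20 "ship equivalents" for the whole car fleet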


Cargo ships look like a diversion pushed by lobbies. It should be relatively easy to solve (upgrading 50k engines is much easier than redesigning transport on land around trains, subways and buses). The trick is to conflate nitrogen oxide emissions with air pollution.

When I look at the data I find that transport is "20% of CO2 emissions" (https://ourworldindata.org/co2-and-other-greenhouse-gas-emis...) but shipping is "2.2% of the global human-made emissions in 2012" (from the IMO: http://www.imo.org/en/OurWork/Environment/PollutionPreventio...)

What does it mean? Well, shipping uses dirty fuel and is very polluting, but it's also a scapegoat for individual consumerist culpability.


Have a look at figure 8.1 here:

https://www.ipcc.ch/site/assets/uploads/2018/02/ipcc_wg3_ar5...

Road transport is growing catastrophically, in addition to dominating other transport in absolute terms.

I have difficulty understanding how you can present a figure like 4.6 tons per car and argue that it's not a lot, especially as developing countries are headed dangerously toward widespread car ownership.

The "50000x" number about cargo ships isn't CO2 emissions, it's about other pollution due to cargo ships using bunker fuel.


We can go anywhere, anytime, at will after the roads have been built and the pedestrians cleared away, no? The occasional neighborhood also cleared away for a freeway.

All that money and energy could have gone somewhere else.


> Humans are terrible at safe driving

I'd dispute this. I'd say humans are excellent at many aspects of it. The high mortality is simply because driving is inherently dangerous and we do a lot of it.

I imagine the solution is basically what has happened over the last 60 years: gradual changes to improve safe driving.


"Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day.

An additional 20-50 million are injured or disabled."

https://www.asirt.org/safe-travel/road-safety-facts/


"Over 90% of all road fatalities occur in low and middle-income countries, which have less than half of the world’s vehicles." Not really relevant statistical baseline here. Self driving cars aren't going to help in areas where people can't afford them.

Not in the near future. Think 100+ years. I can totally imagine owning a human driven car to be the luxury choice.

Seems like we're proportionally better at the seemingly more complex task of driving than we are at maintaining our health. Heart disease in the U.S. alone apparently kills about half as many people each year as car crashes do worldwide.

anywhere there's a road, and not forest, park, building, garden, trains, or people...

I think it's pretty absurd that we don't have automated trains and light rails but somehow we think self driving cars are in the near future. Compared to cars on roads it should be trivial to automate trains on tracks. Why don't we go for the lower hanging fruit first?

> I think it's pretty absurd that we don't have automated trains and light rails

We do have them: https://en.wikipedia.org/wiki/List_of_automated_urban_metro_...


"Autopilot" is not misleading. You just don't know what autopilot means.

Autopilot on planes does not mean the plane is self-flying and the pilot doesn't have to pay attention anymore to take over at a moment's notice. Why would you think it's suddenly different in a car?


People in general think this is what autopilot means since that is how it is portrayed in movies and books.

What movies are you watching where they turn on autopilot and the plane lands itself??

If anything, movies portray planes as extremely hard to fly, and the heroic pilot always has to take over from the autopilot.


My airplane can land itself. I always assumed that was normal.

In the movie “Airplane” the autopilot is an inflatable humanlike doll full on manning the plane. :-)

Regardless of what autopilot means to pilots, it means something else to drivers.

Why? Autopilot is a term borrowed directly from aviation.

Most drivers aren't pilots. They don't know what autopilot actually does. They expect it to drive the car for them, and they probably think autopilot does the same on planes. It was a mistake for Tesla to borrow the term.

You're purely speculating on people's knowledge:

> They don't know

> They expect

> they probably think

The alternative is some people are terrible and there will always be people that abuse nice things on purpose.

They know what they are doing, that's why they are trying to trick the warning systems.

These are the same people on the phones and merging without looking in normal cars.

All of that is speculation, but imo it's more likely than yours.


There's no need to speculate.

IIHS did a survey of 2000 people and a driver assistance system named "Autopilot" received the most responses that overestimated the system's capabilities. For example, "Nearly half — 48% — thought it would be safe to take hands off the steering wheel when using the system." In general, about 50% more people overestimated the capabilities of Autopilot compared to the names used by industry leaders (ex. SuperCruise, ProPilot Assist). Other things users were more likely to think they could do with "AutoPilot" included texting and watching a video.

See https://www.iihs.org/news/detail/new-studies-highlight-drive...


Only 6% thought taking a nap was okay though... 16% thought texting was okay.

So clearly not the majority in all aspects and the other names had similar results for those questions.


Because meanings of words can change depending on context.

Speak for yourself. In general, "Autopilot" is misleading. There's no need to just guess about how people interpret it, we know how they do. See https://www.iihs.org/news/detail/new-studies-highlight-drive...

> They should call it "Driver assist" or something like "cruise control plus".

Exactly, similar tech has been named Adaptive Cruise Control since the 90s.


The pilot's name is: marketing. Tesla is well known for rushing engineering and cultivating market cap. Seeing a pattern? Boeing, Uber, ...

Besides the deaths, the worst part is that Tesla/Musk were all about sound science bringing benefits to users. Now it's becoming a stats game played for PR. And I'm not even sure anybody bought a Tesla for AP. It was like a clear-coat-class option.

You think "autopilot" is bad? Do you realize Tesla has a DLC mode called "full self driving"?

This is the ethics of Tesla.

They are marketing primarily, function second.

My biggest concern is that since the company isn't profitable(?), these highly risky things are considered acceptable. Other, profitable auto companies cannot be this reckless.

It's not like they can go after Personal assets.


breaking news from 2018? there’s nothing new here, including the deceased driver’s alleged complaints.

Short sellers on HN !

The news is that the NTSB released new information. Unfortunately their site is now down.

I'm not the person you replied to, but is this new info? I could have sworn I read all this last year...

Yeah, and this article from 2018 has even more information, because it says Walter complained to the dealer, not just to his family as this new article says.

>Walter Huang's family tells Dan Noyes he took his Tesla to the dealer, complaining that -- on multiple occasions -- the Autopilot veered toward that same barrier -- the one his Model X hit on Friday when he died.

https://abc7news.com/3275600/

I guess maybe there's a small chance that the old article was wrong, and he didn't in fact complain to the dealer. But that in itself would be newsworthy in my view.


Sorry to hear that.

Last stat I saw was something along the lines of 3k people dying in crashes every day. I personally "almost crash" at least 4 times a day. Driving is a dangerous game.

You almost crash at least 4 times a day?? I have no stats but I don't think that's normal. You should be driving much more defensively if that's the case.

The drivers around here are very aggressive while at the same time bad at calculating cause and effect of their actions

Yikes. Not sure where you are located, but that sounds pretty frightening. Best wishes!

How much space do you keep in between other drivers and yourself?

It's not about space. It's about how quickly the space can change from driving along to a full-on collision if a mistake is made by myself or any of several hundred other cars.

You have more time to react when you have more space

Other drivers will use the space to cause an accident. There is this tendency by some drivers to fill gaps. It doesn't matter how much space you have; it's you versus all the other cars. Not just the car in front of you but the car behind and the cars beside you on both sides.

US: For 2016 specifically, National Highway Traffic Safety Administration (NHTSA) data shows 37,461 people were killed in 34,436 motor vehicle crashes, an average of 102 per day.

According to the World Health Organization, road traffic injuries caused an estimated 1.35 million deaths worldwide in the year 2016. Or ~3700 per day.


> I personally "almost crash" at least 4 times a day

You probably should not be driving.


I'm increasingly of the opinion that technology like this will never see broad regulatory approval. I just can't see an argument where any agency will be comfortable with acceptable losses, and I don't see any endgame in which this technology does not result in an infinitesimal, but non-zero amount of fatal errors.

On a regulatory basis, true self driving only needs to outperform humans. Humans are good at driving but imperfect; we have a nasty tendency to get ourselves into accidents, both fatal and non-fatal.

My thought was that the public would fear and refuse to use the technology but some brave souls are jumping right in.

My opinion: full self driving is only a matter of time. At some point in the future insurance companies will figure out that self driving is less dangerous than human drivers and start offering cheaper premiums and deductibles to self driving only cars.


"At some point in the future insurance companies will figure out that self driving is less dangerous than human drivers and start offering cheaper premiums and deductibles to self driving only cars"

Where does the faith in the tech and its future come from though? Wouldn't it be more logical to wait and then say "hey, Geico is offering a 20% discount if I buy a new car with these features, guess they must really make a difference" rather than going around proclaiming how much safer they are in advance?


> Where does the faith in the tech and its future come from though?

Well now that you've put me on the spot like that and I have to think about it: baseless I suppose. Just a general faith that technology always improves.

> Wouldn't it be more logical

Yes, it would.


"Just a general faith that technology always improves."

People seem increasingly angry about technology and tech workers though.

So it seems like the climate is oscillating between extremes. Maybe people are getting disoriented.


> Where does the faith in the tech and its future come from though?

https://economictimes.indiatimes.com/small-biz/security-tech... (The Cambrian Explosion in Technology, and How It's Affecting Us | Seeing the progress made across various fields over the last few years, we can ask if we are witnessing a Cambrian explosion in technology today)


The fact about current car safety is that it's already really quite good. In modern cars and "autopilot-feasible conditions" you are talking well below 1 fatality per billion vehicle miles travelled with regular human drivers.

This means that if a model has sold 1 million cars, they each need to drive 100,000 miles with autopilot enabled before the insurance company has enough statistics to say "this is safer than a human".
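A hedged back-of-the-envelope for why the mileage requirement is so large: treat fatal crashes as a Poisson process, where the relative uncertainty after observing N events is roughly 1/sqrt(N). The rate and margin below are assumptions for illustration, not actuarial methodology:

    human_rate = 1 / 1_000_000_000     # assume ~1 fatality per billion miles in easy conditions
    improvement = 0.2                  # suppose the system is claimed to be 20% safer

    # to resolve a 20% difference with ~3-sigma confidence you need 1/sqrt(N) <= 0.2/3
    events_needed = (3 / improvement) ** 2        # ~225 fatal events
    miles_needed = events_needed / human_rate     # ~2.25e11 miles

    print(f"{events_needed:.0f} events -> {miles_needed:.2e} miles")
    # i.e. on the order of a million cars each driving a couple hundred thousand miles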


They might be able to extrapolate from non fatal accidents because they care more about damage which costs money to repair than fatalities. But I take your point - a lot of miles need to be traveled before you'd want to build it into your actuarial models.

No, the "facts" you keep reading about (from the same companies trying to sell you on the technology) are extremely misleading.

Tesla for instance does that statistic against "average driving" but the "Average driving" happens in cities. And most Teslas enable Autopilot on highways, where Tesla recommends enabling it, too.

Accidents happen much less on highways, so of course this "statistic" looks better. Put Autopiloted Teslas in the cities and then see how that statistic fares. My guess is it will become much, much worse.

The more real statistic is that even Waymo, which is about an order of magnitude better than anything else on the market, has an "incident" where a human driver would need to intervene every 5000 miles. For everyone else, a human driver would need to intervene every few hundred miles.

That's far from the "self-driving" technology we were promised.

Relevant posts from someone who used to lead the Waymo project, before it was named Waymo, plus related coverage:

https://www.forbes.com/sites/bradtempleton/2019/06/10/gmcrui...

https://www.forbes.com/sites/bradtempleton/2019/04/18/are-ro...

https://www.theguardian.com/technology/2017/apr/04/uber-goog...


It pains me to see you banned. You've been with HN a long time.

I compared your recent comments to 6 months ago. They seem a bit better now.

Why not just email and apologize? We all have bad days. Why let a string of bad days tank your 8-year history?

Regardless of what you decide, I wanted to leave an encouraging comment. At least one person is thinking of you and cheering you on. Good luck.


If you go back to stirring up drama on HN with offtopic meta comments, we will ban you again, much sooner than the years-long, hundreds-of-emails, dozen-accounts process it involved last time. That was more agony than any other user has single-handedly managed to cause on this site, and we won't go through it again. No more of this please—nothing of the kind—nada—period.

The saving feature is that there are hundreds of other countries.

Others will lead the way, even if the US succumbs to regulatory paralysis.


I have a car with the "autopilot" feature like the Tesla, but they call it adaptive cruise control and lane assist. The adaptive cruise control works pretty well; I guess it only needs to check the radar to see what's in front to slow down. The lane assist is problematic in recognizing different kinds of lane markings and would steer off course from time to time. I guess the current state of "autopilot" technology is simply not there.

Subaru? I gave up on lane assist because of a turn heading home from my old office. It really wanted to drive off that bridge and I didn't trust it to recover properly. It seemed safer to just turn it off after that.

It's another Japanese brand.

Probably Honda. My Civic has both plus Crash Avoidance system. All are very handy and work surprisingly well. I wish automakers would spend more effort on developing these driver-aid systems to increase safety instead of chasing the pipe-dream of self-driving cars.

A lot of rental cars I've used in Italy have this lane assist feature. Italian roads often have... inconsistent or incomplete... markings. After the first day driving I had to Google how to switch it off, because it tried to crash the car several times. I don't understand how these features can be launched in this state.

Anecdotally to the contrary, in California where the lane markings are quite good in most areas, the Tesla AP works great.

It would seem sensible for the "autopilot" to have knowledge of where the lane markings are known to be reliable, and only enable itself there. The vast majority of the world's roads have lane markings that absolutely cannot be relied upon.


At the risk of sounding callous: how many Tesla drivers have previously complained about autopilot and not been killed whilst using it?

How many non-Tesla drivers were killed in the same timespan by driving manually?

This seems like a very naked appeal to emotion, and not relevant to the situation.


According to this article: https://www.caranddriver.com/news/a30877577/driver-tesla-mod...

"Data from his phone indicates that a mobile game was active during his drive and that his hands were not on the wheel during the six seconds ahead of the crash."


Gotta love how quick Tesla is to throw their customers under the bus using telemetry data the customer cannot access on their own..

Their customer is perfectly aware of whether he has a video game going on in his hands instead of a steering wheel. No telemetry needed from that perspective.

Yeah but it is auto pilot! It can drive itself!!

You have no idea how many Tesla drivers will openly admit to taking their hands off the wheel and eyes off the road in order to take off a coat, tend to a child, etc. I work with somebody who claimed to do just that routinely.

Here is somebody from a few days ago bragging about taking their hands off the wheel and eyes off the road: https://news.ycombinator.com/item?id=22231927


People already do the things you listed, though. Hell, when my kids were in car seats I frequently had to reach around behind me to soothe them via touch. AutoPilot would have been a godsend, because I wouldn’t have had to stress out so much about juggling two things at once.

This is the part where you call me irresponsible, right? It seems like every discussion on this topic follows that pattern. The fact remains, though, that humans are in cars, and have all the weirdness that goes along with it. AP makes it easier. A lot.


Not much of a leap between that and playing a video game with your phone, honestly.

You know, it’s almost like we’d benefit from having an autopilot system. I mean, think about it: people drive drunk, or tired, or get in fights with their mom on the phone, or have a crying baby demanding attention, or any number of other things like that.

Wouldn’t it be awesome if we had something to help with that? Like, it doesn’t even have to do everything, just most of it, and keep you from running into shit.

That would be awesome, wouldn’t it?

Yeah, that exists. It’s not perfect, but it’s better than what we had before, which was nothing.


You can wish all you want, but you don't have an autopilot system. The Tesla is "autopilot" up until you actually do what their marketing (and fans) claim it does and get in an accident. Then both Tesla and the fanboys come out of the woodwork denying that it was ever intended to be used hands-off even for a second. In fact Tesla goes way out of their way, presenting telemetry from the car itself “proving” the driver was not in control, and thus it isn't Tesla's fault because the driver was an idiot and down in the fine print on line 42, section (B), it says it isn't an autopilot and the driver should be in command of the vehicle at all times.

See also: this very thread. There seems to be a worryingly high number of people here and elsewhere who, like I said, brag about not paying attention to their driving in a Tesla while simultaneously claiming it isn't actually “auto pilot” because airline pilots have to pay attention or some hot garbage. You yourself openly admit to not paying full attention to your driving because you think the Tesla is in control. Basically, trying to have their cake and eat it too... kind of scary.


You are focused upon strawmen and arguing in bad faith.

It is a scientific fact that AP is safer than not having it. Statistically, you are less likely to get in an accident if you have it. Anecdotally, I have witnessed this with my own eyes, having had it take drastic action before I was even aware of the danger.

I’m done with you specifically, though. I’ve come to believe that anyone who uses the word “fanboy” is almost never someone you can have a productive conversation with.


Drive drunk? Get a cab. Tired? Get a motel. Fight with your mom on the phone? Park your car and continue the conversation. Baby demanding attention? Park your car and care for the baby. Ignore it until you do.

It would be awesome if we had something for that. For now we don't; there is no system that lets you do the tasks above responsibly while in the driver's seat. So park your car first.


> This is the part where you call me irresponsible, right?

Well, yes. But let's say everybody does things like this: the problem is that in a normal car you'll try to keep the hands-off time to a minimum, keep one hand half on the wheel, or just give up and stop the car.

If you rely on autopilot you could lower your attention for a longer time, and you can't rely on autopilot to actually do the right thing: it may work fine ten times and then crash your car into another on the eleventh, killing both you and your baby.

The stress you feel is a good thing.


So are you saying autopilot is, overall, a bad thing? Because it seems that the argument you are making is that the world is worse having it available, an argument I find to be perplexing.

Yes, at the moment. But that's just my personal opinion; I hold it because a lot of people get a false sense of security using it and stop paying enough attention.

Look at this guy. He even apparently noticed this behavior on this road before, but still wasn't paying attention and got killed.

You either need 100% self driving or 0%

(driving assist is fine, as long as it's assist and requires your attention)


The real issue with Autopilot is that it is a 99.99% solution, which makes people feel safe. But actually 99.99% isn't good enough.

99.99% as in, roughly one in 10,000 times when you take your attention off the road (e.g. to soothe a child) you'll hit a situation where you should have intervened with the autopilot. The likelihood of any one person actually having this happen is low, let alone the number of times people don't notice because other drivers resolve the situation. As such, many people feel totally comfortable doing it. At the same time, the roads would be a lot safer if people did not feel comfortable doing this stuff.

There is the caveat that, when you feel you absolutely have to take your attention off the road, having autopilot is a lot better than not having it. In the end, it is hard to say which of these effects is stronger. However, the fact that autopilot causes some people to not pay attention because they feel safe is simply a bad thing.
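To put a rough number on that intuition, a tiny sketch with made-up figures (the failure rate, glance frequency, and fleet size below are assumptions for illustration only):

    # illustrative only: why a 1-in-10,000 failure rate still adds up at fleet scale
    failure_rate = 1e-4                    # assume 1 in 10,000 inattentive moments goes wrong
    glances_per_driver_per_year = 2 * 365  # say each driver looks away twice a day
    fleet_size = 500_000                   # hypothetical number of cars using the feature

    expected_bad_moments = failure_rate * glances_per_driver_per_year * fleet_size
    print(expected_bad_moments)            # ~36,500 unattended failure moments per year across the fleet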


This is mean spirited. The customers family is not perfectly aware.

You're missing the point. The family does not know this nor do they have access to this data which would be extremely useful in determining what their legal options are.

It wasn't Tesla.

The NTSB investigated the crash and got this particular information with Apple's help.


Calling it autopilot, full self driving is borderline criminal.

Borderline?

Autopilot is an appropriate term, taken from aviation. And no one calls it full self driving since that hasn't been released yet.

Seriously, what is people's problem with that word? Tesla actively explains how this works currently (both in person and also in bold font on their website) when anyone buys a car. There is no owner of this car that thinks that it has full self-driving capability. It is literally a non-existent problem.

Because nobody reads the fucking manual. It's like reading the headline only.

> Seriously, what is people's problem with that word?

You see this with every thread about tsla. Some grasp at any nonsense to attack it. I used to think the haters were financially motivated "investors" shorting tsla and I'm sure it was true in the past. But now, I'm thinking it's just people who dislike elon for some reason since they also attack musk/SpaceX in spacex threads. I mean "criminally responsible" for the autopilot name? It's hard to take them seriously at this point.


While that is my view as well, the behaviour of some Tesla drivers indicates that their view is different, and they use the autopilot in a different manner, which is a problem.

You're saying this on an article about two different people who seemed to believe it and paid dearly for their misconception.

We also know that the misconception is widely held. The term itself is misleading (https://www.iihs.org/news/detail/new-studies-highlight-drive...) and Tesla's marketing so much more so.


Wikipedia: 'Autopilots do not replace human operators, but instead they assist them in controlling the vehicle'

What telemetry do you mean? We are talking about data obtained from the iPhone (Screentime?), not from the Tesla.

The family is suing Tesla. Would you rather this evidence was presented in court after the family have already spent thousands in lawyer fees?

This doesn't really solve the PR problem for Tesla, though. If people continue to treat their cars like they're fully self-driving (they will), accidents like this will continue to happen

What Tesla is doing is like selling a space heater that emits carbon monoxide and declaring it's for outdoor use only


According to social media, YouTubers, and Redditors, Teslas drive themselves and are not simply automobiles with electric propulsion but computers on wheels; they have all the necessary hardware built in and the software gets miracle updates over the air; it is called autopilot; there are people sleeping on the highway while their Tesla drives them home; machine learning and AI are about to automate everything; and you are a short-selling devil of the petrol industry if you question anything.

How fair is it to say that the driver should have had their hands on the wheel at all times, and should have acquired a pilot's license or at least studied what autopilot actually is?

Do Teslas have driver attention detectors and warning systems? Does the car beep or something like that when you don't hold the wheel?


Yes, the car makes noises when you don't hold the wheel, and after 3 warnings the Autopilot system is disabled until the car is restarted. Autopilot is very explicitly not suitable for unobserved operation.

How do people sleep or play on their phones then? Are the warnings not frequent and prominent enough? Are people hacking it and disabling it? What's the deal here?

The car detects your hand on the steering wheel. So technically speaking, as long as you keep one hand on the wheel, you can use your phone with the other hand :p. There are also devices available for purchase which convince the car that there is a hand on the wheel.

What I forgot to mention in the previous post: the autopilot does not only complain about hands off the wheel, but if you ignore the complaints long enough, it does stop the car. So people can't drive for long with their hands completely off the steering wheel, until they use some kind of defeating device.
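For what it's worth, the escalation described in this subthread can be sketched as a small state machine. This is a rough illustration of the behavior as commenters describe it, not Tesla's actual code; the timings and thresholds are placeholders:

    # sketch of the hands-on-wheel escalation as described above (placeholder timings)
    NAG_INTERVAL_S = 15.0    # hands-off time before a warning fires (placeholder)
    MAX_WARNINGS = 3         # after this, autopilot locks out until restart

    def nag_step(state, torque_detected, dt):
        """state: dict with 'hands_off_s', 'warnings', 'mode' ('engaged' or 'locked_out')."""
        if state['mode'] == 'locked_out':
            return state                      # stays off until the car is restarted
        if torque_detected:
            state['hands_off_s'] = 0.0        # any wheel torque resets the timer
            return state
        state['hands_off_s'] += dt
        if state['hands_off_s'] >= NAG_INTERVAL_S:
            state['warnings'] += 1            # visual, then audible, warnings
            state['hands_off_s'] = 0.0
            if state['warnings'] >= MAX_WARNINGS:
                state['mode'] = 'locked_out'  # disengages; the car then slows to a stop
        return state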


> So people can't drive for long with their hands completely off the steering wheel, until they use some kind of defeating device.

Is that what happened in this case then? Serious question; never driven a Tesla. https://www.paloaltoonline.com/news/2018/11/30/los-altos-pla...


As the driver was still unresponsive until woken by the police, I would think it was the autopilot eventually stopping the car.

They stopped the car by driving in front of it to trigger the braking sensor.

That would work too :)

You just need to bump the wheel every 15 seconds or so. The required interval seems to vary a certain amount, but in boring, easy, bumper to bumper traffic, where it makes the most sense, that's all that's required, in my experience at least.

> it does stop the car

More specifically: in my tests, it eventually disengages autopilot as well as adaptive cruise control (which only controls the brake and acceleration) while beeping loudly and harshly. This has the effect of slowly causing the car to decelerate.


With a Tesla you hang a banana off the steering wheel. The weight tricks the system into thinking it is a hand.

With GM's Super Cruise you can go completely hands free, that's the intended use, but it has eye tracking to make sure you are constantly watching the road.


I am intrigued by Teslas, and am considering purchasing one. However, I would feel better about the purchase if there was absolutely no autopilot software or hardware on board. Ideally that would mean zero, including non-autopilot builds of all system and control software so there is no chance the car will ever engage autopilot.

Is it possible to buy a Tesla completely devoid of autopilot, or is this type of accident potentially a risk in all of them?


Can't you just not use it? The driver in the crash activated autopilot intentionally shortly before the crash. He did that and then took his hands off the wheel. I can't comment on autopilots general reliability but using it appears to be entirely optional...

As a software engineer it's hard to trust code that you know has flaws, because there could be a flaw that causes autopilot to automatically turn on. Extremely unlikely, I know, but it's definitely a nagging thought. It would be nicer if there were a mechanical cutoff (not really possible in this case) so you could be absolutely sure.

I know, I know -- you can design software so that this type of flaw is completely impossible, but as a user I have no way of verifying this, and my trust has already been eroded by the feature I'm trying to disable so I really want to verify it.


Apparently it's a $7,000 option.

That can be enabled remotely: "Full Self-Driving Capability is available for purchase post-delivery, prices are likely to increase over time with new feature releases"

That means the hardware and software are in the vehicles for purchase via the website. Hence my question.


It is an extra cost option. Don't specify it and it won't be enabled. The hardware and software are still present in the vehicle, just turned off.

There's never been a Tesla incident, that I know of, where the system has turned itself on. Every single crash has been after a deliberate engagement by the driver.


Unintended accelerations in Nissans were blamed on bit-flips due to cosmic rays. I think that one actually was true in the end. So unless you'd like to test the CPUs near nuclear reactors and electron guns, I'd say that there is a (cosmically) small risk of it turning on despite not enabling it.

Wow, that's crazy! I didn't even think about that, but I can totally see how it could be possible. I imagine that the only reliable way to prevent it would be to have multiple redundant systems that operated via consensus.

Or, since the Tesla dev teams move much faster than Nissans, there could just be a bug that causes it to engage when it's not wanted.
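On the "multiple redundant systems that operate via consensus" idea upthread: that is essentially triple modular redundancy, where independent units compute the same result and a majority vote masks a single corrupted output. A minimal sketch, not how any particular vehicle actually implements it:

    # 2-out-of-3 majority voting over redundant outputs (illustrative sketch)
    from collections import Counter

    def majority_vote(outputs):
        """Return the value at least two of the three redundant units agree on."""
        value, count = Counter(outputs).most_common(1)[0]
        return value if count >= 2 else "FAIL_SAFE"   # no majority -> fall back to a safe state

    # a bit-flip in one unit is outvoted by the other two
    print(majority_vote(["autopilot_off", "autopilot_off", "autopilot_on"]))  # autopilot_off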

As programmers, we know that the only way to achieve 100% certainty that code will never be executed is for it not to be present at all. I'm specifically asking if that safety feature -- complete absence of autopilot -- is available as an option.

Every tesla you can buy new has autopilot hardware and software for level 2 autonomy features. Certain older builds have different hardware.

There is an additional cost you can pay for called "Full Self Driving". This fee enables some new functionality like summon mode and the car changing lanes itself when you indicate a lane change. This fee is borderline fraud as people who leased early model 3 cars didn't receive anything close to the functionality described.

Autopilot is not ever active unless the driver enables it. You pull a stalk to turn on adaptive cruise control which only manages your speed and doesn't do any steering. Another pull of the stalk turns on autosteer, which is really just fairly fancy lanekeeping assist. You can buy a tesla and never use autopilot. There is some automatic emergency braking you can't disable, but that's true of nearly all new vehicles.


Autosteer is also disabled by default and must first be enabled in the settings menu while the car is in park.

If you want a hardware cut-off, it's not available. Don't buy the car.

Most people either feel comfortable with a software cut-off or feel comfortable with it present.


"Safety remains Caltrans top priority," he said.

Where are these corporate dummies trained to claim that <the thing we totally screwed up> is "our top priority"? This keeps repeating after every incident, exposing ignorance and a lack of care and attention to essential things.


This happened 2 years ago.

It was a left-hand exit (why are those even a thing?).

The divider did not have its crash attenuator in place because a car had previously crashed into it. I wonder why.

No one ever dies in car accidents.

Hence, Tesla is super bad for marketing its autopilot according to HN.


There's a left-hand exit there because there's an interchange between freeways, and the HOV lane exits on the left, to enable free flowing HOV traffic to avoid merging through general traffic to get to a right exit.

At this interchange, I've seen lots of human drivers change between the exiting HOV lane and the continuing HOV lane (or vice versa) much later than is safe; presumably because they were surprised that their lane was exiting, or they noticed the exit too late, but didn't want to miss it. I haven't seen anyone drive in between the two lanes as if it was a lane, though; and usually the late lane changers are braking, not accelerating.

It is unfortunate that the crash attenuator wasn't reset. Looking at the docs for a similar attenuator[1], it seems like resetting is somewhat involved, and you'd need a trained and properly equipped repair crew to do it, even if the time required is not that much. Scheduling is probably an issue.

[1] https://www.dmtraffic.com/assets/sci_smart_cushion_design_an... page 9-13


Complains to his wife that Autopilot steers him towards concrete barrier; continues to use Autopilot.

Sigh. It's like the NTSB's complaints about the FAA: tombstone, reactive regulation rather than proactively implementing recommendations.

I don't mean to dogpile on Tesla in particular, I think autonomous driving will eventually be far, far safer in the long-term than human-error piloting, but it won't be outlawed because of egos.

>>> Q: Is it still the case that there are several classes of insufficiently-solved autonomous driving problems? IIRC one of the big ones is/was recognizing stationary objects in the motorway when traveling at high speed (80 mph / 129 kph). Shouldn't one of the "prime directives" of autonomous driving be that it cannot operate at speeds higher than it can stop or avoid objects and people? These conditions include blind curves in the hills, children darting out from in front of parked cars, fog, rain or headlights/darkness.

IIRC, is another difficult problem recognizing pedestrians or people on strange modes of transport, such as winter clothing, different heights, skin tones or doing things like being on stilts or riding a unicycle?

PS: when a type of system becomes relied on pervasively, so-called "edge-cases" become everyday occurrences.


That would be all speeds. The worst case is that the computer doesn’t recognize an object as being on a collision path until after impact is imminent, which could be due to the object being filtered out as rain, classified as vegetation, improper predicted trajectory... any time you set up rules for self driving cars, things get very fuzzy very fast.

As a software engineer, all I have to do is think that my colleagues wrote the autopilot code, and that would keep me from ever turning the feature on.

This kind of thing makes me think software engineering should require an actual accreditation like a formal engineering job does.

Try to avoid putting your life in the hands of closed source software.

Wouldn't that make it impossible to drive almost any modern vehicle? Or fly in any modern passenger aeroplane?

Or use the healthcare system

Ok i'll just stop using airlines and GPS, i guess.

I don’t know why open source would make much of a difference in this case. It’s not like open source means guaranteed correctness.

There are going to be Autopilot crashes, and people should disable it if they are uncomfortable with putting their lives in the hands of Tesla's software. I would be. But I see no reason not to believe what they have on https://www.tesla.com/VehicleSafetyReport

It shows Teslas have far fewer accidents than the average car, and that Teslas with autopilot enabled are substantially safer per mile than with autopilot disabled (25-50% more miles driven per accident).

The article says a Prius hit the same place as this Tesla the week before. But the NTSB isn't investigating Toyota.


Teslas are more expensive than the average car (hence might be driven more carefully), and autopilot is more likely to be enabled in predictable driving environments like highways.

> more likely to be enabled in predictable driving environments like highways.

Which is what it was designed for and which is a much easier problem to solve than inner-city driving.

Which IMO I feel like more self-driving companies should have focused on first to perfect their systems. We used feature phones before smartphones for a reason, hardware/software is best developed incrementally.


hence might be driven more carefully

I don't know where you get that idea from, when I'm out cycling it's generally the people in expensive cars that don't pay much attention.


You’re more likely to notice expensive cars, and also more likely to notice cars with unsafe drivers.

If you look at the stats, luxury cars are involved in less crashes per miles driven.


assuming those numbers hold when scaled up, you are compressing the actual distribution into a single number (an average). Overall that high-level number may be correct.

But at the same time, the kinds of people that get into accidents will be a different distribution.

How you drive factors a lot into how likely you are into getting in an accident, even allowing for the other drivers.

If the distribution becomes essentially random then you might actually have a higher chance of getting into an accident than before, if you were a very careful driver.

(Obviously those that benefit the most are the bad drivers since they'll no longer be bad)

Just remember that whenever you summarize a distribution into a single number like an average, you lose a lot of information in the process.


I'd be careful comparing those numbers - there's a selection bias at work. AP is only used in "AP-easy" situations; the harder-to-drive parts are handled by the human driver.

Your claim relies entirely on the veracity of Tesla's safety report. Unfortunately I am inclined to mistrust any such vehicle safety claims directly from Tesla's own website. There is a conflict of interest in that they are incentivized to publish the most favorable numbers. We should be rightly cynical and rely only on numbers from an independent third party, as we would normally do for other companies with a less favorable reputation. Until we have such independently verified stats, we can't take those claims at face value.

I agree that anyone who feels uncomfortable with Tesla's ADAS should simply turn it off. But don't you think it's unfair for the company to place an unfinished product in the hands of a consumer and pass off the risk-management responsibility to them? Tesla's cavalier attitude towards autonomous vehicle safety is concerning to me. They seem to adopt the "move fast and break things" approach, whereas teams like Waymo and Cruise are releasing things in a slower and more controlled manner. And not coincidentally, they've had far fewer accidents that way.


If they truly believe that the autopilot driving is safer than unassisted, they might reasonably consider it an immoral act not to ship it. Given the number of lives lost in auto accidents every day, the issue is not as clear-cut as you're suggesting. It's a real-world trolley problem.

> If they truly believe that the autopilot driving is safer than unassisted, they might reasonably consider it an immoral act not to ship it.

Then Tesla should be giving it away, not charging extra for it. To do otherwise would be immoral.


That’s an absurd suggestion. Every car company charges more for advanced safety features

that's exactly my point, morality has nothing to do with it

They do.

Autopilot is standard in every single car Tesla sells now, for exactly this reason.

Here is the blog post on their website where they talk about making it standard: https://www.tesla.com/blog/update-our-vehicle-lineup


As well as bundling it, prices have been bumped up; to me that is not giving it away for free.

Sorry I misunderstood your previous comment. I assumed you meant that they should include safety features for no extra cost, which they do, not that they shouldn't charge anything at all for safety features, which is such a weird and extreme take that I'm not sure what to even say...

my original comment was around the point of mixing morality and business.

> I assumed you meant that they should include safety features for no extra cost, which they do

I could buy a car without it for $37,500 and now I can't, but I can now buy a car with it for $39,500, and you would still consider this no extra cost?

I must be misunderstanding what you are saying


There’s no moral problem with increasing the price when it becomes a better product.

There arguably is a moral problem selling a product with optional safety features that cost extra.

If Tesla invented immortality fields within their vehicles, I have no doubt they would jack up the price, and that would be fine, because it's now a much better car. To think otherwise would be silly.


No. If they know that it’s safer in some respects but unsafe in others they should make sure it’s only used as an assistance system and make sure the driver is still watching the road when it’s turned on.

What’s immoral is that they allow people to turn this thing on and then play Candy Crush on their phone while the car does the driving, fully aware that this might lead to fatal accidents in some cases. Their marketing is misleading because they suggest that the system is safe and they don’t inform customers about the freak accidents that can happen when the system glitches (bad for sales). That’s not very ethical if you ask me.


They require pressure on the steering wheel and make it quite clear that you're responsible for taking control at any time when you activate autopilot.

You and I probably agree that this might be wishful thinking to expect some portion of the driving public to do so faithfully.

Perhaps additional technology can help better guarantee attention and participation on the part of the driver.


The AutoPilot is marketed as a safety feature, like your link asserts. The issue though is that Tesla's software defect(s) is what killed him. The NTSB would be investigating Toyota if their hardware or software defect(s) had killed people.

In Tesla's case, AutoPilot's lane keeping feature did not keep the car in the lane. Instead it drove the car into concrete at 71 mph, killing the driver. Moreover, the anti-collision feature did not stop the car from colliding with the concrete barrier. See where the issue lies?

Will you tell the people who faced safety issues with their Takata airbags to get rid of them? or the Toyota owners whose vehicles had unintended acceleration that they should rip out their accelerators?


Accidents without AP vs Accidents with AP isn't what's interesting.

Accidents without AP's overreaching half-baked features meant to serve as a marketing gimmick, but with safety technology that's been iterated for almost 20 years like LKAS and AEB, and is now getting affordable, is what's interesting.

Now a Corolla, the most conservative car in the history of mainstream cars, can return you to your lane and warn you if it feels you're making too many mistakes due to being tired.

These features save lives when they work, and more importantly, do not kill you when they malfunction, and do not lull you into a false sense of safety nearly as actively as a system literally named "Autopilot".

AP could literally be the same system, not pretend that lane centering is so easy to do, be much more aggressive about disengaging like other manufacturers (see SuperCruise, which literally tracks your eyes to ensure focus is maintained), and be infinitely more palatable an experiment than it currently is

-

If it seems I'm being aggressively anti-AP here, it's because I didn't sign up to be a part of this beta, yet I have to drive on the road with these people.

I like that a bored developer can't push updates to the lane guidance in my cars. AP has had regressions that brought back deadly bugs.

I'll take a crusty system that was put through thousands of hours of testing then never "improved" again over that any day.


I do believe autopilot has the potential to be safer than driving manually. But I'm not sure what to take out of that safety report.

(1) Comparison against all other cars in the US (and across all states) just introduces lots of sampling bias. Teslas are much more common in states like CA than, say, Tennessee. Tennessee has a much higher accident rate in general than California, and that adds up in the overall stats, but not in the Tesla stats.

(2) What kind of variation do we have in Tesla drivers vs other car drivers. As comment above mentions, Tesla drivers may be more responsible drivers.

(3) What counts as an accident? Minor collisions (fender benders) vs major collisions. More specifically, I want to know how Autopilot reacts when it encounters something it has never seen before (like the crash in the article). Humans can handle things pretty well IMO, given they are not going way beyond the speed limit, and all kinds of random shit happens on the road.

(4) The Autopilot vs manual Tesla stats are promising IMO. However, a little more information would be nice to have, like the proportion of the 1 billion miles logged that are autopilot miles.


I can't remember if it was just for fires (comparing ICE to electric), but I believe the rates are much closer when you compare to newer cars. Teslas are obviously newer and the report does not take this into consideration.

There’s also a sampling bias for the kinds of roads where Autopilot is available, and the kinds of circumstances where people are most likely to trust and use it.

The report is misleading because AP tends to be used far more on controlled access roads which have fewer crashes per mile. There are other biases too but this alone could easily explain more than the observed 1/3 risk reduction.
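
To put a number on that road-mix effect, here's a toy calculation (all rates invented for the example, not Tesla's or NHTSA's figures):

    # Toy illustration of the road-mix bias (rates are made up).
    highway_crash_rate = 1 / 3_000_000   # crashes per mile on controlled-access roads
    city_crash_rate    = 1 / 1_000_000   # crashes per mile on surface streets

    def crashes_per_mile(highway_share):
        """Blended crash rate for a given fraction of miles driven on highways."""
        return highway_share * highway_crash_rate + (1 - highway_share) * city_crash_rate

    ap_rate     = crashes_per_mile(0.90)  # Autopilot miles: mostly highway
    manual_rate = crashes_per_mile(0.40)  # manual miles: mixed driving

    print(f"AP:     1 crash per {1 / ap_rate:,.0f} miles")
    print(f"Manual: 1 crash per {1 / manual_rate:,.0f} miles")
    # AP shows ~45% fewer crashes per mile even though, by construction,
    # it is exactly as safe as the human driver on every road type.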

> The article says a Prius hit the same place as this Tesla the week before. But the NTSB isn't investigating Toyota.

Makes sense to investigate Tesla, since it's effectively the same driver (the software) involved in every accident. It still makes sense even if that driver is safer than everyone else.


Accidents happen more frequently, in terms of miles driven, on city streets where Autopilot isn't being used. Tesla is playing with numbers to make themselves look good. This false sense of security is part of the problem. Autopilot needs to be closely monitored and isn't a substitute for a human driver. I use Autopilot every day and there are cases where, if I wasn't monitoring it, it would have run me into a stalled car or merged me into a concrete barrier.

> It shows Teslas have much fewer accidents than the average car

Tesla’s Driver Fatality Rate is more than Triple that of Luxury Cars (and likely even higher)

https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


Autopilot is only going to be enabled on freeways, where accident rates will always be much lower than surface streets. Not a useful statistic, even if true.

Tesla publishes their safety numbers in terms of accidents per x millions of miles driven. If you believe their numbers then Autopilot in its current configuration is clearly safer than a purely human-operated vehicle. Having driven on Autopilot for about 85% of the total time I've spent in a Tesla, it's been a life changer. But obviously YMMV.

It really sucks that this specific interchange has cost so many lives - and it’s apparent that there’s some frailty of the code that makes autopilot vehicles more susceptible.

In general when it comes to crashes on Autopilot, it’s important to keep perspective. If you take your hands off the wheel and look at your phone for 6 seconds in any other car you’re going to have a bad time. On Autopilot, it took a confluence of bad road design, poor road maintenance, and an unlikely software fault to initiate a crash.


> If you take your hands off the wheel and look at your phone for 6 seconds in any other car you’re going to have a bad time.

Which is why you don't do it in any other car. I don't know much about Autopilot, but because we as humans don't know when it can go wrong, it's easier to get it into situations it wasn't designed for. This is not true for the six-seconds-in-any-other-car scenario, as we will be predicting what the steering wheel and the traffic in front of us might do.


Very true. Tesla has done a lot to try and make it clear to drivers that they must remain vigilant - in this particular case it sounds like the driver ignored several auditory and tactile warnings to resume control of the vehicle, in addition to the standard warnings when Autopilot is activated. Short of changing the name, I think they've done everything they can to manage expectations for Autopilot once you're in the car, but humans don't always work like that.

I do think changing the name would be the responsible thing to do.

If you call something a knife you can't 100% blame the customers if they try to use it to cut stuff, even if you say "Knife™ should not be used for cutting"


> If you believe their numbers then Autopilot in its current configuration is clearly safer than a purely human operated vehicle.

And you shouldn't, because their numbers don't adjust for demographics, type of road use, or driving conditions.

Do Ford Focus drivers have more accidents than Tesla Model S owners? Yes. Is that down to the fact that Tesla owners are primarily middle-aged and Ford Focuses are owned by spotty teenagers? Who knows.

There are fewer crashes when you're on Autopilot. Is that because Autopilot is better than humans, or is it because Autopilot only gets enabled when it's safe? Who knows.


I avoid new technology, exactly because I'm an engineer.

I wonder if that is just me, but when new technology is introduced and hyped, I usually take a quick look at implementations, research, and talks just to get an idea of what the state of the art really is like.

As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues.

You know this effect when you open a new book and immediately spot a typo? I felt that way looking at state of the art AI vision papers.

The first paper would crash, despite me using the same GPU as the authors. Turns out they got incredibly lucky not to trigger a driver bug causing random calculation errors.

The second paper converted float to bool and then tried to use the gradient for training. That's just plain mathematically wrong; a step function's gradient is zero everywhere it's defined, so there is nothing to train on.
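
To be concrete, a minimal TensorFlow sketch (my own, not the paper's code) of what that float-to-bool cast does to the gradient:

    import tensorflow as tf

    x = tf.Variable([0.3, -1.2])
    with tf.GradientTape() as tape:
        y = tf.cast(x > 0.0, tf.float32)   # float -> bool -> float: a step function
        loss = tf.reduce_sum(y)

    # The comparison/cast has no defined gradient, so there is nothing to descend on.
    print(tape.gradient(loss, x))          # None

The usual workarounds are a smooth surrogate (e.g. a steep sigmoid) or a straight-through estimator.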

The third paper only used a 3x3 pixel neighborhood for learning long-distance moves. That doesn't work; I cannot learn about New York by walking around in my bathroom.
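
Rough back-of-the-envelope on why (assuming plain stride-1 convolutions, which is my assumption about their setup):

    # Receptive field of d stacked 3x3, stride-1 convolutions: 2*d + 1 pixels.
    def receptive_field(depth, kernel=3):
        return 1 + depth * (kernel - 1)

    for depth in (1, 5, 20, 50):
        print(depth, "layers ->", receptive_field(depth), "px")
    # A single 3x3 neighbourhood sees 3 px; even 50 stride-1 layers only see 101 px,
    # nowhere near enough for large inter-frame displacements.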

That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background. AI is stochastic gradient descent optimization, after all.

Thanks to TensorFlow, it is nowadays easy to try out other people's AI. So I took some photos of my road and put them through the state-of-the-art computer vision AIs trained on KITTI, a self-driving car dataset of German roads. None of them could even track the wall of the house correctly.

So now I'm afraid to use anything self-driving ^_^


Optimistically, maybe everybody is hiding their 'secret sauce'. But I expect you're probably right.

I was optimistic like that in university. Great times :)

But then I went into consulting and saw that big companies have teams of lawyers that settle proactively out of court to keep inconvenient truths out of the public opinion.

Like the first few exploding iPhones.

(Just an example, never worked there)

BTW, did you hear about the Uber crash where their AI couldn't track a pedestrian and then killed her?


That’s an unfair characterization if you’re talking about the crash in Arizona. Perhaps: “Did you hear about the Uber crash where the AI couldn’t track a pedestrian jaywalking at night across the middle of a nearly pitch black two lane highway and then killed her?” would have been more apt. https://youtu.be/ufNNuafuU7M

That being said, I’m a huge skeptic of the current state of self driving cars. I would have assumed these systems use LIDAR as well as vision and could have at least slammed on the brakes.


A more precise narrative, as shown in the evidence submitted by Uber to the NTSB (as opposed to the PR video), is "the AI saw something for 6 seconds, but because it couldn't decide what it was, it drove through. Turned out to be a human."

It only looks pitch black in Uber's badly-exposed dashcam video. See https://news.ycombinator.com/item?id=22307013

oh wow, that is certainly different than the news video which I assumed was relatively unbiased

Police concluded that given the same conditions, Herzberg would have been visible to 85% of motorists at a distance of 143 feet (44 m), 5.7 seconds before the car struck Herzberg.

A vehicle traveling 43 mph (69 km/h) can generally stop within 89 feet (27 m) once the brakes are applied.

The police explanation of Herzberg's path meant she had already crossed two lanes of traffic before she was struck by the autonomous vehicle.

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

I was also misled by the poorly exposed "official" video. Given the numbers above, there was time for a human driver to see her and even come to a complete stop. Further, since she was moving from one side of the road to the other and only entered directly into the vehicle's path in the last 1.3 seconds (image in the "Software issues" section of the Wikipedia article), it is likely that all that would have been needed to avoid the collision was a minor slowdown, and she would have completed her crossing safely.
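
(Quick sanity check on those numbers, assuming roughly 0.7 g of braking, which is my assumption rather than anything from the report:)

    # Stopping distance at 43 mph with ~0.7 g deceleration (assumed value).
    v = 43 * 0.44704          # 43 mph in m/s
    a = 0.7 * 9.81            # assumed deceleration once brakes are applied, m/s^2
    d = v ** 2 / (2 * a)
    print(f"{d:.0f} m ({d / 0.3048:.0f} ft)")   # ~27 m / ~88 ft, in line with the quoted 89 ft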


I hate that people attribute that accident to a visual issue because of that video. It wasn't a visual issue. It was 100% a programming issue and everyone involved should be criminally liable for negligence, IMO.

I design medical devices and have the exact same opinion. It's the same logic as avoiding the first model year of production when buying a used car.

I design industrial automation equipment and have much the same feelings about those devices. I've programmed robot cells, with redundant software and hardware safety systems. I've run through the checklists to make sure the operators are SIL-3 safe with a high-speed high-power robot running just inches away through a piece of 1/4" polycarbonate - there to keep the operators out, it won't keep the robot in. I know all the engineering that goes into those systems, and that's why I will never ride a Kuka-coaster (a chair bolted to a 6-axis robot).

Also, I'm quick to reach for a simple pneumatic cylinder to solve a problem. Perfectly capable of using new electric-servo-ballscrew-hotness to do a similar move, but the value provided by tried-and-tested systems is hard to overstate.


> So now I'm afraid to use anything self-driving ^_^

I'm in the same boat, but unfortunately you have to share the road with these things, so it's hard to completely avoid them. I do find myself avoiding Teslas on the freeway more and more.


I started avoiding Teslas on the freeway after watching the video of this guy putting on his makeup and shooting a youtube video while on autopilot: https://www.youtube.com/watch?v=cT_rEa4X1nA

Just curious: do you also avoid Subarus, SEATs, BMWs, Audis and the other cars with adaptive cruise control? Or is it only Tesla drivers you consider dangerous?

It's called Tesla Autopilot. The average user is more likely to do silly things in a car that is advertised to have an autopilot rather than adaptive cruise control.

This resonates with me, and of course the classic xkcd on electronic voting https://xkcd.com/2030/

The difference is that electronic voting is a bad idea in principle, regardless of the implementation. Fully self-driving cars might actually be possible, but probably not with current software and hardware.

Do you mind explaining why it is a bad idea in principle? It feels like if we have a decent implementation (a big ask, to be sure), it would be safer and more convenient than the actual way.

Voting systems have a number of key requirements. To prevent bribes or coercion, the vote has to be anonymous and the voter must not be able to prove his vote. On the other hand, it must be possible to verify that each cast vote has been counted correctly. Finally, the whole process should be transparent and understandable for every interested voter.

These requirements can be easily fulfilled with a well designed paper ballot system. I don't see any chance of doing the same with anything computerized.


> To prevent bribes or coercion, the vote has to be anonymous and the voter must not be able to prove his vote.

But we don't apply this rule to the votes where bribery and coercion are most practical to start with, where there are a small number of voters that can be intensely targeted, and swinging a small number is sufficient to decide major outcomes.


I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

Granted, we're not quite ready for self driving, but there's no question that the neural network subfield of ML has absolutely exploded in the last 5-10 years and is bursting with productionizable, application ready architectures that are already solving real world problems.


You sound like an NVIDIA salesperson trying to sell me on a $3000 Titan ;)

There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff.

I also have no doubt that research activity has exploded, which might be related to the generous hardware grants being handed out...

But all that research has produced surprisingly little progress over algorithms from 2004 in the field of optical flow.

The papers I looked at were the top-ranked optical flow algorithms on Sintel and KITTI. So those were the AIs that work best in practice, better than 99% of the other AI solutions.


While it's not my area of expertise, I am a bit wary of contest results. It seems like an exercise in overfitting via multiple comparisons? Maybe some algorithms with a slightly lower rank are actually more robust?

If it's as bad as you say, it seems like a critical evaluation would be pretty interesting and advance the field.


I wonder how many solutions within the AI field could just be categorized as "Automation"

All of them. That's how AI works. Not by making smarter machines, but by destroying intelligence by smashing it into machine-digestible bits.

That's what caused the first "AI Winter": Rules-based "AI" engines became what we call "Business Rules". AI didn't go away - it just stopped being an over-valued set of if-then logic and slots (value-driven triggers) with a cool set of GUI tools to build rule networks.

Source: Used to work for an 80s-era "AI Company"


>There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff

We're way past memorization. We're into interpolation and extrapolation in high D spaces with Bayesian parameters. Sentiment analysis and contextual searches - search by idea, not keyword. Heuristic decision making. Massively accelerated 3D modeling with 99% accuracy. Generative nets for text, music, scientific models...

Sorry, but you're behind the times, and that's ok - one of us will be proven right in the next 1-5 years. Based solely on the work we're doing at the startup I'm working for, we're on the cusp of an ML revolution. Time will tell, but personally I'm pretty excited. And don't worry, I'm not working in adtech or anything useless.

That said, the driving problem seems to be quite far from being solved, I agree, though it is outside my expertise; I think the primary issue is that this is an application where the error rate must be unrealistically low, a constraint which does not apply to many other domains. You can get away with a couple percent of uncertainty when people's lives aren't on the line!


Would you be willing to link to some papers and cite some specific algos to play with? The above cited specific algos. What are the new versions of these named algorithms?

> Granted, we're not quite ready for self driving

And yet it's literally in cars on the road.

I'm not saying you're wrong because of that. I just wonder how far from "ready" we are, and how much of a gamble manufacturers are taking, and how much risk that presents for not just their customers, but everyone else their customers may drive near.


Based on Tesla's safety report [1] it's already less dangerous than letting humans drive (alone). The error rate of human drivers tends to be downplayed, while the perceived risks of automated driving are exaggerated, distorting the picture.

Yes, it's a hard problem, yes we are not nearly there and there is a lot of development/research to do. Yes, accidents will happen during the process. But humans suck at driving and kill themselves and other people daily. It's the least safe form of transportation we have.

[1] https://www.tesla.com/VehicleSafetyReport?redirect=no


The gross human fatal accident rate is ~7 accidents per billion miles in the US, including fatalities caused by incompetent or irresponsible drivers, and substantially lower in Europe. But humans drive a lot of miles.

Based on Tesla's safety report, 'more than 1 billion' miles have been driven using autopilot. Given the small data sample and the fatalities already attributed to autopilot, I think we're some way from proving it's safer than letting drivers drive alone, never mind close to being a driver substitute.
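
To put a rough number on "some way from proving it" (the observed fatality count below is a placeholder, not an official figure):

    from scipy.stats import chi2

    k = 5               # hypothetical Autopilot fatality count -- substitute the real one
    miles_billions = 1.0
    human_rate = 7      # fatal crashes per billion miles, the baseline above

    # Exact (Garwood) 95% Poisson confidence interval for the underlying rate
    lo = chi2.ppf(0.025, 2 * k) / (2 * miles_billions)
    hi = chi2.ppf(0.975, 2 * (k + 1)) / (2 * miles_billions)
    print(f"rate consistent with {k} deaths in {miles_billions:.0f}B miles: "
          f"{lo:.1f} to {hi:.1f} per billion")
    print(f"human baseline: {human_rate} per billion")
    # With so few events the interval easily straddles the ~7/billion human baseline,
    # so the data can't yet distinguish 'safer' from 'less safe'.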


Marginal Revolution just highlighted an interesting detailed look at US car (driver) fatalities. https://marginalrevolution.com/marginalrevolution/2020/02/wh...

>> After accounting for freeways (18%) and intersections and junctions (20%), we’re still left with more than 60% of drivers killed in automotive accidents left accounted for.

>> It turns out that drivers killed on rural roads with 2 lanes (i.e., one lane in each direction divided by a double yellow line) accounts for a staggering 38% of total mortality. This number would actually be higher, except to keep the three categories we have mutually exclusive, we backed out any intersection-related driver deaths on these roads and any killed on 2-lane rural roads that were classified as “freeway.”

>> In drivers killed on 2-lane rural roads, 50% involved a driver not wearing a seat belt. Close to 40% have alcohol in their system and nearly 90% of these drivers were over the legal limit of 0.08 g/dL.

I don't think people give enough attention to whether broad statistics actually apply to cases of interest. That's about 40% of all driver fatalities occurring on rural non-freeway roads, of which ~35% (~14% overall) involved a driver over the legal alcohol limit.

People compare various fatality rates associated with riding an airplane vs driving a car all the time, but I've never seen anyone point out that an incredibly simple mitigation you're probably already doing -- not driving on non-freeway rural roads -- lowers your risk of dying in a car accident by more than a third. And it gets even better if you're not driving drunk!

If you measure driving quality in terms of fatality rate, it is actually the case that almost everyone is better than average. A lot better than average. But public discussion completely misses this, because we prefer to aggregate unlike with unlike.


You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

If half of all driving occurs on highways and half doesn’t, and half of all accidents are on highways, then avoiding highways will have absolutely no effect on your accident rate.

It’s possible that driving on these roads leads to a disproportionate accident rate, but you haven’t actually said that.


True. I think there's plenty of non-statistical reason to believe you can reduce your risk of death by not being one of the 50% of drivers involved in accidents on those roads who weren't wearing a seat belt or ~35% who are over the drink drive limit though.

> You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

You're right in spirit. I actually addressed this in passing in the comment "an incredibly simple mitigation you're probably already doing". Rural roads carry less traffic than non-rural roads for the very obvious reason that most people don't live in rural areas. The disparity is documented: https://www.ncsl.org/research/transportation/traffic-safety-...

We can also note that freeway vehicle-miles (excluded from this rural roads statistic) are going to be an inflated share of driven miles precisely because the purpose of the freeway is to cover long distances.

But as to the specific number I provided ("more than a third"), you're on target in accusing me of a fallacy.


What a surprise that Tesla's report would say that!

Have they released all the data to be analyzed by independent people?

Also autopilot only runs in the best of conditions. Are they comparing apples to apples?


That report is comparing humans driving in all conditions vs autopilot driving in only the best conditions. Humans are deciding when it is safe enough to turn autopilot on. So no, it is not less dangerous.

That's not what the report is comparing at all. The report is comparing all vehicles driving in all conditions vs Teslas driving in all conditions (separate for with and without autopilot).

The numbers show that Teslas experience a lower crash rate than other vehicles. Granted, this can be due to a number of reasons, including the hypothesis that humans deciding to buy Teslas drive more carefully to begin with. And the numbers show that turning on autopilot further reduces crash rates.

This at least tells us that letting the vehicles with the automated driving and safety features on the road doesn't increase the risk for the driver and others, which was the original premise I responded to.


There's a million hidden variables here that could explain the difference:

- The mechanical state of the car (Teslas with autopilot tend to be new/newish vehicles, and thus in excellent mechanical shape)

- The age and competence of the driver - I'm guessing people who make enough to buy a Tesla are usually not senile 80-year-olds or irresponsible 18-year-olds

- Other security gizmos in Teslas that cheaper cars may lack

Overall, it would be more fair to compare against cars of similar age and at similar price point.


And how much have Teslas driven in snowy fog in the mountains on autopilot?

I think the tricky part is that at some level you want to be comparing counterfactuals. That is, accident rates of Teslas on autopilot with a driver of Tesla-driver abilities, in road conditions where the accidents occur, and so forth.

It kinda seems self evident that a car that drives you into a wall randomly is less safe than one that doesn't.

I grant that Teslas might be safer than eg a drunk driver, and so we might be better off replacing all cars with Teslas in some sense, but we'd also be better off if we replaced drunk drivers with sober ones. But would safe, competent drivers be safer, and would that be ethical? At that point are you penalizing safe competent drivers?

Drunk drivers in Teslas are actually interesting for me to think about, because I suspect they'd inappropriately disengage autopilot at nontrivial rates. I'm not sure what that says but it seems significant to me in thinking about issues. To me it maybe suggests autopilot should be used as a feature of last resort, like "turn this on if you're unable to drive safely and comply with driving laws." But then shouldn't you just not be behind the wheel, and find someone who can?


Beware of the No True Scotsman fallacy. A human who drove into a wall could not possibly have been a Safe, Competent Driver, could they? A True Safe, Competent Driver would never drive into a wall.

Unless you're serious about bringing the bar way up for getting a driver's license, I think it's fair to compare self-driving technology with real humans, including the unsafe and incompetent. In most of the world, even those caught red-handed driving drunk are eventually allowed to drive again.


>Based on tesla's safety report [1] it's already less dangerous than letting humans drive (alone)

You mean the company that has staked its future on selling this technology claims the technology is better than any alternative?

This is aside from the fact that the NHTSA says the claim of "safest ever" is nonsense and that there is zero data in that PR blog post.


Is there any chance that tesla is lying with statistics?

A fun example: someone was selling some meat and said it was 50% rabbit and 50% horse, because he used 1 rabbit and 1 horse. The conclusion is that when you read statistics you want to find the actual data and check whether the statistics are used correctly; most of the time, as in this case, the people doing the statistics are manipulating you.

There was an article about a city in Norway with 0 deaths in 2019; if I limited my statistics to that city only, and to that year only, I would get the number of 0 people killed by human drivers.


I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist: the driver needs to keep his eyes on the road and be ready to intervene at any given time.

We're not at the self-driving level of kicking back the seat and watching Netflix on your phone yet.

I doubt we will ever get there; there will always be edge cases which are difficult for a computer to grasp: faded lane markings, some non-self-driving car doing something totally unexpected, extreme weather conditions limiting visibility for the cameras, etc.


> I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist. That the driver needs to keep his eyes on the road, and ready to intervene at any given time.

-This is the scariest bit, IMHO. Basically, autopilot is well enough developed to mostly work under normal conditions; humans aren't very good at staying alert for extended periods of time just monitoring something which mostly minds its own business.

Result being that the 'assist' runs the show until it suddenly veers off the road or into a concrete barrier, bicyclist, whatever. 'Driver' then blames autopilot; autopilot supplier blames driver, stating autopilot is just an aid, not a proper autopilot.

This is the worst of both worlds. Driver aids should either be just that - aids, in that they ease the cognitive burden, but still require you to pay attention and intervene at all times - or you shouldn't be a driver anymore, but a passenger. Today's 'It mostly works, except occasionally when it doesn't' is terrifying.


This "driver aid" model itself is starting to sound like a problem to me. You either have safe, autonomous driving or you don't.

A model where a driver is assumed to disengage attention, etc., but is then expected to re-engage in a fraction of a second to respond to an anomalous event is fundamentally flawed at its core, I think. It's like asking a human to drive and not drive at the same time. Most driving laws assume a driver should be alert and at the wheel; this is what...? Assuming you're not alert and at the wheel?

As you're pointing out, this leads to a convenient out legally for the manufacturer, who can just say "you weren't using it correctly."

I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.


It's only a problem if you believe in driverless cars, then it becomes a Hard Problem: "it works in situations where it's irrelevant", but so does plain old not holding the wheel: look, it's self-driving!* (in ideal conditions)

Reminds me of Bart Simpson engaging cruise control assuming its something like an autopilot. Goes good for a little while haha.

> I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.

-The cynic in me suggests we need autopilot as a testbed on the way to the holy grail of Level 5 autonomous vehicles.

The engineer in me fears that problem may be a tad too difficult to solve given existing infrastructure - that is, we'd probably need to retrofit all sorts of sensors and beacons and whatnot to roads in order to help the vehicles travelling on it.


Road sensors ain't gonna fix the long tail of L5. We can't even upkeep roads as is, like crash attenuators, which would have mitigated the fatality in OP article.

Also, highway lane splits are very dangerous in general. It's a concrete spear with 70mph cars whizzing right towards it. Around here, they just use barrels of material, sand I believe. Somebody crashes into one, they clean the wreck, and lug out some more sand barrels. Easy and quick.


It isn't the SOLE action for L5 to be feasible, but I believe it is a REQUIRED action. (Emphasis added not to insinuate you'd need it, but rather to show, well, my emphasis. :))

For the foreseeable future, there's simply too many variables outside autopilot manufacturers' control; I cannot see how car-borne sensors alone will be able to provide the level of confidence needed to do L5 safely.

Oh, and a mix of self-driving and bipedal, carbon-based-driven ones on the roads does not do anything to make it simpler, as those bipedal, carbon-based drivers tend to do unpredictable things every now and then. It'll probably be easier when (if) all cars are L5.


I see this stated often, that humans are unpredictable drivers. What's the proof that automated systems will be predictable? They too will be dealing with a huge number of variables, and trying to interpret things like intent etc.

Yes, automated systems will also do unpredictable things - the point I was (poorly, as it were) trying to make was that the mix of autopilots and humans is likely to create new problems; without being able to dig it out now, I remember a study which found that humans had problems interacting with autonomous vehicles as the latter never fudged their way through traffic like a human would - say, approaching a traffic light, noting it turned yellow, then coming to a hard stop, whereas a human driver would likely just scoot through the intersection on yellow. Result - autonomous vehicles got rear-ended much more frequently than normal ones.

So - humans need to adapt to new behaviour from other vehicles on the road.

When ALL vehicles are L5, though, they (hopefully) will all obey the same rules and be able to communicate intent and negotiate who goes where when /prior/ to occupying the same space at the same time...


I think that unless a single form of AI is dictated for all vehicles, we can't safely make the assumption that autonomous vehicles will obey the same rules. Hell, we can't even get computers to obey the same rules now, either programmatically or at a physical level.

-That is a very valid point.

And, of course, they should all obey the same rules (well, traffic regulations being one, but also how they handle the unexpected - it would be a tough sell for a manufacturer who would rather damage its own vehicle than other objects in the vicinity in the event of a pending collision, if other manufacturers didn't follow suit)...

Autonomous Mad Max-style vehicles probably isn't a good thing. :/


Which is why most car companies long ago said they wanted to skip level 3 and go direct to level 4. With level 4 when the car can't drive it will stop and give the human plenty of time to take over.

The weird thing is there seems to be a discrepancy between these publicized figures of millions of miles of Autopilot on the roads, and the general feeling you get when you turn on the system yourself. I've used it on a Model 3 and it at least feels horribly insecure; the lines used to show detection of the curbs are far from stable and jitter around often. Maybe it's safer than it seems, but the feeling is I would absolutely not put my life in the hands of such a system... Just looking at all the YouTube videos of enthusiasts driving around the countryside with Autopilot, it's like watching a game of Russian roulette: suddenly the car starts driving along the other side of the road or veers off into a house. I would categorize it as a glorified lane-assist system in its current state.

Even Tesla's marketing copy describes it that way, so I don't think you are too far off.

>Autopilot enables your car to steer, accelerate and brake automatically within its lane.

>Current Autopilot features require active driver supervision and do not make the vehicle autonomous.


> And yet it's literally in cars on the road.

It is not. There is no real self-driving on the road, at least not in conventional vehicles. Tesla's Autopilot is basically a collection of conventional assistive systems that work under specific circumstances. Granted, the circumstances where it works are much broader than the ones defined by the competition, but for a practical use case it's still very restricted. Self-driving systems can be affected by very minor changes in lighting, weather and other circumstances. While Tesla's stats on x million miles driven under Autopilot are impressive, they do not show the real capabilities of the self-driving system. For example, you can only enable Autopilot under specific circumstances, say while driving on an Autobahn in clear weather. In circumstances with limited sight, for example, Autopilot won't turn on or will hand over to the driver, simply because it would fail. Of course, this is for passenger security, but these are situations real self-driving vehicles need to handle. Other leading projects like Waymo also test their vehicles under ideal circumstances with clear weather etc.

We'll most likely see fully self-driving vehicles in the future, but this future is probably not as close as Tesla PR makes us think.


> There is no real self-driving on the road

Emphasis on real. There is definitely something that most people would refer to as "self driving" in cars on the road.

I'm not saying what is there is specifically good at what it does - I'm saying someone put it into use regardless of how fit for purpose it is.

> but this future is probably not as close as Tesla PR makes us think

Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".


> Emphasis on real. There is definitely something that most people would refer to as "self-driving" in cars on the road.

Then you'd have to define what a self-driving car actually means. At least for me, self-driving means level 4 upwards. Everything below I'd consider assisted driving.

> Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".

As I said, this Smart Summon feature also only works under very specific circumstances with multiple restrictions (and from what I've seen on Twitter it received mixed feedback).

Just because the car manages to navigate a parking lot at 5 km/h relatively reliably, that doesn't mean that at the end of the year it'll be able to drive the car at 150 km/h on the Autobahn.

Edit: Fixed my formatting


> Then you'd have to define what a self-driving car actually means.

I said "for most people". For most people I know, a car that will change lanes, navigate freeways and even exit freeways is "self driving". It may be completely unreliable but even a toaster that always burns the bread is called a toaster: no one says "you need to define what a toaster means to you".

> At least for me, self-driving means level 4 upwards.

I have literally zero clue what the first three "levels" are or what "level 4" means, and I'd wager 99% of people who buy cars wouldn't either.

> that doesn't mean that at the end of the year it'll be able to drive the car with 150 km/h on the Autobahn.

30 seconds found me a video on youtube of some British guy using autopilot at 150kph on a German autobahn last June.

Again: I'm not suggesting that it is a reliable "self driving car". I'm suggesting that it is sold and perceived as a car that can drive itself, in spite of a laundry list of caveats and restrictions.


>It may be completely unreliable but even a toaster that always burns the bread is called a toaster

This argument is leaning toward the ridiculous.

I think only you and Elon Musk consider a "greater than zero chance of making it to your destination without intervention" to be self-driving.


And I think you're being ridiculously pedantic if you think that a list of caveats and asterisks in the fine print means that average Joe B. Motorist doesn't view the Autopilot/Summon/etc features as some degree of "self driving".

Musk has good reason -- he's been selling an expensive "full self driving" package for a couple years and in order to deliver he needs to redefine the term. He's already working hard on that.

Car engineers know what levels 1-5 are. Levels 1 and 2 are basic assist - cruise control and the like. Level 3 means the car drives but the driver monitors everything for issues the car can't detect. Levels 4 and 5 mean you can go to sleep: at level 4 there are some situations where the car will wake you up and, after you get a coffee (i.e. there is no need for an instant takeover), you drive, while at level 5 the car will drive in anything.

BTW, I would very much like to see progress in optical flow because I could really use it for one of my projects.

If you know any good paper that tries a novel approach and doesn't just recycle the old SSIM+pyramid loss, please post the title or DOI :)


There is a 40-50% drop in precision with state-of-the-art results if test images are ill-formed. The ImageNet dataset used is far from ready for real-world use cases. A bunch of IBM and MIT researchers are trying to fix this - https://objectnet.dev/

As in, "only works in great visibility on a perfectly spherical road"? That does seem an appropriate summary.

> I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

I disagree; I saw such an awful number of bugs in the ML code accompanying papers that I now take for granted that there is a bug somewhere and hope that it does not impact the concept being evaluated.

(Here, having everyone use Python, a language that will try its best to run whatever you throw at it, feels like a very bad idea.)


> Granted, we're not quite ready for self driving

If that were the case, self-driving cars wouldn't be on the road. I don't think we should aim for perfection; perfection will come. We should be looking for cars that make fewer errors on average than humans; once you have that, you can start putting cars on the road and use data from the fleet to correct the remaining errors.


Humans have an average of 1 fatality per 100 million miles driven. No one is anywhere close to 100 million miles without intervention.

Are there any fully autonomous cars on public roads with no driver that can intervene? Seems like only maybe in tightly constrained situations are we ready.

I don't think I mentioned FULLY autonomous cars; my point was that something doesn't have to be perfect before we start using it, but I probably didn't express myself correctly.

I think that the necessity of intervening drivers atm indicates that we aren't at that point yet, even if that point is far from perfection, and also that the reason any self-driving cars are on the road is because of the fairly loose but significant requirements from regulation. We might be at that point in otherwise very dangerous situations, like if I was very tired or drunk, but otherwise I don't know that I'd have so much faith in software engineers to completely control my car.

I mostly agree with you that there are many false positives in the research papers. Still, you shouldn't outright dismiss the possibility of you not implementing their models correctly.

I was reviewing the authors' own source codes.

It’s been a while for me, but I’m pretty sure there are techniques for approximating gradients for step functions, although the paper may not have mentioned them (perhaps for secret sauce)

In the case I evaluated, the authors later admitted to using supervised training for initializing their supposedly unsupervised learning methods.

Also, I was reviewing their public source code release and there was no approximation. That part of their loss function had simply never worked, but they had not noticed prior to the release of their paper.

And due to them training slightly differently from what they described in the paper, the AI worked competitively well nonetheless.


Yes. Current self-driving might be safer than the average driver, but what most people forget is how unsafe it is when it fails. It fails harder than most average drivers and even bad drivers.

In my eyes self-driving should still be called driving-assistance.


Can you cite any sources that AI fails harder? I'm not exactly sure what you are referring to.

The dead Uber pedestrian where the AI dismissed the data as faulty and did not slow down at all.

I think every human would slow down if they see things that they cannot explain. An AI will not.

It's basically the same problem as when an image recognition AI is 99% sure that the photo of your cat shows guacamole.

Current AIs do not have a concept of absolute confidence; they only produce an estimate relative to the other possibilities that were available during training. That's why fully novel situations produce completely random results.
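
A toy illustration of the point (my own sketch, not any particular vision model): a softmax spreads 100% of its belief over the classes it knows, so a completely out-of-distribution input can still get a very confident-looking label.

    import numpy as np

    def softmax(logits):
        e = np.exp(logits - logits.max())
        return e / e.sum()

    # Pretend the network only knows three classes and sees something it was never
    # trained on; the raw scores are all mediocre, but one is slightly higher.
    classes = ["car", "bicycle", "vegetation"]
    logits = np.array([1.2, -0.5, -0.3])      # arbitrary out-of-distribution scores
    for c, p in zip(classes, softmax(logits)):
        print(f"{c}: {p:.0%}")
    # The "winner" gets ~70% confidence even though none of the classes apply.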


> dead Uber pedestrian

Elaine Herzberg was in dark clothing crossing a dark street well away from any crosswalk or street lighting. Would a human driver have performed better? From the footage I saw she was nearly invisible, I would have hit her too.

https://www.youtube.com/watch?v=ufNNuafuU7M

This was not a hard fail for the AI.


The exposure time and the dynamic range of the sensor affect the visibility of the person in that video - it is very likely that a non-distracted human would have performed better.

The vehicle was equipped with both radar and lidar. The victim was detected as an unknown object six seconds prior to impact, and at 1.3 seconds prior to impact, when the victim entered the vehicle's path, the system determined that emergency braking by the driver was required; however, the (distracted) driver was not alerted to this.


> the system determined that emergency braking by the driver was required, however the (distracted) driver was not alerted of this.

Why would the system notify the driver that emergency braking was required instead of simply braking?


"the nascent computerized braking technology was disabled the day of the crash"

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg


The footage shows how the car's cameras saw the accident. Human eyes have much greater dynamic range than that, so I wouldn't assume that a human driver would not have performed better. Something appearing pitch black in this footage could be well recognisable to you. This car's lidar also failed to recognise the pedestrian, so if this isn't an AI fail then I don't know what is.

This is absolutely a fail, a woman died. The question is whether or not this incident is an example of "fails harder than most average drivers" or a hard fail.

With the sensor array available to it the car should have done better, no question.

But to make the claim "fails harder" I would be looking for a clear cut situation where a human would almost definitely have outperformed the AI.

Human eyes do have miraculous dynamic range so we would likely see more. Can we say with 90% certainty that a human would have saved the situation?


"There's something out in front of me, an unclear shape right in my path, relatively far, 6 seconds out. I will drive straight through it instead of slowing down, because...[fill in your Voight-Kampff test response]". Well? Is that at least 90% human?

Moreover, try this dashcam video: https://youtu.be/typj1asf1EM It's 10 seconds long, and makes the pedestrian look almost invisible except for the soles.

However, when I took that video, both the crossing pedestrians were clearly visible, not vague shapes that you only notice when you're told they exist. So much for video feed fidelity.


> Can we say with 90% certainty that a human would have saved the situation?

Yes, given the misleading nature of the dashcam video I think we can. This was not a pitch-dark road lit only by headlights where an obstacle "appeared out of nowhere". This was a well-lit main street, with good visibility in all directions. An ordinary human driver would have had no problem identifying Elaine as a hazard and taking the appropriate avoiding action, which was simply to slow down sufficiently to allow her to cross the road.

The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.


Based on the evidence you have put forth your conclusion is logical and reasonable. You have convinced me that my statement was in error.

> The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.

"According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review"

She was looking at a device, yes, but not her phone.

Uber put one person on a two person job, with predictable results.


After the crash, police obtained search warrants for Vasquez's cellphones as well as records from the video streaming services Netflix, YouTube, and Hulu. The investigation concluded that because the data showed she was streaming The Voice over Hulu at the time of the collision, and the driver-facing camera in the Volvo showed "her face appears to react and show a smirk or laugh at various points during the time she is looking down", Vasquez may have been distracted from her primary job of monitoring road and vehicle conditions. Tempe police concluded the crash was "entirely avoidable" and faulted Vasquez for her "disregard for assigned job function to intervene in a hazardous situation".

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg#Distr...


Fair enough, you're right, she was likely looking at her phone.

The rest of my point still stands though.


This isn't one of the actual driving cameras; this is a shitty dashcam with variable exposure, with the footage then heavily compressed. It is not used at all for the self-driving.

Just adding this in case you think we should somehow give Uber the benefit of the doubt here. They released footage from a pinhole dashcam sensor that is not used by the system, knowing full well it would be pitch black and send the ignorant masses into a "she came out of nowhere!" chant.


LIDAR would have picked that up dead easily.

Just like LIDAR would have picked up https://www.extremetech.com/extreme/297901-ntsb-autopilot-de...

And just like LIDAR would have picked up https://youtu.be/-2ml6sjk_8c?t=17

And just like LIDAR would have picked up https://youtu.be/fKyUqZDYwrU

And just like LIDAR would have picked up https://www.bbc.co.uk/news/technology-50713716

These accidents are 100% due to the decision to use a janky vision system to avoid spending $2000 on lidar; and that janky vision system failing.


"Brad Templeton, who provided consulting for autonomous driving competitor Waymo, noted the car was equipped with advanced sensors, including radar and LiDAR, which would not have been affected by the darkness."

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

The car had LIDAR.


It had lidar and ignored an unknown object it was tracking. If that's not damning for the whole field, I don't know what is.

Yep, and it detected Herzberg in the roadway with plenty of time to spare.

"the car detected the pedestrian as early as 6 seconds before the crash" [...] "Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”" [1]

'Oh well it was very dark' is not a factor in the crash that killed Herzberg

[1] https://techcrunch.com/2018/05/24/uber-in-fatal-crash-detect...


It’s $75,000 USD for the LIDAR sensor, a far cry from $2,000.

I think that was the price for Velodyne's 64 laser LIDAR. They've discontinued it and now the top of the line is the Alpha Prime VLS-128 which has 128 lasers and is ~$100K.

There are many other cheaper LIDARs, even in the Velodyne lineup, but they are less capable.


The car had LIDAR. It wasn't an equipment failure, it was a failure on the part of the programmers. They had programmed in a hard delay to prevent erratic braking, and the system was programmed to dump data whenever an object was re-categorized. The system detected the person in the road as an unknown obstruction and properly detected that it was moving into the road, but it re-categorized that obstruction 4 times before correctly identifying it as a person. By that point, the velocity and LIDAR data had been dumped because of the re-classifications and the car only had <2 seconds to stop.
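
Not Uber's actual code, obviously, but a minimal sketch of that failure mode: if the tracker throws away an object's history every time its class flips, it never accumulates enough observations to estimate velocity, so any decision that depends on velocity keeps getting deferred.

    # Hypothetical sketch of "reset history on reclassification" (not Uber's code).
    class Track:
        def __init__(self):
            self.positions = []   # (timestamp, x) observations for this object

        def update(self, t, x, new_class, old_class):
            if new_class != old_class:
                self.positions = []          # the bug: history dumped on class change
            self.positions.append((t, x))

        def velocity(self):
            if len(self.positions) < 2:
                return None                  # not enough history to estimate motion
            (t0, x0), (t1, x1) = self.positions[0], self.positions[-1]
            return (x1 - x0) / (t1 - t0)

    track = Track()
    classes = ["unknown", "vehicle", "other", "bicycle", "pedestrian"]
    for i, cls in enumerate(classes):
        prev = classes[i - 1] if i else cls
        track.update(t=float(i), x=float(i), new_class=cls, old_class=prev)
        print(f"t={i}s class={cls:10s} velocity={track.velocity()}")
    # Every reclassification wipes the history, so the velocity estimate never
    # materializes until it is far too late to brake.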

Seriously, that is hopeless. It is worse than I'd expect from a university project.

Nice story... if only it matched the evidence. The algorithm detected the person 6 seconds before the crash, but as it didn't match any of the object classes conclusively, it took no action. You read that right: "there's something at 12 o'clock. Bike? Person? Unknown? Who cares, proceed through it!" If that's not a hard fail, IDK what would be.

That's not what happened. The AI was not programmed to drive through anything. It was incorrectly programmed to dump previous data when the categorization changed. It correctly identified at each point that it was meant to slow down and/or stop but, by the time it determined what the obstacle was, the previous data had been thrown out and it didn't have enough time to stop properly. In your example, it was more like "There's something at 12 o'clock. Bike? Person? Unknown? Stop!!" just before actually hitting the person.

The car did "see" an obstacle for over 6 seconds and did not brake for it, now someone is dead. You are haggling over semantics to make it look like this did not happen and/or this is not a bug. Atrocious.

(Or, more charitably, "oops, somebody forgot that object persistence is a thing" does not excuse the result)


What? That's not at all what I'm doing and you're being extremely disingenuous to suggest that. I'm simply correcting misinformation. The car wasn't programmed to drive through anything. It was programmed to throw away information. Either way, it's an atrocious mistake and I've even said, elsewhere in these comments, that the people responsible for that code should be held liable for criminal negligence. There's no need to lie about my point or my position to defend yourself. That's just silly.

I have misunderstood you then, and I apologize.

Then I forgive you and I'm glad we see eye-to-eye on this. Everyone should be appalled at Uber's role in this and their response along with the lack of repercussions for them.

The video released by Uber was extremely misleading. Here is a video on YouTube of the same stretch of road taken by someone with an ordinary phone camera shortly after Elaine’s death: https://www.youtube.com/watch?v=CRW0q8i3u6E

It’s clear that a) the road is well lit and b) visibility is far, far better than the Uber video would suggest.

An ordinary human driver would have seen Elaine & taken evasive action. This death is absolutely Uber’s responsibility.


> An ordinary human driver would have seen Elaine & taken evasive action.

Looks like this was a hard fail for the AI then. We can say with better than 90% certainty that a human would have saved the situation, probably would have stopped or avoided easily. My mistake.


"May have seen" is more appropriate as every day pedestrian get killed on well lit road by human drivers.

The assumption was for an ordinary driver: the expectation is that, given sufficient lighting, the vast majority of drivers would see and avoid a pedestrian. Most of the millions of daily pedestrian-vehicle interactions go by without incident, one or the other party giving way, so this would be the normal expectation for an ordinary driver.

We can reasonably assume that pja is aware of the existence of abysmal drivers and fatal crashes that should not have happened. I doubt their intent was for "would" to be interpreted as "100%".


Which is also true. This is perhaps the underlying issue: "we expect cars to be safe, while also expecting to drive fast in inherently unsafe conditions." In other words, the actual driving risk appetite is atrocious, but nobody's willing to admit it when human drivers are in the equation. SDVs are disruptive to this open secret.

Some humans drive blackout drunk. You overestimate our competence.

Well, for example, all the YouTube videos that show Tesla drivers having to take over the wheel when the system fails.

Those failures include driving straight into barriers.

Other systems like OpenPilot show the same.

When it fails you better take control of the wheel or you will crash hard.


For clarity, I personally mistrust AI driving and wouldn't be comfortable using it, but the question I have is more along the lines of - if you take the incidence of serious driver error (e.g texting and crashing, speeding and sliding off a road, falling asleep etc) does that happen less often than autopilot going nutso for the demographic driving it? Failing hard seems very possible for both, so stats backing up that AI fails hard regularly seem applicable.

I believe he's alluding to the nature of the accidents. They're high intensity events which are more likely to be fatal (speeds were ~70mph). They're not fender benders when the autopilot fails.

An auto-pilot that drives above the speed limit ought to indicate its scope to you.

Still seems like people treat auto-pilot like auto-drive and die as a result.

That's not a tech fail imho


But isn't that what autopilot is used for? High-speed highway traffic? I don't trust AI cars yet but I'd like to know my instinct is true on this and not just my natural inclination to avoid unfamiliar tech.

These are high-risk areas; if autopilot is "failing hard" with a regularity equal to or higher than normal, then this would be good to demonstrate with stats. Guessing Tesla doesn't really release that info?


Not classifying a fire truck stopped on the highway as something you need to avoid is a good example.

https://www.extremetech.com/extreme/297901-ntsb-autopilot-de...


Why on earth am I getting downvoted so hard? It was a genuine question.

> Current self-driving might be safer than the average driver

Citation needed, as far as I know this is not true at all.


Obviously take this with a grain of salt given the source, but... https://www.tesla.com/VehicleSafetyReport

"In the 4th quarter, we registered one accident for every 3.07 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.10 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.64 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles."

Yes, people disengage Autopilot (or it does so itself), but it is at least plausible to say that in this like-for-like comparison of drivers and vehicles, self-driving is comparable in safety.
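
Putting the quoted figures side by side makes the comparison problem visible; a quick back-of-envelope (the ratios follow directly from the numbers above):

    # Miles per reported accident, from the quoted Q4 figures.
    miles_per_accident = {
        "Autopilot engaged":           3_070_000,
        "Active safety features only": 2_100_000,
        "No Autopilot, no features":   1_640_000,
        "NHTSA US average":              479_000,
    }

    baseline = miles_per_accident["NHTSA US average"]
    for mode, miles in miles_per_accident.items():
        print(f"{mode}: {miles / baseline:.1f}x the national baseline")
    # ~6.4x, ~4.4x, ~3.4x, 1.0x

Even Teslas with everything switched off come out roughly 3.4x "safer" than the national average, which hints at how much of the gap is fleet age, driver demographics and the kind of roads Autopilot gets used on, rather than the software itself.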


I don't think it is at all reasonable to assume this. Autopilot disengagement is a major driver of the "Tesla Autopilot safety" numbers.

I don't know what the disengagement rate is for Tesla, but we do know that in early 2019 Waymo's was roughly one every 11k miles driven[0]. Given that Waymo uses lidar and is generally considered to be closer to actual autonomy than Tesla, this speaks poorly of actual Autopilot safety.

[0] - https://9to5google.com/2019/02/13/waymo-self-driving-disenga...


> Current self-driving might be safer than the average driver,

So far there isn't any evidence for this assumption.


If you compare the number of accidents per km, I'm pretty sure the numbers for self-driving are much lower than for human drivers.

When a self-driving car is involved in an accident, it's in the news all over the world.

Human drivers kill or get killed every day, in every country.


Humans have about 1 fatality per 100 million miles; self-driving is nowhere close to this.

The statistics for SDVs have an issue which might invalidate them: "fatality per miles driven" doesn't take disengagements into account: how do you even do that, meaningfully? "Fatality avoided because human driver stepped in - another triumph for autonomous driving"? That doesn't make much sense...
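
A toy illustration of why counting is hard here (the numbers are entirely made up, purely to show the mechanism):

    # How disengagements hide risk in naive per-mile statistics.
    autopilot_miles = 1_000_000
    crashes_on_ap   = 1      # crashes while the system was engaged
    human_saves     = 20     # times the driver took over and avoided a crash

    naive_rate  = crashes_on_ap / autopilot_miles                   # 1 per million miles
    upper_bound = (crashes_on_ap + human_saves) / autopilot_miles   # if nobody had intervened

    print(naive_rate, upper_bound)   # 1e-06 vs 2.1e-05 -- a 21x gap

The "true" unsupervised rate is somewhere between those two numbers, but per-mile stats as usually reported only ever show the first one.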

It makes sense in that, even in the real world, the real humans can be counted on to pay attention and intervene to some extent. That's an important counterpoint to the vivid thought experiment of "how could you possibly expect someone to pay attention after hundreds of hours of flawless machine operation?"

Unless you're talking about getting rid of the steering wheel and deploying the current system as Level 5. In that case, yes, interventions should count against it.


I think the statistics to compare self-driving miles vs human driven miles are quite tough to judge.

Tesla was criticized quite a bit at one point for comparing deaths per Autopilot mile to deaths per all motor vehicle miles. This was a bad comparison because motor vehicles included motorcycles, as well as older, poorly-maintained cars, etc.

Then Tesla released a comparison between Autopilot miles in Teslas and human-driven miles in Teslas where Autopilot was eligible to be engaged. This felt like a much more fair comparison, but Teslas are lenient about where Autopilot can be engaged - just because the car will allow it doesn't mean many people would choose to do so in that location, so there might be some bias towards "easier" locations where Autopilot is actually engaged. There's also the potential issue of Autopilot disengaging, and then being in an accident shortly afterwards.

This is morbid, but I also wonder about the number of suicides by car that are included in the overall auto fatality statistics. If someone has decided to end their life, a car might be the most convenient way (and it might appear accidental after the fact). That would drive up the deaths-per-mile stat for human drivers, but makes it tougher for me to decide which is safer - Autopilot driving or me driving?


So when I see multi-car pileups on the motorway with cars that are torn-up husks (actually I've seen totalled cars just on inner-city junctions where the speed limit is supposedly 50 kph), is that failing less hard than self-driving?

I’m unconvinced by all of the AI hype. But I will say that just last week, a human-driven car snapped a pillar outside of my office and bent a bike rack into a mangled mess. The driver was unconscious, with a drug needle and empty beer can on the passenger seat.

So, the hard failure for humans is pretty bad, too, just different. I suspect there’s little overlap on a Venn diagram of the hard failure modes for AI and humans.


Yeah I have a similar feeling...but I am willing to try it. Just aware that there are probably corner cases it can't handle and generally curious about what the behavior is.

For instance, I was driving on autopilot on a section of 101 where they repainted the lanes. I let autopilot do its thing...but I closely observed and kept my hand on the wheel and foot on the brakes. Lo and behold...the car positioned itself right in the shoulder and was driving towards no-man's land. If someone wasn't paying attention in this situation, it would have been catastrophic.


Does it not feel a little crazy to you that you're willing to put your life into the hands of a machine that you know is capable of, and has already tried, killing you?

I can't think of any time in my life where I've almost driven my car into a no-man's land even after thousands of hours of driving, and yet every Tesla owner I know has half a dozen stories about the time Auto Pilot tried to steer them into a barrier or took a corner way too fast or suddenly braked for no reason.

I totally get the appeal of a Tesla, holy shit that acceleration is amazing, but Auto Pilot just does not seem worth it at all.


As a driver of many decades, and having survived quite a few of my own stupid mistakes, I think this is also bias. As long as I had some direct input into the situation, I saw such near-misses as the inherent risks of driving: I should have had the wheel geometry checked, I should have replaced the lightbulb, I shouldn't have driven on old tires, I should have checked the mirror, I shouldn't have overtaken there - and I continue to use cars, despite dozens, perhaps hundreds of near-misses, or even minor accidents: "yeah, we almost died today - but didn't, that's just a fact of driving, not a suicide attempt".

Whereas once you feel that all agency is stripped from you, it's all someone else's fault, especially if the mistakes feel alien (as in "no human would err in this specific manner").


Same here, always get those puzzled looks from non-technological friends when I am so picky about new tech.

I would almost agree with you, but please indulge with me in the following thought.

Let's say that you can go to work on foot or by bike.

Let's also say that on foot your probability of getting killed is 1 in 1,000,000, and the commute takes 20 minutes.

On a bike, commuting takes only 10 minutes, but the probability is up to 1 in 200,000. Five times more.

You end up deciding to use the bike every day to go to work.

In this example, you decided to trade comfort (faster commute) with a slightly higher probability to end up dead.

Imagine now you need to decide whether to commute in your Tesla with or without autopilot.

Let's assume (I might be wrong) that Tesla's autopilot increases your chances of getting killed. (for simplicity, let's ignore the consequences for other people on the road).

Would you still trade comfort (not needing to drive) with a slightly higher probability to die?
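
Putting rough yearly numbers on that trade (just extending the per-trip odds assumed above; the commute count is an assumption):

    # Yearly view of the foot-vs-bike trade-off from the probabilities above.
    trips_per_year = 2 * 230            # two trips a day, ~230 working days

    p_walk, minutes_walk = 1 / 1_000_000, 20
    p_bike, minutes_bike = 1 / 200_000,  10

    annual_risk_walk = trips_per_year * p_walk                              # ~0.046%
    annual_risk_bike = trips_per_year * p_bike                              # ~0.23%
    hours_saved      = trips_per_year * (minutes_walk - minutes_bike) / 60  # ~77 hours

    print(annual_risk_walk, annual_risk_bike, hours_saved)

So the cyclist is implicitly valuing roughly 77 saved hours a year as worth an extra ~0.18 percentage points of annual fatality risk. The Autopilot question is the same calculation, just with much murkier probabilities.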


These sorts of analyses get complicated fast. Walking is riskier in part because you're more exposed to the "killer self-driving cars", but also people with preconditions will walk. Like, if you're drunk you'll walk home, if you're unfit you're more likely to walk than cycle, etc., people walking probably get mugged more often, ...

But OP is making the opposite assumption here - that cycling is riskier. Which, keeping all other factors constant, seems intuitively correct to me since you go at a higher speed and you are on the road with cars rather than on the sidewalk.

As you note, you're coming from impossible premises ("assuming your actions never affect anyone else"). Do you expect to come to meaningful conclusions?

In other words, as a buyer, I do care about the occupant safety rating, you are correct in that. As a road user, I also care that other people's cars consider me as irrelevant, my potential death an externality to be amortized in the purchase price.


Paper code is probably optimized for "first to publish". It's also overfit in a non-traditional sense, since people want to beat SOTA by as much as possible. Also, the heuristic tips and tricks and autotuning you'd want in a production model would exceed the paper length 10x. And the author is motivated NOT to provide a bug-free, easy-to-put-in-production version of the code, since that would lower the $ value of their expertise. A cocktail of all the wrong incentives!

Probably the production versions of those models are suboptimal in different ways but work better in practice...


This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

Its followers see no other possible solution as acceptable.

Instead of putting our heads together to make really good driver assistance technologies and being satisfied, the darn thing needs to also drive itself everywhere, otherwise we're leaving something on the table! Until we have zero deaths, we cannot stop demanding self-driving - as if somehow AI is a magical solution that's always better than humans.

Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.

Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

No, forget it all. My car needs to drive me all by itself, everywhere I need to go, no matter how hard or impossible of a problem that is. Driver assist is boring... Level 5 is sexy, and that's what I want!


Comma.ai kind of follows the "improving lane detection until it drives you completely" line, at least according to what I get from this interview with Hotz: https://lexfridman.com/george-hotz/

He's still a bit too optimistic for my taste.


I bought in to Comma.ai at first and even started porting my car over. But I quickly realized that the whole Comma.ai crowd is way, way too cavalier with safety. One day I was driving down the freeway and saw a small fender bender occur in front of me. I looked up at my dash and it was all ripped up with wires hanging everywhere because I was working on the Comma.ai stuff. If it had been me in that fender bender, you know that insurance would definitely have blamed me for it, given that I was messing with the OBDII and radar sensors.

As cool as Comma.ai is, I really believe that their approach to allowing so much community involvement with little to no oversight is highly irresponsible.

That being said.... If they do succeed, and get some sort of government approval or oversight.... You bet I'm putting that stuff back in. It's cool AF.


IMO, flying cars will eventually arrive.

Perhaps it will be limited to specific "lanes" (it will be much more palatable to the masses if it must (like cars) keep to a limited area). But it will not need to recognize pedestrians and bikers and human-driven cars, and some standard will be introduced to allow all the self-driving cars to talk to each other.

At that point, Level 5 will be much easier, even obvious. All the effort invested in assistive driving will seem silly.


I wonder if it's possible to solve the noise problem with flying cars. Maybe use active noise cancellation at the source?

No idea if it's even theoretically possible, but would be neat.


Aren't flying cars going to be much noisier than surface cars?

Yes, that was my point. We will need some clever tech to make the noise bearable.

Joby is using some low noise props that are very effective. Quieter than most of the ridiculous motorcycles and loud cars on the 101.

If we care about noise, we should be addressing the motorcycle industry. I can’t hear a C-130 flying overhead at 1000 feet when I am in my house, but I can hear motorcycles zipping by on the freeway behind my house.


No, it's not physically possible to do any significant active cancellation of the noise caused by a rotor / propeller spinning through the air. The noise sources are too spread out (especially with multi-rotor designs) and cover too much of the audible range.

Blade shape can be tuned to an extent for minimizing noise but that also reduces efficiency (not much to spare). Larger, slower turning blades are also quieter but there are practical physical limits on size and weight.


>IMO, flying cars will eventually arrive.

Not so sure. I saw specific designs hyped in the 80s and 90s, and know of efforts hyped in the 70s as well. The reason they always fail is that the constraints of designing a car to go on land and the constraints of designing a flying machine are different. You can sorta kinda build something that does both, and people have, time and again, but it will be good at neither of those things. Well, unless we develop "antigravity" or something...

Add the possibility of huge damage caused by a failing/falling/crashing flying car, not just to the road and other cars there (like with a car) but to any building, group of people, etc. If it was a car replacement (and thus getting into one was laxer than flying a plane, with flight plans, airport checks, special licenses), it would also be perfect for suicide terrorism!

Here's a funny but insightful post I've found, hammering on the topic:

Listen to most discussions of flying cars on the privileged end of the geekoisie and you can count on hearing a very familiar sort of rhetoric endlessly rehashed. Flying cars first appeared in science fiction—everyone agrees with that—and now that we have really advanced technology, we ought to be able to make flying cars. QED! The thing that’s left out of most of these bursts of gizmocentric cheerleading is that we’ve had flying cars for more than a century now, we know exactly how well they work, and—ahem—that’s the reason nobody drives flying cars.

Let’s glance back at a little history, always the best response to this kind of futuristic cluelessness. The first actual flying car anyone seems to have built was the Curtiss Autoplane, which was designed and built by aviation pioneer Glen Curtiss and debuted at the Pan-American Aeronautical Exposition in 1917. It was cutting-edge technology for the time, with plastic windows and a cabin heater. It never went into production, since the resources it would have used got commandeered when the US entered the First World War a few months later, and by the time the war was over Curtiss apparently had second thoughts about his invention and put his considerable talents to other uses.

There were plenty of other inventors ready to step into the gap, though, and a steady stream of flying cars took to the roads and the skies in the years thereafter. The following are just a few of the examples. The Waterman Arrowbile on the left, invented by the delightfully named Waldo Waterman, took wing in 1937; it was a converted Studebaker car—a powerhouse back in the days when a 100-hp engine was a big deal. Five of them were built.

During the postwar technology boom in the US, Consolidated Vultee, one of the big aerospace firms of that time, built and tested the ConVairCar model 118 on the right in 1947, with an eye to the upper end of the consumer market; the inventor was Theodore Hall. There was only one experimental model built, and it flew precisely once.

The Aero-Car on the left had its first test flights in 1966. Designed by inventor Moulton Taylor, it was the most successful of the flying cars, and is apparently the only one of the older models that still exists in flyable condition. It was designed so that the wings and tail could be detached by one not particularly muscular person, and turned into a trailer that could be hauled behind the body for on-road use. Six were built.

Most recently, the Terrafugia on the right managed a test flight all of eight minutes long in 2009; the firm is still trying to make their creation meet FAA regulations, but the latest press releases insist stoutly that deliveries will begin in two years. If you’re interested, you can order one now for a mere US$196,000.00, cash up front, for delivery at some as yet undetermined point in the future.

Any automotive engineer can tell you that there are certain things that make for good car design. Any aeronautical engineer can tell you that there are certain things that make for good aircraft design. It so happens that by and large, as a result of those pesky little annoyances called the laws of physics, the things that make a good car make a bad plane, and vice versa. To cite only one of many examples, a car engine needs torque to handle hills and provide traction at slow speeds, an airplane engine needs high speed to maximize propeller efficiency, and torque and speed are opposites: you can design your engine to have a lot of one and a little of the other or vice versa, or you can end up in the middle with inadequate torque for your wheels and inadequate speed for your propeller. There are dozens of such tradeoffs, and a flying car inevitably ends up stuck in the unsatisfactory middle.

Thus what you get with a flying car is a lousy car that’s also a lousy airplane, for a price so high that you could use the same money to buy a good car, a good airplane, and a really nice sailboat or two into the bargain. That’s why we don’t have flying cars. It’s not that nobody’s built one; it’s that people have been building them for more than a century and learning, or rather not learning, the obvious lesson taught by them. What’s more, as the meme above hints, the problems with flying cars won’t be fixed by one more round of technological advancement, or a hundred more rounds, because those problems are hardwired into the physical realities with which flying cars have to contend. One of the great unlearned lessons of our time is that a bad idea doesn’t become a good idea just because someone comes up with some new bit of technology to enable it.

When people insist that we’ll have flying cars sometime very soon, in other words, they’re more than a century behind the times. We’ve had flying cars since 1917. The reason that everybody isn’t zooming around on flying cars today isn’t that they don’t exist. The reason that everybody isn’t zooming around on flying cars today is that flying cars are a really dumb idea, for the same reason that it’s a really dumb idea to try to run a marathon and have hot sex at the same time.

https://www.ecosophia.net/progress-and-amnesia/


Agreed on flying cars, but I wanted to point out a glaring inaccuracy in the opening that undercuts the author's broader argument:

"By 2000 or so that curve had flattened out in the usual way as PV cells became a mature technology, and almost two decades of further experience with them has sorted out what they can do and what they can’t."

In fact, PV prices have dropped DRAMATICALLY since 2000 (https://www.sciencedirect.com/science/article/abs/pii/S03014... looks at the different trends 1986-2001 and 2001-2012), as have the prices/performance of the energy storage systems needed to make them practical.

I agree that it's not a silver bullet that will solve the fossil fuel crisis all on its own (at least not in time), but it is in line with the broader improvement in renewable costs and efficiency.


>In fact, PV prices have dropped DRAMATICALLY since 2000 (https://www.sciencedirect.com/science/article/abs/pii/S03014.... looks at the different trends 1986-2001 and 2001-2012), as have the prices/performance of the energy storage systems needed to make them practical.

Can't open this, but the abstract also shares this:

"Market-stimulating policies were responsible for a large share of PV's cost decline"

This part is artificial though (subsidies, etc).


The subsidies created the scale and experience required to lower the costs; they're not included in the cost numbers. With current technology, they are cheaper without subsidies.

I read that in various places, but I'm still suspicious. There are lots of ways to hide subsidies (green tax cuts for example).

> To cite only one of many examples, a car engine needs torque to handle hills and provide traction at slow speeds, an airplane engine needs high speed to maximize propeller efficiency, and torque and speed are opposites

You can get around that by using an electric transmission. A turbine drives an alternator which drives two sets of electric motors, one for the wheels and one for the propellers. As to the rest of the post, it's attacking a straw man. I don't think people want a highway-capable car that can also fly. If you can fly, why drive on the highway?
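
The underlying relationship is just power = torque x angular speed, which is why a series-electric (or geared) drivetrain can serve both regimes from one power source. A quick illustration, with an assumed shared 100 kW power budget:

    import math

    def torque_nm(power_w, rpm):
        """Torque required to deliver a given power at a given shaft speed (P = tau * omega)."""
        omega = rpm * 2 * math.pi / 60
        return power_w / omega

    power_w = 100_000                 # assumed 100 kW budget
    print(torque_nm(power_w, 800))    # ~1194 N*m at wheel-motor speeds
    print(torque_nm(power_w, 2400))   # ~398 N*m at propeller speeds

Same power, very different torque/speed operating points - separate electric motors (or gearing) let each load run where it is efficient, which a single fixed-ratio engine driving both wheels and propeller cannot.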

A 50k to 100k VTOL 'flying car' with a maximum cruise speed of 80 MPH, maximum altitude of 10,000 feet, a range of 500 miles, room for 2+ people, and a cargo capacity of 1,000 lb including people fits most people's definitions of a flying car. Being able to move around on the ground at say 15 to 25 MPH without giant spinning blades would also be a great feature.

Oddly enough, I think we already have something close to flying motorcycles in autogyros, but the closest thing to a flying car is a vanilla small airplane, and those run you 250k new.

PS: There is even something of a jet pack alternative https://www.youtube.com/watch?v=bpwd-T2Qvbk


The question is whether the problem is one of insufficient engineering optimization or whether it requires a step-function in technology that does not exist now. It appears to me that self-driving cars are of the former type, while flying cars are the latter.

Current-resolution lidar, cameras, and radar seem to provide sufficient sensor input. The costs are too high by a long shot, but that may just be a question of getting economies of scale established. Current PC graphics hardware has sufficient bandwidth to process those sensors. I don't think you can just throw current neural net training at the problem and get Level 5 autonomy out of it - there will be lots and lots of engineering hours in figuring out what to do with that sensor data - but that's just a problem of doing many man-years of straightforward work.

Flying cars don't have adequate power from a current-gen internal combustion engine running on petroleum, and especially not enough power from lithium-ion batteries and electric motors. If you could get a power source that provided an order of magnitude or two greater power density than the best of those technologies, flying cars would be viable. Until then, no amount of engineering hours will make it work.
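
A rough way to see the power-density problem is the actuator-disc (momentum theory) estimate for hover power, P_ideal = T^(3/2) / sqrt(2*rho*A). Every figure below is an assumption picked for illustration, not the spec of any real vehicle:

    import math

    # Back-of-envelope hover power and endurance for a battery-electric VTOL.
    mass_kg         = 1500.0   # assumed gross mass of a small "flying car"
    rotor_area_m2   = 40.0     # assumed total rotor disc area (several large props)
    rho             = 1.225    # sea-level air density, kg/m^3
    figure_of_merit = 0.7      # assumed rotor efficiency

    thrust_n  = mass_kg * 9.81
    p_ideal_w = thrust_n ** 1.5 / math.sqrt(2 * rho * rotor_area_m2)
    p_hover_w = p_ideal_w / figure_of_merit          # ~260 kW to hover

    battery_kwh   = 100.0                            # assumed usable pack energy
    hover_minutes = battery_kwh * 1000 / p_hover_w * 60

    print(round(p_hover_w / 1000), round(hover_minutes))   # ~258 kW, ~23 minutes

Wing-borne cruise needs far less power than hover, but hover is what sizes the motors and the pack, and with lithium-ion energy densities the endurance numbers stay marginal - which is the power-density gap the parent comment is pointing at.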


> If you could get a power source that provided an order of magnitude or two greater power density than the best of those technologies, flying cars would be viable.

So essentially thousands of flying nuclear reactors piloted by average joes around the city.

That sounds really safe!


"But it will not need to recognize pedestrians and bikers and human driven cars"

If they want to land at some point presumably they have to avoid landing on these things?


As well as trees, birds, powerlines, antennae, and even locusts!

Flying cars might or might not happen (see other discussions here). However, they will never be more than a niche. Airplanes need far more safety space than cars. For a car you need a few dozen meters to the car's front and back, and less than a meter side to side. For airplanes it is thousands of meters in all directions, and you are limited as to height before you run out of atmosphere. So even though planes run in 3D space, there is in practice less space for them than the few roads in a city.

Humans and bikes need even less safety space than cars.

I don't think safety space is a good criterion for predicting how widespread a means of transportation will become.

A fast mode of transportation, with a large safety space requirement, may be more efficient than other modes and/or become popular.


The point is there isn't enough space for all the [single occupancy] cars to become airplanes. Speed doesn't change that because speed increases the space needed between airplanes.

Now if we change to flying buses we might be able to pull something off: an express bus that picks up people for a few stops in the suburb for 10 minutes, then flies at >150 mph downtown, is a very compelling competitor to a car and will get anybody who currently drives 20+ minutes downtown to ride if the cost is reasonable (those in closer-in suburbs will still drive). I don't think the business or environmental models work out, though.


> This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

What really bugs me, as someone working in consulting as a hands-on backend developer with a previous background in technical consulting at an accounting Big4 (student job), is the insane amount of PR, politics and marketing talk by people who have ABSOLUTELY NO IDEA what they are talking about. I witnessed politicians and C-level industry people talking out of their asses to ... idk ... drive stocks? Look smart in the face of Tesla? PR? No clue.

Self-proclaimed experts in magazines and talk shows raving about how AI is going to change everything. I had colleagues telling customers about the magnificent rise of AI and none of them could even spell "gradient descent". Backed by a law or accounting degree they KNOW that self-driving is just around the corner and they are very vocal about it while easily impressed by tightly controlled demos at some international tech fair. Everyone just seems to fall into the hype trap without a single brain-byte spent on researching the actual issues and what's most sad is actual engineers/technical people not doing their due diligence and informing themselves BUT THEN GOING OUT TO TELL THEIR NON-TECHNICAL FRIENDS ABOUT THE AUTONOMOUS FUTURE. Ugh.

I held a minor internal talk at work about self-driving cars for people generally with other backgrounds but light superficial interest due to "Tesla" and "the hype". They were surprised to see that we are likely decades away from actual Level 4 (not the marketing garbage that some companies put out) because even a slight change in weather can really fuck with all systems on the roads right now.


Yeah but speaking as someone kinda on the other side there are things like this 'actual Level 4' delivering stuff for real in Wuhan https://kr-asia.com/jd-com-uses-l4-autonomous-driving-soluti...

Fair enough it's not very good - that one just went 600m - but it's hard to argue it won't exist for decades when it exists now.

And historically, going from "sorta works but is rubbish" in info tech (e.g. early cellphones, the internet and so on) to "works well" doesn't seem to take that long. Five to ten years, perhaps, typically.


The issue is that there are so, so many edge cases to worry about. Simple example: someone decides to troll you in your self-driving car and steps out in front of it. You don't have controls in the car, so you can't go around them - you have to wait for them to move.

In reality, it seems like it would resolve quickly - you get out and yell at them, call the police if that doesn't work, etc. But it can get more sinister - criminals _already_ block the road to force drivers out of their car to rob them[1][2]. Now, if you know that might happen, you can just drive around the obstacle. Unless, of course, you're in a self-driving car where you might not even be able to get it to do a u-turn. Related issues would be areas where the practical advice is "don't stop" - not even at red lights - if you're there late at night due to the risk of car jacking[3][4] (this might be out-of-date now, to be fair). Can rules like that be encoded into a self-driving car?

OK, yes, you probably could find a way to do it. But that's almost certainly just the tip of the iceberg in terms of "ways people will fuck with self-driving cars" and "things people do that are technically illegal but still safer than the alternative." Could you solve enough of those in 5-10 years, _on top of_ making self-driving cars work in sun/rain/snow/fog/night/tornados/etc safely and consistently? I think that's very unlikely. Decades seems far more likely to me.

[1] https://abc7chicago.com/593111/

[2] (Non-EU only) https://wgntv.com/2015/03/31/robbers-set-up-fake-road-blocks...

[3] https://eu.detroitnews.com/story/news/local/detroit-city/201...

[4] https://www.reddit.com/r/AskLEO/comments/2rzsdz/are_there_an...


Have you seen the roads in Alabama? They have this weird, red asphalt and rarely any edge lines. In Louisiana they have these elevated roads over the bayou that don't seem to have break-down lanes. In Tennessee they have roads that go through mountains with shocking curves and gradients. Pot holes, broken lights & signs, weird parking lots, new construction, and seeing a small ball roll near the road and knowing that a kid might be coming after it soon.

It seems like it would take an endless list to cover every new edge case. Our technology is amazing, but I almost think the edge case is the places where autonomous driving makes sense.

The thing I trust the least is the operators they want to put in these cars. They better be completely autonomous, self-maintaining, and somehow tamper-proof. It's a really tall order, which I hope we do fulfill one day. But maybe I'm a pessimist, and they have it all figured out already.


Re robbers, carjackers and the like: in some ways you may be better off with a self-driving car, as they seem to be covered with cameras and report back to base the whole time, so the crooks would be photographed and the cops called. Already the Tesla cameras have caught a few https://www.youtube.com/watch?v=JqBWt9rRx-U

Re all the edge cases - yeah that'll take a while.


License plates can be covered (or, more often, a stolen car used) and balaclavas cost just a few dollars.

If it is known to be stolen, a hi-tech truck can be remotely ... of course there is another kind of risk. But one could just ensure that only one guy can open the container, or that the container stays secured with a PIN within a certain distance of a driver, etc.

There is always a way.


There was a movie which featured autonomously driving trucks; they were held up and robbed by bandits who put a cow in front of them, then just took the cargo while the truck was stopped.

You can certainly have all the cameras that you need but if the bad guys have their faces covered and identifying marks hidden then you're not going to be able to do much.


> There was a movie...

I'm pretty sure there is an early scene in the movie Solar Crisis[1] that plays out similarly to what you're describing. This movie was on one of the cable movie channels when I was growing up, so I got a higher-than-normal dose of it.

I don't remember a cow, though (but then this probably isn't the only sci-fi movie out there with such a scene). I think one of the characters first parked a motorcycle in the road, but the truck plowed through it. After that, they stood in the road instead, and that caused the truck to screech to a stop right in front of them, blasting a message on a PA about how they were breaking the law by impeding it.

[1]: https://en.wikipedia.org/wiki/Solar_Crisis_(film)


Yes, that's going to be very helpful to the investigators when they finally figure out what happened to the car and to your body. Might not be helpful to you, though....

There's no reason to believe that's a level 4 autonomous vehicle, other than the marketing release of the company that makes it.

Going from early cellphones to smartphones was an engineering problem. All the technology was already available and it was a case of putting it all together in a way that worked and that could be manufactured at scale and for profit.

With vehicle autonomy, the problem is that we don't know how to create autonomous AI agents yet, so we don't know how to make a car driven by such an agent. Claims of level 4 autonomy should, at this point, be treated like claims of teleportation or time travel: theoretically possible, but we don't know how to make it happen in practice.


Predicting with All-Caps confidence that autonomous driving is "decades away" is at least as indefensible as overly optimistic predictions were.

I wouldn't bet on it but I also wouldn't call it indefensible. Fully autonomous driving is a very complex problem with a very long tail. Being able to drive semi-reliably on American highways doesn't mean that you're almost done, not even close.

Another handicap for self-driving cars is that the problem is effectively harder at the start, when the majority of the traffic will still be operated by human drivers, who are a lot harder to predict reliably than other autonomous vehicles.

Beyond that, I strongly believe that software engineering is still ridiculously immature and unable to deliver safe, reliable solutions without strong hardware failovers. We have countless examples of this. We simply don't have the maturity yet, we're still figuring out what type of screwdrivers we should use and whether a hammer could do the trick.


Having driven in Canadian winters, I honestly agree that reliable autonomous driving in inclement weather is indeed decades away.

The visual recognition needed is well beyond the systems today.


Isn't that just "image de-obfuscation" though? Seems like narrow AI will be able to out-class humans at that in no time. You can generate as much training data for that as you want. Doesn't really require human-type intelligence. Though I guess you might mean that the obfuscation makes the edge cases even harder, which makes sense.

I have a car (Honda Pilot) where the company decided to make the lift gate window too high, probably to accommodate mounting the spare tire inside the cabin. This design makes you dependent on the rear camera for most reverse use cases.

It probably made a lot of sense in the Southern California design center. In Upstate New York, that camera is covered in road spray and salt, and my brain cannot see anything or act effectively without cleaning it. Even after doing that, it will get dirty again after a few minutes of driving.

I'd guess that at least a few dozen people will be hurt by this decision.

Take this problem to the self-driving car and things get even worse. You’re going to have a lot of problems with sensor effectiveness that cannot be magically fixed with software.


It sometimes seems as if half the purpose of assistive driving systems is to compensate for the absolutely horrible sight lines in a lot of newer vehicles.

And I have heard rumors of lobbying going on to get the requirement for rear view mirrors dropped when video feeds are provided to replace the functionality.

Here's my go-to example about the challenges of driving in a Canadian winter.

I was waiting for my bus to work one morning after a large snowfall. The snow clearing crews were hard at work, but the street was effectively blocked by piles of snow, men, and machines.

Yet, my bus arrived on time, *driving down the sidewalk*.

I am not sure how any self-driving system could have figured that out :)


And if it did, hollow sidewalks are a thing in some places, so...

There's like fifty caveats that go along with this statement, but this is the internet so I'm just going to skip all that.

Something like half your human brain is devoted to visual processing.

There's a tendency to think that things like language is what makes the human brain special, or our ability to plan or think abstractly, and we talk about things like "eagle eyes", but the truth is humans are seeing machines with most everything else as an afterthought.

The reason your cat will attack paint spots on glass for hours and flips the hell out about laser pointers is that its visual system is too simple to distinguish between those and the objects that actually interest it, like insects.

Vision is not the easy part of AI.


citation for "half your human brain is devoted to visual processing"?

Since the internet places no weight on things like "common knowledge to anyone in the field" or "I took a bunch of classes on the brain in college", here's a random quote from someone at MIT: http://news.mit.edu/1996/visualprocessing

> Vision is not the easy part of AI.

I think it is, actually. Going from raw pixels to objects is the (relatively speaking) easy part. It's the next part (using that for planning and common-sense reasoning) that's the hard part. Machine learning has already advanced past humans in this regard for many classes of problems - which is part of the reason why captchas are getting so hard.

This was several years ago, hence the move away from obfuscated text (which was getting harder and harder to read): https://spectrum.ieee.org/tech-talk/artificial-intelligence/...

I'd be surprised if basic perception tasks used as human-ness tests last more than a few more years.


I think autonomous driving advocates would do well to look at the history of computer handwriting recognition, an easier technical problem with lower consequences that received significant investment over decades. But it has never gotten good enough to succeed in the marketplace against alternatives.

Why? It never exceeded consumer expectations, which are extremely high for automated systems. Even a correctness rate of 99.9% means multiple errors per day for most people. Consumers expected approximately zero errors, despite not being able to achieve that themselves, sometimes even with their own handwriting!
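
The arithmetic behind that is brutal. With an assumed daily writing volume (the character count is a made-up but plausible figure):

    # Errors per day at a given per-character accuracy.
    accuracy      = 0.999
    chars_per_day = 4000        # assumption: a few pages of handwritten notes

    errors_per_day = chars_per_day * (1 - accuracy)
    print(errors_per_day)       # ~4 visible, annoying errors every single day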

Because handwriting is made by humans, there is some percentage of it that simply cannot be reliably recognized at all. But people hold that against computers more than other people because computers are supposed to be labor saving devices.

Likewise, roads are made by people and other cars are driven by people, so a self-driving car will never be able to be perfectly safe. But that is essentially what advocates are promising.

That’s especially true if people expect the same level of convenience, especially in terms of time. People speed and take risks all the time when driving, in the name of saving time. I think it’s likely that an autonomous car optimized for safety would also be a car that just takes a lot longer to get anywhere with.

Speed matters. It’s a big reason we all use touch keyboards on our phones instead of handwriting recognition.


Handwriting recognition works very well if you can capture the actual strokes used while writing and not just the end result.

That's great, but I would wager that nearly 100% of all writing ever done in human history was done without capturing the strokes while writing. Therefore, while this added accuracy is great, it is virtually useless for most written work.

Isn't that a bit irrelevant? If we are talking about patterns that work well for the user, clearly writing everything traditionally and then going back and taking pictures of everything is a cumbersome process. Writing on iPad or similar is clearly the medium in which this shines, at which point you do capture the strokes.

That only works if you can assume that everybody using the system you're designing has access to the underlying technology. Sure, if you're designing some new system (like an autonomous vehicle on a closed-loop, controlled system, or a system purpose-built to perform digit recognition as it is written on it - but why wouldn't you just have the user directly input on a keypad?) then you'll get a better result. But in the general, real-world case (an autonomous vehicle on city streets with other vehicles / recognizing digits from scanned input without the stroke data) your special-case optimizations are impossible and for all practical purposes do not apply, so appealing to their assistance in increasing accuracy doesn't actually do anything to help the system perform better.

While that's true, having the ability to capture strokes now allows machine-learning models to better determine what potential strokes were used to make a specific shape. Just because we didn't have it for everything doesn't mean it's not useful for adding accuracy to the past.

I doubt it. My handwriting is at least average neatness, and stroke based recognition systems still make multiple errors per sentence. It's just a frustrating waste of time and now that we have touch screen keyboards there's no longer any point to handwriting recognition.

The only handwriting recognition system which ever worked correctly with a low error rate was Palm Graffiti. It forced the user to learn a new shorthand writing style designed specifically to avoid errors.

https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)


While we're sharing anecdotes, my handwriting is remarkably terrible, and the iPadOS Notes app does a good job of transcribing it.

I think this supports the grandparent's point about using the actual strokes, including angle and azimuth, to reconstruct intent.

I was also fairly proficient with Graffiti, back in the day, but I consider that an input method, not handwriting recognition. I was facile with T9 as well.


The secret to Palm Graffiti's market success was that it hacked user expectations.

Because it asked users to learn a new way of writing, when the recognition failed, users were more likely to blame themselves, like, "Oh, I must have not done that Graffiti letter right, I'll try again."

But when it came to recognizing regular (i.e. natural) handwriting, users believed inherently (i.e. somewhat unconsciously) that they already knew how to write, and the machine was new, so mistakes were the machine's fault.


So by analogy, autonomous driving will work very well if we can capture all the roads as they're being built?

Analyzing the individual strokes works flawlessly with Chinese and Japanese, where the stroke order is fixed (occasionally with a few variants). If you have the stroke information and the user writes correctly you can recognize characters that even humans would fail to read from the finished glyphs.

> "handwriting recognition... has never gotten good enough to succeed in the marketplace against alternatives."

There is little market demand for handwriting recognition, and thus little active research goes into it. Not because it is a difficult or problematic technology, but because better alternatives exist that make it irrelevant.

Even if someone were to come up with an absolutely perfect handwriting recognition system, most people wouldn't use it. Why? because the advent of multi-touch screens means that most people can type much faster than they can hand-write anyway.


This is all true today, but it was not true in the past. There was a time in the computing industry when everyone believed that pen interfaces with handwriting recognition would be a crucial enabler for highly mobile computing. Both Apple and Microsoft built major product launches around this idea in the early 1990s.

Oh, absolutely. I remember that era well. But I'm talking about today, of course.

What changed was that touch screens became better. The old resistive touch screens were clunky, slow, inaccurate. You could put a keyboard on them, but the lag and poor accuracy meant you couldn't really touch type comfortably. Then capacitive multitouch came along and made on-screen keyboards much more responsive and accurate.

But also, Blackberry and (pre-smartphone) phones with SMS made people more comfortable with the idea of using keyboards for text entry on handheld devices. And crucially, auto-correct and predictive text entry covered up for accuracy errors and made text entry by keyboard even more attractive.


> [comparison to hand writing recognition]

Excellent point, stealing that. I work in automotive as an engineer, have traveled around the world, and think the realistic possibility of self-driving cars without major changes in how we make roads, everywhere, is extremely low.


My error rate in recognizing handwriting would be much higher than that of a free, open-source recognizer. I am very bad at recognizing other people's handwriting. I don't understand your need for 0 errors. Real life is full of errors and imperfections, there is chaos everywhere. Seems like you expect the unachievable.

On what grounds?

>...without a single brain-byte spent on ...

This is a glorious Gibson-esque turn of phrase that I hope goes down in history and is picked up in common vernacular.

Back on topic: The more I know about technology the less I want it or trust it to work. I assume this is similar to Pilots not wanting somebody else to fly, or surgeons not wanting to go under the knife.

We know, that we don't know enough about autonomous driving. Instead of the 'unwashed masses' saying "gee-whiz that is cool", we think "this isn't ready yet!"


We know that, but also we ought to know that pushing through the hard problems by rolling out systems operating world-wide and interacting with real people is the only way forward toward improved safety, reduced mental slavery to the menial task of driving, and the time and relationship freedom that comes with it. So yep, I let my Tesla do lots of driving. And I do it knowing that the brake pedal and the steering wheel are both manual controls that I can use to override the car at any time, no matter what. That is why I feel so comfortable.

And you know what? It's wonderful. It really is. I am free to look around and see other people on the freeway. They look so bored. So tired. So used up. Meanwhile my wife and I, we turn on the karaoke machine and sing our way to the destination. It is absolutely the way to go. You can take my Tesla and its AP functions out of my cold dead hands. Maybe that's how I go, and I'm alright with it.


I wouldn't feel comfortable in a vehicle that may, or may not react to some situation on the road ahead. Because one day it will decide to do $something_really_stupid and you won't be able to react in time/correctly because you'll be too busy singing karaoke. With some extra bad luck, you'll die/kill someone else because of that.

Yeah, but is that a bigger risk than some dumbass in a truck blowing a red light and plowing into my door? Based on where Tesla is at I don't think it is. They're already reasonably below the median risk we all accept for driving at all. After all, my butt can feel that the car is doing something seriously unexpected way before my eyeballs can. So I'm going to keep singing, with my thumb gently on the wheel and one eye on the road.

With all due respect, that is just as likely to happen with human drivers as well.

> is the insane amount of PR, politics and marketing talk by people who have ABSOLUTELY NO IDEA

I have seen this too much. People who have a deep understanding of how things work are busy learning and doing. People who can spend their energy on politics and marketing can do it because they are not busy learning and doing. I wish there were a solution for this. Corporate IT departments are particularly full of this.


I wonder why the self-driving car hype folks don’t simply lobby to make more trains.

(I mean the ones who have been successfully marketed to here, not the marketers).


We've had self-driving vehicles for 30,000 years. I've thought about just getting horse brains hooked up - but people probably don't want their cars spooked by plastic bags.

How do you know the people bullish on self driving cars haven’t also lobbied for trains? There’s a lot more standing in the way of trains being built in the US, unfortunately. To me, self driving cars are just the second-best thing, and one that might actually happen. It’s a bit too late for trains to take over in most of the country.

You think that's bad? Listen to sales types talk about "blockchain" sometime.

note that "gradient descent" isn't AI either. it's more computational linear algebra: a heuristic for numerical methods used to solve (usually to a local extrema) systems of equations without direct analytical solution(s).

Sorry for the out of the blue unrelated reply, but I am currently stuck working as a technical consultant at one of the Big4, how did you make your way out of this? I feel like I am learning about antiquated technologies, and that pivoting to a software engineering job more aligned with my interests, skills, and sanity feels harder every day. Even something like you describe you are doing now sounds much better than what I'm doing now. Also I know exactly what you are talking about with the marketing and PR talk it is insane.

Some folks I know joined code camps / Lambda School-like programs and got out. Alternatively, if you work on clients, an easy way out is to accept a full-time job at any of the clients you enjoy/tolerate working with.

Sorry, late reply.

I got out by never really getting in, I'm afraid. I worked for 1.5 years in technical consulting as a student job while getting my informatics degree.

Once I obtained my degree, I declined an offer by said Big4 firm and took another offer where I got to go hands-on with coding. I had previous coding experience which helped and then amazing colleagues who boosted my start.


Five years ago or so, video from "DARPA robot challenge" type events would easily give the lie to "superb autonomy" claims. I just found this[1] more impressive, but at its 20x playback speed. I imagine that playback at 1x could still serve as a reality-check and counterexample.

[1] https://www.youtube.com/watch?v=v6-heLIg85o


It is a religion, nothing "faux" about it. It fills the same needs, and uses the same mechanisms within the human mind as more traditional religion.

It is also completely bonkers, as it makes bold assertions about the real world, with which reality will eventually disagree, whereas older religions tended to keep their most dogmatic positions unfalsifiable (the afterlife, the soul, vague prophecies, ...)


Calling it a religion makes about as much sense as calling efforts to cure AIDS, or to vaccinate against polio prior to Salk's success, a religion. There is a clear and obvious good to achieve, one which is theoretically implementable and has predecessors for success. Reality can only disagree if the goal is outright proven impossible for some reason - say, in a counterfactual world where polio had a 5% chance of turning any adjacent molecule into more polio. Otherwise, failed attempts prove nothing: taking the spit of a polio patient and putting it in saline with a sprig of mint would spread the disease instead of vaccinating against it, but that doesn't prove a polio vaccine is impossible just because "we tried and people got sick".

It is clearly a goal, and a realistic one within a few decades even pessimistically, given that its capability is creeping upwards.


There is zero proof that autonomous vehicles can be safer than human drivers.

There’s lots of proof that autonomous vehicles are technically possible, but the leap to “definitely better than humans” is a very big one and it’s really being taken on faith right now.

In contrast, treating a disease directly affects the incidence of that disease.


There's "proof" that computers can be safer than humans. Faster reaction times, don't get tired, don't lose focus, can perform computations much faster than humans.

All that goes to show that a computer can make mistakes much faster than humans. You didn’t say anything about how that guarantees the computer only makes safe decisions.

But that's kinda the whole point of AI, isn't it? It's a circular argument, computers currently can't make safe decisions, therefore they will never make safe decisions.

Computers don’t exercise judgment.

The point of AI is to have computers exercise some form of judgement. What is there to suggest that they can't do that?

Heuristics.

Auto-braking technology deployed in Japan has already started affecting insurance, because so many vehicles have it that the accident rate has fallen significantly. This is one facet of autonomous decision making.

I mean fully autonomous AKA Level 5. I agree that there is plenty of proof that driver-assist safety technologies can prevent accidents.

I'm very excited about the possibility of self-driving cars. Let the car drive while I take a nap or read a book.

But I have a hard time believing that the technology is anywhere close to being mature. You need a lot of contextual knowledge to drive safely in unusual circumstances. I totally believe that within well-defined limits, AI already outperforms humans, but traffic has no well-defined limits. Anything can happen.


Let the car go park itself while I walk into the restaurant. Let the car go fill itself up with gas at night while I am soundly asleep.

> Let the car go fill itself up with gas at night while I am soundly asleep.

I'm bearish on both self-driving and the universal adoption of electric cars, but everyone will be plugging their car in at home long before they can make one that drives itself to the gas station.


>everyone will be plugging their car in at home

Absolutely not. Having to wait for hours to get a few kms of driving distance is way too much of a friction point for EVs to ever be more than a novelty, in addition to the usual complaints of people with only street parking, garages without outlets, etc. Either we'll get the battery-swap situation rolling or invent a faster charging tech, but either way there'll be some sort of "station" in the picture.


> everyone will be plugging their car in at home

I realize that a lot of people here are privileged enough to own a single family home, but the majority of humanity lives in apartments and parks on the street. Trickle-charging at home is not a universal solution. The only practical solution seems to be some form of rapid charging of the car's energy storage. Either by pumping huge amounts of amps into a huge battery pack, or adding some kind of chemical fuel that gets reacted in an internal combustion engine or a fuel cell.


Yeah, everyone is missing my point with that statement, which is that it's never going to happen.

Slow charging still works fine overnight, even if you slow charge on the street, instead of on your own property. Of course it would be nice if every parking spot came with rapid charging, but it's not like that's the only solution.

At the moment, policy in Amsterdam is that if you own an electric car, you get a charging point in your street. I don't know how fast those are, but they don't have to be fast. They're still useful for overnight charging, especially if the city continues to add more when more people get electric cars. I don't understand the argument that this is not in any way a solution. It is.


You probably could get self-driving cars if you really wanted to, provided that the self-driving traffic doesn't mix with regular traffic. And ideally without humans in the self-driving cars.

Not sure that's what the self-driving cars proponents envisioned though.


I often fall asleep on the bus, while reading a book.

SAE Level 5 is completely transformative; it will utterly change the market and the use of cars. Once you can conveniently call up an automated Uber, the need, at least in cities, to buy your own car will effectively vanish.

This is Uber's endgame - get humans out of the loop. As long as cars still need human drivers this cost savings can't be realized.

> Forget better braking systems that apply themselves automatically.

This already exists: Mercedes, for example, first rolled out "Active Brake Assist" to a production model in 1996. Moreover any fully self driving vehicle will definitely need to be able to apply the brakes.


Uber has no relevance once level 5 exists. Vehicle manufacturers will run the vehicles directly if there is any profit to be made from a taxi service.

I doubt it. Vertical integration only works so far.

I think the manufacturers will sell cars to those who want to run a taxi company. This is, and will remain, a race to the bottom because competition is fierce: people buy on price and on whoever gets to them first.

They will also sell cars to normal people. Most people don't think about how convenient it is to have the storage of their personal car. If parking is free or cheap (which it is in the suburbs), having your golf clubs in the trunk or a spare diaper in the glove box is worth the little extra cost, not to mention your car is always there, so there's no need to wait for a taxi at busy times.

Of course there is also the city, but if you live in a city, self-driving cars still suffer from congestion, so public mass transit still has big advantages. In fact, because the self-driving taxi spends time empty waiting for the next rider, it adds to the problem, meaning that for even more people public transit is worth its hassles (which in turn means more demand to make transit better).


Congestion problems are dramatically alleviated if you can convert your city over to self driving only. Once the cars can drive in platoons they are packed in tight, no more accordion effect.

On the way something like UberPool would be needed. Or the taxi can just drive me to the nearest train station and I can take mass transit into the city.

https://en.wikipedia.org/wiki/Platoon_(automobile)


Platoons help, but there is only so much space on the roads. In short, even allowing 5 times as many cars won't put New York's subways out of business. Environmentalists will still want to put everybody on the bus (pick any mass transit technology), even in smaller cities.

> I think the manufactures will sell cars to those want to run a taxi company.

A self-driving car is software and hardware... you can sell hardware but software only gets licensed. Software is really where the value is, and that won’t be owned by anyone but the company that made it.

Look at the history of mobile phones, which were originally sold to consumers only by the carriers. As the phones got better and better software, the business model attached more and more to the manufacturer.


Why is it transformative? Especially in cities. You have Ubers/taxis/private cars/etc. today. So you hypothetically cut the costs (maybe) in half of hailing a ride. Which is speculative. Does that really transform your use of transportation? I doubt it would change my use of cars one bit.

There are tipping points. If always taking the taxi everywhere is more expensive than the cost of owning your own car, then most people drive their own cars and taxis are for special occasions. If after a tech change always taking the taxi everywhere is less expensive than the cost of owning your own car, then most people would do just that, and only a minority would own a car.

A switch from 20/80 to 80/20 is transformative, changes the default attitude and has further societal effects.
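As a toy illustration of that tipping point (all numbers below are made-up placeholders, not real cost data):

    # Back-of-envelope annual cost comparison; every figure is an assumption.
    OWNERSHIP_FIXED = 6000     # $/yr: depreciation, insurance, parking (assumed)
    OWNERSHIP_PER_KM = 0.10    # $/km: fuel, maintenance (assumed)
    TAXI_PER_KM = 1.50         # $/km with a paid driver (assumed)
    ROBOTAXI_PER_KM = 0.50     # $/km with no driver to pay (assumed)

    def annual_cost(per_km, km, fixed=0.0):
        return fixed + per_km * km

    km = 12000  # yearly distance driven (assumed)
    print(annual_cost(OWNERSHIP_PER_KM, km, OWNERSHIP_FIXED))  # owning:      7200
    print(annual_cost(TAXI_PER_KM, km))                        # taxi today: 18000
    print(annual_cost(ROBOTAXI_PER_KM, km))                    # robotaxi:    6000

With these (made-up) numbers the robotaxi undercuts ownership, and the 20/80 default flips.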


> has further societal effects

Like parking or more specifically - not parking. Here's an example development: "And parking clocks in at a full 29% of the developed land here, taking up twice as much total space as the actual buildings."

https://www.strongtowns.org/journal/2020/2/6/movie-set-urban...


Parking may be reduced, but the flip side is that we will have hordes of always-circulating cyber cabs, or we'll have private owners returning and summoning their private vehicle from free parking at home, thus doubling the trips taken.

Not to mention sending a car out for errands. Self driving will free up parking garages, but fill up the roads.


> Once you can conveniently call up an automated Uber the need, at least in cities, to buy your own car will effectively vanish.

I doubt it. One of the biggest obstacles you have is families and the need for infant car seats and child booster seats. Car ownership is here to stay for a while.


I suspect that most of the people ready to ditch car ownership don't actually use cars much. In addition to the families/kids gear, a lot of people I know have their cars/trucks setup for various types of outdoor activities such as carrying canoes.

How likely is Uber to kit their entire fleet out with snow tires in winter? That’s an easy example of something a lot of private owners do for safety during the winter. I did that so I could drive to snowed-in trailheads. What about chaining up? And how will the self-driving cab handle getting stuck in the snow? Will there be sand available?

Even without doing any real off-roading, it's one of the things I always feel a bit uncomfortable with when I take rentals to relatively remote areas out West. If it were my own vehicle I'd carry a lot more gear to handle potential problems than I realistically can carry on a plane and throw in a rental.

What's the problem with families? My local Uber-equivalent will have and provide child booster seats if requested (as did many, but not all, "pre-app" taxi service providers), and for infants there are also a bunch of solutions depending on their age. One of them (perhaps not the best, just an anecdotal example) is the combo carriage+car seat, where I can put the "carriage wheels" in the trunk and put the baby in the car safely without even waking them up. Crucially, it can be done in any car or taxi, and I've taken taxi rides with a baby this way.

It does require some planning (and some support from the service providers) but it's definitely a solvable issue, if the people would want to do that, then it can be arranged.


> Once you can conveniently call up an automated Uber the need, at least in cities, to buy your own car will effectively vanish.

As a passenger, there is no difference to me whether my car/bus/train is self-driving or not. As long as I'm not driving, it doesn't matter if a meat, or a silicon neural network operates it.

Given the above, how does this make self-driving a transformative technology?


Hey man, I want all of it, especially while on the way to full self-driving cars.

Would Tesla be a heretic in this religion you just modeled? They're all about boring driver assistance, and acceptance of a low number of deaths. They don't even try Level 5 for now, and get a lot of criticism for requiring that the human pays attention at all times.

No they get criticism for calling the whole thing "autopilot" and presenting it as if paying attention was optional.

Even my engineer colleagues still operate under the impression that Tesla Autopilot == Self-Driving. One of the dangerous things, to me, is the combination of PR and user experience. The PR/advertisement creates the impression (directly or indirectly) that a Tesla drives itself, and the technology reinforces the sentiment by working well 'most' of the time. This article just reinforces this. The driver killed was an engineer who had noted that Tesla's Autopilot would veer into danger (in the spot where he was killed) and filed a report/complaint, yet still had enough confidence in the tech to read a quick text.

> and get a lot of criticism for requiring that the human pays attention at all times

Because that's the law... and...

> They don't even try Level 5 for now

That's not what Elon has been telling us for years... Full Level 5 is just months away!


Actually one of the biggest obstacles to Level 5 is expectation of Zero Deaths for Level 5 autonomous operations. The standard should be fewer deaths than human-operated (or even auto assisted ) cars.

When you have thousands of machines traveling in close proximity at speeds exceeding 50mph there will be deaths, this is unavoidable. We need to reduce those as much as possible but to demand ZERO before the technology can be used is just unrealistic

That said, just because some people are working towards Level 5 does not mean all of the other things you are asking for are not also being worked on; it is not a zero-sum game. There are enough people that we can have teams working on both.

This complaint is repeated for everything, "Well if people were not working on X drug that I don't care about they could cure cancer"

We can have a better braking system, better frames, etc AND still try to achieve level 5 autonomous driving. It is not an either-or proposition


>The standard should be fewer deaths than human-operated (or even auto assisted ) cars.

How do we go about testing this? By tallying up autonomous deaths until there are fewer per year than human drivers?

>We need to reduce those as much as possible but to demand ZERO before the technology can be used is just unrealistic

Human driver skill varies immensely by person. Anyone who is (or even considers themselves to be) a "good" driver will never accept "average death rates" as a risk when getting in an autonomous car. I know I wouldn't.

The goal has to be zero or it will never be accepted by the public.


It isn't hard to show an order of magnitude fewer deaths even with non-zero deaths. At that point, expect government to mandate Level 5 on all cars.

The goal is always zero, but we all know that will never happen. Nothing is perfect. And assistance technologies may be more dangerous than Level 5, because we physically cannot maintain concentration when few actions or decisions are required from us. Some studies even indicate manual transmissions make us safer, possibly for that reason. When Level 5 is available, and it will be, because forever is a long time, good drivers shouldn't have to buy those cars. Assuming money is still a thing. Insurance companies may start forcing people financially towards self-drive-only cars by hiking rates on humans, but if you give up your privacy with a driving monitor and they assess that you are safer, they would probably rather you drive and waive the hike.

Everyone thinks they are a "good" driver, even the person weaving in and out of lanes, even the person whose car has 500 dents all over it from previous impacts that were all caused by "other bad drivers, not me".

What will happen is that human-controlled cars will become $$$$$$ to insure once Level 5 is better than humans. At that point, if you can afford it, sure, you can reject it, but get out your wallet.


Why should insurance be more than today? Unless you're arguing that safety systems in other vehicles make you driving one without those systems more dangerous.

Insurance is about risk pools. If Level 5 becomes a reality, the risk of a human driver will go up, and as more and more people adopt Level 5 (which they will, contrary to what people on here think), the number of human drivers to spread that risk over will go down. Small risk pools with an increasing amount of risk mean higher premiums.

It is laughable that you think the goal wasn't already zero. It has been zero the whole time.

Real life and ideals are different thing. You can't promise that accidents will never happen. But you can promise that accidents will be substantially reduced.

In the US, about 35k people die per year from motor vehicle related deaths. If you get it down to 10k, then that would be a major success. Of course, you will still be fine tuning until you could get below 1000 and as close to 0 as possible.


> In the US, about 35k people die per year from motor vehicle related deaths. If you get it down to 10k, then that would be a major success.

If we were actually serious about reducing motor vehicle deaths, we would mandate that every car be equipped with a breathalyzer device. No fancy new technology is necessary, and there's plenty of low-hanging fruit (Impaired driving) that we can deal with.

For some reason, though, the religion of autonomous driving does not consider this as a solution to minimizing road fatalities.


To take this further, it's been shown every year the numbers are released, that of that ~35k motor vehicle related deaths, ~20k are alcohol related.

On average, humans are actually pretty good at not dying in motor vehicle related accidents - or avoiding them altogether, given the sheer number of miles traveled per day in the US.

That, however, just isn't the narrative Self-driving followers want everyone to know.


One of the problems with merely requiring fewer deaths than human-operated cars is that technology tends to fail in 'silly' ways.

Also, at the very least a self-driving car should reach the level of a good driver; having self-driving cars cause as many deaths as drunk or inattentive drivers do nowadays isn't defensible. Especially since there's usually no explanation and nobody to hold accountable.


>nobody to hold accountable

Well, one of the issues is that someone more or less has to be held accountable. And that someone pretty much has to be the manufacturer. No one is going to hand over full control to a vehicle and accept the responsibility if that vehicle commits vehicular manslaughter because "software isn't perfect."

It's actually an interesting legal situation because, other than maybe drug side effects, there aren't a whole lot of consumer products which, properly used and maintained, sometimes randomly kill people and we're OK with that because sometimes stuff just happens.


What about simple speeding tickets? According to Wikipedia[1], Tesla Autopilot max speed is a whopping 90 Mph!!!

Who's responsible if you get pulled over for going 75 in a 65 mph zone?

[1] https://en.wikipedia.org/wiki/Tesla_Autopilot


> technology tends to fail in 'silly' ways

And a scenario we can easily imagine is that a buggy update goes out to the whole fleet overnight that starts killing people all over the place.

The common case of accidents being on par with manual human driving goes out the window until the software is rolled back and for 12 hours, 24 hours, however long, we get a number of deaths that far outpaces what humans are capable of. The "worst case" would never apply to a manual/human population as a whole, at once.


This might be one of the few cases where it'd be better to not try to have all devices have the latest update all the time.

Count me guilty then, because I am also hoping to one day take a nap while the car drives me around. Until then, I found the next best thing is booking an Uber. They'll even use venture capital to subsidize my ride :)

But wait, isn't that exactly why they are now in such a rush and cannot accept anything less than full autonomy?

From what I heard, Uber has been burning billions of VC money to capture market share. And their model just won't work financially if they need to pay drivers a living wage. So they attempted trickery to pay them less as "independent" contractors, but now that governments are stepping in to prevent that, there is only one option left:

Uber needs self-driving cars or else they'll go bankrupt.

At least, that's my theory.


I'm not certain self-driving cars are going to save them. Right now, what they pay their drivers is the all-in cost for their fleet. When they switch to fleet management, they are going to be paying capital costs and maintenance instead. I don't think the latter costs are much lower, if at all, than what they pay drivers now.

> Uber needs self-driving cars or else they'll go bankrupt.

> At least, that's my theory.

I believe it--at least, I hope it would take an existential threat to make them push kludged-together pedestrian-killers into use on the roads.


There is this other technology called "chauffeur"

And in the third world (e.g., Mexico), they are an order of magnitude cheaper than a Tesla.

> So they attempted trickery to pay them less as "independent" contractors

An Uber driver is the very definition of an independent contractor. Many drivers also drive for Lyft. How can they be “employees” of Uber while also driving Lyft? Or doing Door Dash?

If Uber drivers were employees, they would have to work where and when assigned. As it is now, Uber drivers come and go as they please.

I am not sure how an Uber driver is any different than a freelance journalist or musician.


You are aware that people can have multiple jobs, right? It's very common for poorer people especially

And those usually have set, non-overlapping hours, even if those hours can vary from week to week. An Uber/Lyft driver can switch back and forth from ride to ride and has no set hours at all.

> This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

Just follow the money. The near-future financials of companies like Uber and Lyft (and to a lesser extent Tesla) rely on fully autonomous self-driving.


And that is why they will fail. You heard it here first.

In addition to all the hype-sters and scammers, you're probably seeing a lot of the often young urbanites with dreams of never owning a car and getting chauffeured around by super-cheap robo-Ubers all their lives. Many of them are either in denial or are feeling pretty betrayed at the moment.

That said, there are legitimate issues with incrementally improving assistive driving. People text and drive today without assistive driving. If a car can mostly make its way in rush hour down the highway autonomously, does anyone think that people won't routinely watch Netflix on their commute?


Exactly. For anything but open highway, self driving is so far into the future, that we should be focusing on making smaller pieces safer, ultimately building up to the end goal. If there is a self driving car in my lifetime that will drive up the windy mountain road to Yosemite, in the winter, then I'm the Pope. This all or nothing shit has to stop.

I honestly don't understand why there isn't more attention paid/focus on the fully autonomous limited access highway driving in "good" weather use case. That would be hugely attractive for a lot of people. Read a book/watch a movie while heading up to the mountains for the weekend or, for many, take a nap for part of their morning commute.

I suspect it's because it's less interesting to the demographic that's more concerned about being driven to and from bars on their night out or who just don't want to own a car period. But highway driving seems like a huge convenience and safety enhancement even if you just punt on city driving for at least the next few decades.


> Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

Most car manufacturers have had this figured out for a long time with crumple zones and the like.

> Forget better braking systems that apply themselves automatically

Assisted braking technology is already implemented in some cars. Hell, Tesla implements basically exactly what you're asking for...

> Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

Teslas don't let you sleep in your car; you have to move the steering wheel periodically to prove you're still paying attention, or it'll pull over and shut down.

Also, I'm not quite sure what you're expecting seat belt enhancements to be.

Far be it from me to defend the AI hype, but your "things we should be focusing on instead" don't make much sense when we ARE focusing on them.


Modern cars are deadly to pedestrians, especially SUVs.

Most 1-3 ton objects moving with any sort of momentum are. What is your point?

70% more likely to be killed when struck by a larger car [1]. With a higher front face, pedestrians are struck in the chest, rather than the legs. Turns out, broken legs are a lot more survivable than damage to internal organs.

[1]: https://www.theguardian.com/cities/2019/oct/07/a-deadly-prob...


By that logic, a low-slung sports car is the most pedestrian-safe and Ferraris should get a pedestrian safety rebate. I'll take one!

So long as there is sufficient space between the hood and the engine, which tends to be the main issue with very low cars. Sheet metal deforms as a pedestrian bounces off of the hood, increasing the interaction time and decreasing the instantaneous acceleration. This requires at least 10 cm between the bottom of the hood and the top of the engine. Less distance, and a pedestrian instead bounces off of the engine block, which doesn't deform on impact.

I was hoping for something mid-engine!

Reading through the crash test procedure, it is astounding how little attention is paid to pedestrians.

1. Front crash test. Procedure: Crash car into stationary barrier at 35 mph. Is also applicable to face-to-face crash with car of same size, going at same speed.

2. Side crash test. Procedure: Slam concrete block into side of stationary car at 38.5 mph.

3. Side pole test. Procedure: Drag car sideways towards a pole.

4. Rollover resistance. Procedure: Compare the car's footprint to the height of the center of gravity (see the formula sketch below).
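For reference, metric 4 is NHTSA's Static Stability Factor; if I recall the definition correctly, it is simply

    % T = track width, H = height of the center of gravity
    \mathrm{SSF} \;=\; \frac{T}{2H}

which is why widening the track on a taller vehicle restores the rollover score without making the vehicle any less dangerous to people outside it.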

The biggest thing to notice is that not one of these metrics involves pedestrians. Metrics 1-3 can be easily improved by making a bigger car, elevating the passengers and providing more crumple room. Metric 4 is unaffected, as the track width is increased to compensate.

If a low sedan hits a pedestrian, the pedestrian rolls over the car, so the same change in momentum is delivered over a longer period of time. If a high SUV hits a pedestrian, the pedestrian is knocked back, and the impulse is delivered over a shorter period of time, producing a much higher peak force. Safety ratings need to account for the danger cars pose to others.
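A back-of-envelope way to see the force argument (my own sketch, not from the linked sources):

    % average force on a pedestrian of mass m whose speed changes by \Delta v
    F_{\mathrm{avg}} \;=\; \frac{\Delta p}{\Delta t} \;=\; \frac{m\,\Delta v}{\Delta t}
    % Rolling over a hood stretches \Delta t, so for the same m\,\Delta v the
    % average force drops in proportion; a tall, blunt front face shortens
    % \Delta t and concentrates that force on the torso instead of the legs.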

Source: https://www.nhtsa.gov/ratings

Source (SSF): https://www.safetyresearch.net/rollover-stability


European safety rating has a category for "Vulnerable Road Users":

https://www.euroncap.com/en/results/tesla/model-3/37573


> Reading through the crash test procedure, it is astounding how little attention is paid to pedestrians.

In the U.S. at least, pedestrian safety concerns mainly affect prescriptive legislation (i.e. no pop up headlights). Some countries and blocs have testing similar to crash tests, but I'm not really sure how effective something like that is: any meaningful standard would need to have exceptions for different categories of vehicle. Though honestly I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being.


This is a trap we all fall into. Just because you are smart but can't see how a thing could exist doesn't mean it doesn't. We often use this crutch when absolving someone else of an action taken or a design flaw: "I would have never thought of that!" or "How could someone have anticipated that?"

> I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being.

https://en.wikipedia.org/wiki/Pedestrian_safety_through_vehi...


"Though honestly I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being."

I don't agree; there are some definite choices in car design that affect the aftermath of the collision and can have an effect on whether the pedestrian survives. As pointed out above, a pedestrian hit by a car has a better chance of rolling over the hood, versus an SUV, where the pedestrian would probably be knocked down and end up under the car.


I imagine any multi-ton mass moving in excess of certain speeds will be deadly to unprotected soft-bodied organisms. How do modern cars differ from non-modern cars in this respect? We've added better brakes, back-up cameras and object detection to avoid running into people, hopefully reducing the number of incidents, but yeah, if you hit someone with a car moving at any appreciable speed it's going to do damage.

Look at European pedestrian safety regulations. There's a reason that the shape of the front of European cars is all kinda the same - they are designed to minimise pedestrian casualties.

Obviously no one is going to survive if you hit them at 70, but you can make a big difference in the 25-35 region that is the normal speed where there are a lot of people around.


Crumple zones are designed to protect the occupants, not anyone external to the vehicle. Granted, the kind of basic design changes that would pretty obviously help with pedestrian harm are also.....not sexy. So seems unlikely car manufacturers will sacrifice too much on the aesthetics front when car ownership is such a status symbol.

> Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

> Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.

> Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

Source? I have never seen anyone arguing these things for the sake of self-driving. Are you just assuming that because people who want self-driving really want self-driving, the auto industry couldn't possibly work on two things at the same time?


For selected use cases, self driving makes a ton of sense to me.

example: stop-and-go traffic - instead we could unlock millions of hours of human productivity (or provide entertainment).

example: self-parking and come-to-me, esp in closed garages. Parallel parking is hard for humans and we're poor at space utilization.

example: environments where obstructions are unlikely... airplanes have had auto-pilot for a long time... why not highway 80 in the middle of nowhere? why not trucks queuing to load/unload containers at port (or conference centers) - just drive up your incoming truck, grab your personal effects and take over the next outgoing truck while it queues for hours delivering a load and getting the next load.

There's lots of uses for self driving vehicles even before we deal with the hard cases. But they're not sexy and of course a tiny fraction of the labor savings and freedom-making.


> airplanes have had auto-pilot for a long time... why not highway 80 in the middle of nowhere?

You don’t get a stalled car on a Victor airway in the sky. You also don’t have to worry about obstacle avoidance for the most part, in the sky. If an aviation autopilot can’t hold the altitude or heading (such as in turbulence or in mountain up and downdrafts,) it will simply keep the wings level. Airplane autopilots follow explicit instructions: fly heading 143 at 14000 feet; descend at 500 fpm. Hold over a VOR using 1 minute legs at 200kts.

A car autopilot on the other hand, has to react to the physical surroundings. Not only “follow the I-10 at 75mph,” but also, watch out for incoming traffic, lane closures, or some kid on a bicycle that wanders into the road, or a dead animal in the road, or wet roads, icy roads, etc. There is no such thing as Instrument Flight Rules for driving, meaning a car autopilot has to be aware of the visual, while an airplane doesn’t: it just flies the precise route programmed without any awareness that the route might fly through a flock of geese. An airplane autopilot will fly you right into the ground if you let it. There is a lot of skill and training around airplane autopilot, and while it’s amazing and useful, it’s a lot more than simply turning it on and it flies you automatically to Denver.


I'm not sure what you are saying actually maps to reality.

Frame and chassis have never been safer and manufacturers continue to improve. Many (most) new cars have automatic emergency braking that continue to improve. New cars seem to have an ever increasing number of air bags to protect passengers.

All these things are happening at the same time that self-driving is taking place. Tesla FWIW is pretty good at all the above despite their focus on self driving as well.

I agree it is just as ridiculous to say we need Level 5 to make something useful. Will it be decades before we have cars without steering wheels? Sure, maybe even longer. But what exists is already pretty great in most environments and only getting better. (IE, crawling along in a traffic jam at ~15mph is something I would really love to never do again, and it seems self-driving systems can handle this with aplomb these days.)


We could have achieved "self driving" in a sense, even back in the 80s, if we had the will to spend billions (maybe over a trillion) on putting monorail power line tracks on roads, designing electric cars that latch onto them, and building a national traffic control system that controls/drives them. The traffic control system would have perfect knowledge of where every car is, and could direct traffic in a highly optimal manner from point A to point B for each vehicle. The vehicles themselves would have systems in place that prevent collision (similar to the system that the NYC subway has). Altogether, it could have been achieved, but would have required an unprecedented level of public spending.

What's really annoying to me is that we've been sold computer vision as the current solution, when it's obvious that if it is the eventual solution it's a long time out because our understanding of the field just isn't nearly as strong as it needs to be, but it's edged out a lot of interim solutions that could have been a lot safer, even if they have downsides.

For example, we could have embedded special markers (or spikes with RFID, or more active wireless, or any number of things) that could have provided far more accurate lane detection, as long as we were willing to require some up-front work to deploy it on special routes. Combined with very reliable lane detection, and restricted to specific deployed areas where it could be tested, computer vision and/or radar/lidar for vehicle and large-object detection (which would be mostly sufficient for most highway/freeway use) could likely provide a very safe system. The lower requirements for achieving safety might mean we could actually get some buses outfitted as well.

But that would require some actual state action, as no private company would (or should, if they were to keep it proprietary at least) deploy along large stretches of highway freeway. Covering I-5 from Northern California to Southern California would provide many opportunities, but be an enormous cost.


technology has made good progress improving safety, to the point that it's mainly a social problem at this point. we've allowed cars to be all sorts of things (status symbols, entertainment centers, etc.) other than transportation devices requiring high skill and attention to operate safely, at the expense of life and limb.

for cars to be safe for drivers and riders, we need to optimize two things and strip away the rest (especially an over-reliance on technology as savior):

1. minimize distraction and maximize attention on the act of driving

2. maximize the skill of the driver in controlling the vehicle in all sorts of (unexpected) conditions

technology can actually reduce safety, either because it allows drivers to pay less attention or it lowers the skill bar. driver assist technologies--lane assist or automatic braking--fall into this category.

that's not to say safety technologies shouldn't continue to be developed--structural crash safety improvements, for example, don't have the same detrimental effects on driver attention or skill (with the caveat that ever-increasing weights can decrease control and increase lethality).

it's important to distinguish technological advancements that actually improve safety from those that only improve our perception of it.


Totally agree. We skipped right over "better driver training" in order to have meme stocks that short squeeze to nearly $1000. Such awful anti-patterns in humanity.

All of the other solutions you mention are where engineering resources are actually going, save for maybe the seatbelt enhancements. Every new car effectively comes with pedestrian crash prevention[0]; also, Volvo had a car with a hood airbag in 2013, and it looks like other auto manufacturers are looking into it[1].

Self-driving is only as hyped as it is due to the futuristic lure of the idea.

0: https://youtu.be/6owYPHpmDLU

1: https://www.autoblog.com/2017/12/13/patent-gm-external-airba...


Self-driving? It's called a bus.

I've been thinking of reasons to avoid Kickstarter and Indiegogo (as I got scammed on Kickstarter, and am not happy with the overall quality of several projects that did succeed), and you gave me inspiration. Thank you.

"I avoid new technology, exactly because I'm an engineer."

I would never call myself an engineer, although I have worked in environments where there were lots of engineers (not in the software world) and one definition of "engineering" that I heard was:

"Meeting the requirements while doing as little new as possible"


That's a very accurate definition in the mechanical world. Every junior engineer with an overly complicated, cutting edge, unproven idea gets shot down pretty quickly. In our shipyard safety is more important than anything else. After that is cost.

I don't understand why if you've gone through this much trouble, you don't have a blog post or even a medium article to cover your findings. I'd be very curious to see the responses from those authors and other experts in the field about your findings.

There's just something dubious about how it seems like you consistently find mistakes and problems in these papers. I'd be stunned if there was any expert that wasn't aware of the shortcomings of using a kernel that's as small as 3x3.


You’ve summed up what threw me off about their comment quite well.

Another thing, just the time alone needed to evaluate technology the way they are talking about sounds quite staggering.


This is an oversimplification of Tesla's technology: they don't use TensorFlow or other people's pre-trained models, and computer vision isn't the only thing that is a part of driving; they use radar and ultrasonics as well.

I try new technologies all the time to get a sense of where it is in coming to fruition. I think what you mean is, you’re more skeptical of claims.

The same way you claim you can’t learn anything about NY in your bathroom, you don’t know anything about Tesla or Self-driving if you haven’t tried it. You should at least test drive it under controlled conditions where you feel comfortable before closing yourself off completely.


> You should at least test drive it under controlled conditions [...]

Who cares what the car does under controlled conditions? I'm sure the manufacturer did exactly that in their testing. Even when they test on public roads, there's a hands-off safety driver behind the wheel, who is paid to be on the lookout and sufficiently alert to take over in case of an unexpected excursion. (Unless the self-driving car under test is from Uber, in which case the safety driver simply watches video on their phone. Too soon?)

This is nowhere near how these cars are used in the real world. The real world is not a set of "controlled conditions", so any comfort one builds up in such a situation is merely a false sense of security.

> [...] where you feel comfortable before closing yourself off completely.

So, here's the thing: I'm comfortable driving myself. I don't get distracted, I use good judgment, I consistently prioritize the safety of my vehicle's occupants over everything else. I know exactly how flawed self-driving cars are, and how far behind the curve of my driving skill they will remain within my lifespan. That's the sum total of everything I need to know, and no amount of "controlled conditions" demos will change my mind.

P.S.: If you're from the future and you're reading this because I got mowed down by a self-driving car: ha ha! Joke's on me.


My comment about controlled conditions was about making you feel comfortable and giving yourself a safe setting to try it out and get an understanding of it. It wasn't to say you should believe in self-driving; I agree it's a long way out. Understanding the technology rather than dismissing it altogether is what I was simply trying to point out. I think you can at the least try it, understand it, then have an opinion about it (which I respect). There seem to be a lot of negative comments from people who have never sat in a Tesla or gone through a test drive.

I'm not OP, but even if a test drive went perfectly, I would remain worried about the chance of the car randomly killing me for some stupid edge case reason.

Maybe not today, not tomorrow, but maybe six months in the future when the weather and road conditions happen to be just precisely right to confuse the system at just the most dangerous time.

In the meantime I will just read/watch the stories of people more trusting than me about how well the technology works, and currently those stories don't fill me with confidence.

IMO, this is currently dangerous technology that should not be allowed on the road at all.

Common-sense tells me that these half-self-driving systems are dangerous.

I would like to see a study that tested the reaction times of a person who sits doing nothing for an hour and then is suddenly expected to take evasive maneuvers, versus a sober - or even a drunk - driver who is actually driving the car continuously.


Again, going back to: see if for yourself. Experience it.

Then have an opinion, otherwise it’s like reading about NYC and saying you hate it because you read the reviews.


I've seen the news reports and discussion about the catastrophic failures, and that's enough for me.

Of course I can have an opinion without going for a ride in one, and that opinion is that I don't trust it and I won't "experience it".


I've driven a Model 3 over the course of a few days, maybe a handful of hours in total, and based on that experience I absolutely do not feel comfortable using Auto Pilot and would not buy a Tesla at the moment.

It's far more janky and susceptible to confusion than Tesla makes it out to be in its marketing, and the reality is that people simply do not pay as much attention as they are required to when using it because Tesla has convinced them it's magic that's safer than anything else on the road.


Thank you for having actually tried it then having a real opinion about it.

I have a Tesla, I usually use autopilot for highway traffic only, summon it like a valet to where I am in my parking lot, and not have to idle in hot and cold weather.

I agree, I wouldn’t use it for local roads and unclear highways, but isn’t this what they tell you? I don’t think they ever tell you that it’s full self driving right now. Also, I’ve experienced it being janky but over time it’s improved dramatically.


>not have to idle in hot and cold weather.

As a car and efficiency enthusiast, I totally try to keep my gasoline powered car from idling unnecessarily.

But what does "idling" mean in the context of a 100% electric car?


You can have the car on with AC and/or Heaters without the engine on because there is no engine. In fact they have camper mode where you can camp out in the vehicle overnight with the ac on, and huge batteries allow you to do this without worries.

>summon it like a valet to where I am in my parking lot

It can drive by itself from where it's parked to where you are?


It sure can. I have a kid, so putting him in the car when it's parked next to another car in a parking garage is painful, but with Summon every day it's amazing.

After looking into it a bit more, it seems useful in some circumstances as you describe, but hardly "summon it like a valet" when it can only move a few meters.

From your description, I was thinking more of something like waiting at the entrance to a public carpark and the car comes to you.


No, I have Smart Summon. It can actually go from the parking spot, turn toward me and pull up to where I am standing. This is real; look it up on YouTube, or I'll send you a link myself.

https://youtu.be/rXbUYoerHAM

This is called Smart Summon. It can turn and reverse. It's not perfect, and if you know its limitations you can manipulate it so that it works 99.999% of the time.


Your response makes sense. I've never thought about it before, but I don't like gadgets at all (EE and programmer for 25 years). I like simple things: I don't have a home assistant, don't have a tablet anymore, rarely use a computer at home, keep my musical instrument and TV setups simple, and have no games consoles (used to love games as a kid but not anymore). It's less to worry about and I feel I get more done.

I love this take!

I’ve been similarly untrusting of a lot of “high tech” approaches to various things, and I derive a lot of joy from products/services/etc that take a “back to basics” or at least minimally- or non-digital philosophy. In particular: I have an affinity for automatic watches and carbureted motorcycles.

If nothing else, it’s a bit of a break from what feels like a constant struggle to keep all the gears turning at work.

BUT... I’ll confess I’m also a sucker for innovations / the occasional new hotness. I recently upgraded a Kawasaki KLR650 (a competent but... “well tested”, shall we say? motorcycle) in favor of KTM’s top-of-the-line adventure bike. The technology difference between the two (despite only 3 model years between them) is incredible: the latter adds 5x the power, ride-by-wire, cruise control, lean-sensitive ABS/traction control, an up/down-capable quickshifter, probably a thousand other improvements.

One day, about 1100 miles into owning the new bike, the dash pops up a low tire pressure warning from the tire pressure monitoring system. It showed the rear pressure was fairly low, and sure enough, I’d picked up a small screw between the treads.

Certainly a TPMS is nothing compared to anything self-driving, but honestly it was a bit of a wake-up call — I WANT systems on my bike to increase my safety level.

I’m not really sure what the lesson is here. Maybe “Look for the middle grounds (the ABS/TPMS-maturity systems) between ancient technology (anything on my beloved KLR) and bleeding edge (non-replicable papers on self-driving cars)”? Seems like this holds up ok, especially as a consumer of those techs... But maybe not for the innovators?


> I avoid new technology, exactly because I'm an engineer.

> I wonder if that is just me

Can't find it now but there was a poignant quote or anecdote I read the other day that expresses this exact sentiment - the more you know about technology, the less likely you are to use it. I think it was in the context of e.g. smart homes and voice assistants or online tracking - if you're aware of how much data they hoover up and what can be done with that, you'd be Very Afraid.


It would be nice if that sort of awareness was more prevalent on HN.

The cult of cheer, eyes waxing over from a disappointed jet-age generation, and a little shiny syndrome.

https://news.ycombinator.com/item?id=22088547

Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!

Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.

https://biggaybunny.tumblr.com/post/166787080920/tech-enthus... (via foxrob92)


I think it should be emphasized this accident occurred in May 2018.

I've got a beautiful 2001 Mercedes-Benz ... it drives well, avoids all the trendy stuff but still has too many electronic components (most recently I had to replace the computer board that decides whether you should be able to shift). My daughter has a 1971 Super-Beetle she's stored in my garage ... when it breaks, it's something mechanical!

I agree with all but one of your points, and they're all well made.

However, by saying "AI is stochastical gradient descent optimization" you're equating AI with Machine Learning/Deep Learning.

The list of AI technologies also includes things like Artificial Life, Genetic Algorithms, Biological Systems Modeling, and Semantic Reasoning.

I suspect that when we get true AI, i.e. Strong AI or Artificial General Intelligence, we will achieve it through a combination of these techniques made to work together.


AI is not stochastic gradient descent optimization; neural nets are. AI is bigger than that - planning, deduction, game theory and so on. You can find out by buying and reading a textbook.

Plus... what’s the point? Maybe we automate a couple jobs for long haul shipping or road trips but what a fruitless effort!

Why don’t we work on drones that pick us up and take us places to really leap ahead, get out of traffic, and do something amazing?

Cars driving themselves? How incremental.


"As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues." This is only true for things where reliability matters more than the new features the new tech provides. In the case of cars, of course, I'd prefer reliability.

But for other things, like the original iPhone, sometimes new tech is just better than what's out there, even if there will always be some flaws in the first versions.


"I have become the late adopter".

Exactly my thoughts when reading about the blended wing aeroplane, yesterday.


3x3 pixels in a CNN isn't just 3x3 pixels, though. As the layers get pooled together, even though the sliding window stays the same size (3x3), it's moving over feature maps that are getting downsampled 2x every time you pool. So in that last layer you're looking at quite a bit more than "3 pixels".

> That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background.

I don't find this to be the case with most ML researchers. Is it possible you have misunderstood some of these papers? It is, after all, hard to jump straight into a new field.

> The second paper converted float to bool and then tried to use the gradient for training.

This sounds like binarized neural networks. If that's the case, they keep the activation before binarization to use for backpropagation.
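If it is that line of work, the standard trick is the "straight-through estimator": binarize in the forward pass but backpropagate through the real-valued input. A minimal PyTorch sketch of the general technique (my own illustration, not taken from the paper in question):

    import torch

    class BinarizeSTE(torch.autograd.Function):
        # Forward: hard-binarize to -1/+1. Backward: pass the gradient
        # straight through wherever the real-valued input was in [-1, 1].
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)      # keep the pre-binarization activation
            return x.sign()

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            return grad_output * (x.abs() <= 1).float()

    x = torch.randn(4, requires_grad=True)
    BinarizeSTE.apply(x).sum().backward()
    print(x.grad)   # non-zero wherever |x| <= 1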

> The third paper only used a 3x3 pixel neighborhood for learning long-distance moves.

A single layer of 3x3 convolutions would not be able to model long-distance moves. But I have not read a single paper where they have only used one layer. Is it possible they stacked multiple conv + pooling layers? The receptive field of each unit higher up in the stack grows pretty large in the end.
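For reference, the receptive field of a stacked conv/pool pipeline is easy to compute. A quick sketch with a hypothetical layer stack (just to illustrate the arithmetic, not any particular paper's architecture):

    def receptive_field(layers):
        """layers: list of (kernel_size, stride) pairs, applied in order."""
        rf, jump = 1, 1
        for k, s in layers:
            rf += (k - 1) * jump   # each layer widens the field by (k-1) steps in input space
            jump *= s              # strides compound the step size
        return rf

    # three repetitions of [3x3 conv (stride 1) -> 2x2 pool (stride 2)]
    print(receptive_field([(3, 1), (2, 2)] * 3))   # -> 22 input pixels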


It's not. I got my pilot's license and fly in a technically advanced aircraft, and all that it means is that there's quite a bit of automation there to help you out. The lessons imbued in it, the issues you learn exist, actually using it in very critical phases of flight, etc., builds an appreciation for both the wonders and dangers of automation.

Going through that experience has 1000% made me more wary of autonomous vehicles.


This is why I don't understand the sad conclusion of the story in the article. The driver knew that AutoPilot had issues with that stretch of the freeway and was able to repeat the issues on several occasions but still thought it best to be on his phone, while driving 71 mph, at the mercy of AutoPilot? Especially considering that he was an engineer who had expressed concern over this, it seems silly to me that he wouldn't just take over manually during that stretch... He didn't and now he's gone because of it. It's not worth the risk/reward.

We don't need to understand it. We just need to know that, even armed with all that knowledge, he still made the decision he did, and that we should not expect others with less knowledge of how software works to be any less human.

This has been my takeaway from the whole situation. It says something about exactly how safe this stuff needs to be for it to be considered reliable.

My impression is that it's already more reliable than humans. Not perfect, but it probably averages better than all the crashes humans are causing.

> it's already more reliable than humans

Only when comparing aggregate statistics. Not all humans are equally (un)safe drivers; insurance rates vary based on driving record for good reason.


If anything I would expect less knowledgeable people to be more skeptical and cautious than somebody who has a vested interest and passion for technology. I can't imagine anybody other than a tech enthusiast who would maintain trust in a self driving car technology that previously attempted to veer them into a barrier.

100% agree with this. Most people I've talked to about this incident think the guy is an idiot; if you had found a car's feature to be unsafe on a portion of your commute before, why on earth would you trust it with your and others' lives? If you had a new fancy belaying device slip in the gym, you wouldn't use it on your next big wall climb.

I also don’t get how everyone is forgiving him for being on his phone, in a construction zone no less. Reckless driving is reckless driving, being an Apple Engineer and Tesla owner doesn’t somehow negate that he was being a belligerent driver.


It's not about forgiving him for being an idiot. It's more about recognizing that there's more than just the one idiot on the road. To an approximation, pretty much everyone who has ever had a driver's license has been guilty of making an idiotic decision or two while behind the wheel.

And, Autopilot being a technology that, among other things, enables (or even encourages) idiotic behavior, there's real risk that placing too much blame on the driver's choices lulls us into an attitude that enables the next idiot to kill themselves and/or someone else.


I agree that everyone makes bad choices from time to time on the road. But as a society we put 100% of the blame on the driver if they aren't doing everything in their power to prevent an accident. "Everyone makes mistakes" isn't a get-out-of-jail-free card; booze clouds people's judgement to the point where, after enough drinks, they'll think they are good to drive, but that doesn't change anything if they get behind the wheel. It's still a DUI.

This guy was on his phone in a construction zone and crashed because of his lack of intervention, using a driving assist feature doesn’t somehow absolve him from being so preoccupied that his vehicle veered off the road and into a wall. Imagine if instead of him dying he had killed a construction worker; I have no doubt a jury would find him guilty of manslaughter. When you get into the driver’s seat of a car you are taking on responsibility for a death machine. I find it troubling that this conversation is happening at all, the blame should be put squarely on his shoulders.

Iff the car suddenly slammed the wheel to the side causing him to lose control or became unresponsive to his inputs, that would be another matter. But this could have been prevented if he wasn’t being grossly negligent of the risk he was partaking in behind the wheel.


An ounce of prevention is worth many, many pounds of assigning blame after the fact.

(Unless you're plaintiff's counsel, of course.)


> We just need to know that, even armed with all that knowledge, he still made the decision he did, and that we should not expect others with less knowledge of how software works to be any less human.

This is maybe one of the most important lessons of the 20th and 21st centuries (at least thus far): knowledge does not automatically prevent us from errors in judgments nor does it necessarily protect us from misfortune.


Nor does it absolve others from liability.

knowledge does not automatically prevent us from errors in judgments nor does it necessarily protect us from misfortune.

Relevant xkcd https://xkcd.com/795/


Not automatically, but it can help.

I was in this very situation at the top of Pikes Peak. A storm moved in and the park rangers closed the place and sent everybody away. They knew the statistics of being hit by lightning in that very place.


My friends like to talk about how awesome it would be to time travel or to know the outcome of a future decision in advance. Even when armed with knowledge from observed outcomes of near-identical scenarios, the majority of the time they end up doing exactly what they would have normally done. Happens to me from time to time.

Maybe we should keep it the way and use it as a culling method.

It was a mistake, people make them all the time. Mistakes in cars kill people, and you don't need to be driving a Tesla to see that.

Frankly, the whole point of automated control is to reduce this kind of fatal mistake, and... I mean, it's working. This was a tragedy for sure, but it was also fully two years ago. These events haven't been recurring; it's likely the specific proximate causes have been addressed, and by all reckoning these systems (while still not flawless!) are at or above parity with alert human drivers in terms of safety.

Basically, I read the same facts you do and take the opposite conclusion. People make bad risk/reward decisions all the time, so we need to take them out of the loop to the extent practical.


His phone was using data; that's not proof he was using it and was distracted. Spotify streaming a new playlist could be responsible for that.

> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.

Maybe he wasn't using his phone but he was distracted or maybe even asleep. Otherwise why let the car crash?

Maybe he was fulfilling Apple's demands: either attend a meeting on the phone while driving, or wake up earlier.

(Definitely not singling out Apple here.. at IBM I had a coworker who was in an accident during a phone meeting- luckily non-fatal).


I have coworkers who regularly take meetings or text chat/email while driving. It makes me uncomfortable. I have tried hinting that it's a bad idea, but don't feel comfortable telling them directly (don't think it would do anything except harm our relationship due to me questioning their judgement) or reporting to HR (fear of retaliation, some of the coworkers are senior to me in the management chain). On the other hand my discomfort kind of makes me feel like a busybody. 99 times out of 100 nothing will happen, and the 100th time will probably just be a fender bender. Not sure what to do except continue feeling uncomfortable with it.

"Are you driving right now? We can reschedule to something more convenient for you? I don't want you to get pulled over."

You're way too cautious in your human interactions, which is terribly sad. But I understand it because of our business culture.

We had this PM who ran our meetings; when people phoned in from the road she would say, "please take your calls when you aren't driving," and boot them from the meeting. Now I do the same thing and you should too.

> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.

I didn't see that in the original article, but I do see it here:

https://www.washingtonpost.com/transportation/2020/02/11/tel...


Ah, I read the wp article yesterday and just assumed it was the same article being discussed here.

> ...but still thought it best to be on his phone, while driving 71 mph, at the mercy of AutoPilot?

> Records from an iPhone recovered from the crash site showed that Huang may have been using it before the accident. Records obtained from AT&T showed that data had been used while the vehicle was in motion, but the source of the transmissions couldn't be determined, the NTSB wrote. One transmission was less than one minute before the crash.

For all we know, that could mean he had Spotify on.

Anyhow, "he was worried about it" is no reason to shift the blame to him.


I disagree. He was informed enough about the technology and aware enough of its limitations to write extensively about it. Tesla and all other assisted driving technologies all give the user a warning to stay alert and maintain control of the vehicle. He was clearly distracted somehow because he was relying on the technology.

Also:

> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.


Why was Autopilot driving above the speed limit?

Or does Tesla allow you to set the speed?


It allows you to set the speed but only up to 8 mph above the posted speed limit (or something like that).

As far as I know, my Model 3 at least doesn't limit the Autopilot speed other than the max, which is 90 mph. It's just a scroll wheel; you can set it however you like.

I never used FSD but AP never let me go above 55mph in a 45mph zone when it was using traffic-aware driving.

He was an engineer - he probably wanted to try and "solve" the problem (isolate the car's behavior and see what inputs are causing the incorrect reaction) and engineer a solution. As an engineer, I can understand this drive. He probably felt he was on top of it because he knew of the bug.

I can understand that drive too. I cannot understand letting a technology that you know to be unsafe take over completely so that you can play a game on your phone.

That’s the part I don’t get either. The only thing I can come up with is he wasn’t paying attention to the drive up to that point and didn’t notice he was in a problem area.

I have a Tesla and there are definitely problem areas. You learn them fairly quickly when you are taking the same route all the time and you’re trained to either turn off autopilot or at least be alert when going through those areas. Or maybe you test it out with your hands on the wheel ready to take over to see if they fixed the bug.

There were a couple spots on my normal driving routes where the car would inexplicably swerve. It happened one time in each spot and that was enough. Both those spots have been fixed since, but there’s no way I’d be on my phone not paying attention driving through there. I’m still cautious. There are two more spots where the car will brake to 45 mph on the highway and then speed back up after a few hundred feet. I am always on high alert around there and usually won’t even use autopilot in those areas.


> The only thing I can come up with is he wasn’t paying attention to the drive up to that point and didn’t notice he was in a problem area.

It's a well known phenomenon that the more you automate away routine tasks, the harder it is for the driver to stay alert and take over in non-routine situations. My understanding is that airplane pilots are specifically trained in strategies to avoid falling into that trap.


Tesla should pay for time spent in autopilot in these cases. It's clearly not a finished product. That could incentivize paying attention as well.

> Or maybe you test it out with your hands on the wheel ready to take over to see if they fixed the bug.

Isn't holding the wheel always required now in Tesla's autopilot?


You have some flexibility to take your hands off the wheel for a bit. It nags you after a little bit and then gives you some more time to hold the wheel. If you keep your hands off for too long it will disengage and you can’t put autopilot back on until you put the car in park. You can satisfy the sensors by softly resting one hand on the wheel.

Many opinions, but little data. It's hard to assess anything under these circumstances.

> That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background. AI is stochastical gradient descent optimization, after all.

ML is only one part of 'AI'


>I avoid new technology, exactly because I'm an engineer.

There is a saying that software (security) engineers don't have IoT or any "smart" devices in their homes.


It should probably be called "CoPilot" and issue prominent alerts whenever it has a poor understanding of the situation, so the driver knows to at least be more alert.

This is not new technology at all, but perhaps it is technology being used in an unnecessarily complex manner. I sat in on a lecture given by the University of Maryland circa 2000 here in Florida concerning their W4 video surveillance system, which could tell the difference between different objects quite well. It did this on a 400 MHz dual-Pentium machine and could even recognize different signs at driving speed. This was before TensorFlow or Python really took off and before GPUs were powerful, and it did not require a supercomputer or a cloud to operate; it was just math:

https://dl.acm.org/doi/10.1109/34.868683 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.8.9...


ML hype spawned a lot of self-driving car hype, and a lot of promises which can't be delivered in any short time frame. It started with Comma.ai, but others picked up the full ML torch and went with it. Other companies are making steady, sure progress using the old-school robotics approach of throwing sensors at the problem (Waymo, for example). Others are being reckless and trying to use vision alone (Tesla). It's a mess.

However, safe self driving is coming, slowly but surely (I work at a company which produces tech for these guys). The hyped companies are in trouble, but car OEMs, partnered with companies you've never heard of, are making slow, steady progress, all the while being subject to government functional safety requirements, particularly in the US and EU. There is zero chance that a Tesla or GM car will be allowed to fully self drive, so no matter how advanced these systems are, they're sold as level 2 systems requiring driver oversight and qualify as cruise control in regulations.

Today, we have full autonomy in some truck routes (only as proofs of concept), in ship yards, parking shuttles, mining equipment, quarry trucks, etc, places where the problem domain is more constrained. Generalized self driving is a ways off, but by the time you can make use of it, it will be safe, it just won't come from Tesla or Cruise or Uber.


The biggest tech thing I am afraid of, which everyone else is excited about, is digital elections. The ML field is pretty clear about its limitations.

"I felt that way looking at state of the art AI vision papers" - which papars you looked at?

Tesla has some blame in this for sure but Caltrans (maintainers of the 101) are equally if not more to blame.

California is rich in capital but so poor in infrastructure. Moving here from South Carolina (where taxes are much lower and roads are paltry), I was shocked at how dismal a state many of the roads are in and at the sheer amount of time road work takes to complete.

Patrick Collison of Stripe actually wrote a blog piece (1) condemning SF for their lack of speed when it comes to road maintenance.

There is no end in sight. Until we as a nation hold our governments accountable for road maintenance and proper engineering, we will never have any great resolution.

(1) https://patrickcollison.com/fast


Many US citizens I talk to seem to have the stance that they do not want to pay any taxes and that taxation is theft.

I am sure California and others could do more and be held accountable (every service provider should and must be), but that attitude has to change too.


California has a tax level on par with most of the EU and is pretty much the most left-leaning state in the US.

CA already has some of the highest taxes in the nation but the roads and public transportation aren't leaps and bounds better than the cheaper states. I'm not convinced raising taxes would help without fixing the accountability issue.

If you know your autopilot is malfunctioning on a certain highway segment, you have observed it malfunction several times. Why would you still use it? Say you were a brain surgeon and observed that your brand new ultrasonic scalpel caused bleeding, reported it, but decided to keep using it. The day a patient dies of a hemorrhage post-surgery, would it be the fault of the company producing the ultrasonic scalpel, or yours? Those clowns admit to knowing it was faulty exactly where it crashed and are still suing Tesla. This habit Americans have of suing every time they have an issue is exhausting.

I drive with autopilot (auto steer as Tesla calls it) daily. Frankly it’s a mixed bag. Anyone who has used it with any frequency should know how dangerous it can be.

I use it because it usually brakes faster than I do when traffic suddenly slows down. It also slams on the brakes at random points on a 3 mile stretch of highway on my way home. For some reason it thinks the speed limit is suddenly 45 and brakes aggressively. I’m afraid one day I’ll get rear ended when it does that.


People have these vague notions about what technology can do because "influential" people in our industry read sci-fi badly like children.

The fundamental problem lies in the usage of the word 'autopilot' for a glorified cruise control system.

Why are there still these death traps on highways? No matter whether driving on Autopilot or lane assist or manually, it is statistically quite likely that a car can and will drift off for a second.

I have trouble thinking of any such places on the German Autobahn where drifting off would lead to a full head-on collision with a concrete barrier.


Open-sourcing the software and dataset to verify their effectiveness, while restricting commercial use, would help build confidence in autopilot systems.

I don't understand something. He had complained in the past about AutoPilot steering the car towards the divider. Then why was he still using the AutoPilot in that situation??!

For every tragedy, there are tragedies avoided. I can attest to a few. In the last 10,000 miles, Autopilot has: safely swerved to avoid a car that would have sideswiped me, preemptively braked after detecting the 2nd car ahead (not visible to me) had slammed on its brakes, and avoided colliding with a completely stopped vehicle in the center lane of the freeway.

And FWIW I've never felt misled about Autopilot's capabilities. I started off skeptical and it's since earned my partial trust in limited scenarios. Ironically its hiccups actually make me more attentive since, ya know, I don't want it to kill me.


That squarely puts Autopilot in the "Driver Aid" category. That is fine, just don't go telling people that it can drive unassisted.

Before owning a Tesla I did feel like Autopilot was overhyped, but I now realize that's more because of media coverage than anything Tesla is actively conveying. The vehicle software actively encourages driver attentiveness. Moreover when you actually experience Autopilot, it becomes abundantly clear that it can't drive unassisted.

Then don't call it Autopilot and stop saying it will be able to drive you on the highway soon.

> Moreover when you actually experience Autopilot, it becomes abundantly clear that it can't drive unassisted.

I think it's different: On long stretches of highway, it actually can drive mostly unassisted. It can't drive unassisted in city traffic or in construction sites, but I totally understand why people would overrate its capabilities. In clear and good conditions, it's actually more than a simple drive assistant.


> just don't go telling people that it can drive unassisted.

No one does. Literally no one. No owner, not Tesla themselves, not their website, not their sellers. Why do people keep bringing this up? Everybody knows it is not full self-driving, yes we all know, yes and? Where does this zealotry come from?


Their marketing was for a long time structured such that you got the idea that it was a literal autonomous driving capability. They've only toned it down after multiple accidents, and I'm sure it's the reason so many people still believe it to this day.

So the "problem" you referring to is solved? Their marketing is good now? I just think that there is no reason keep repeating the old things then that are not relevant anymore, that don't even refer to anything anymore and only create a lot of confusion.

(Let's just assume it was a problem, even though I don't ever remember being fooled by it or seeing any commercial or anything without a disclaimer "NOT FULLY AUTONOMOUS SELF-DRIVING, REQUIRES DRIVER ATTENTION".)


They should start with not using a misleading name, Autopilot.

Autopilot is not a misleading name actually. You, the press and others just have ingrained a false definition of the word. "An autopilot is a system used to control the trajectory of an aircraft, marine craft or spacecraft without constant manual control by a human operator being required. Autopilots do not replace human operators, but instead they assist them in controlling the vehicle."

If everyone defines Autopilot as an automatic system, shouldn't the definition in the dictionaries be updated? That was a rhetorical question, because it happens all the time.

Besides, Tesla claims that it can drive you unassisted on the highway, so they are very much coining that definition as well.


Not everyone does. That is why you see this many people having a problem with how you seem to use that word.

Not "everyone" in the pedantic sense - it's stupid to argue semantics. Tesla claims when you order a car that it has

> Full Self-Driving Capability

And in my experience 9 out of 10 people take that literally.


From the hundreds of YouTube videos where people sleep at the wheel of their Tesla or do something else.

Are you making Tesla inc responsible for videos that some other people put up on the internet? How would you even propose to police this, to make it illegal to use a car in a video unless the usage is somehow approved by the car manufacturer?

- Tesla makes a product and calls it Autopilot

- Autopilot has an option for "full self driving capabilities" (with lots of promises for the future and an unclear description what it actually can do right now)

- People buy a Tesla and indeed seem to have a self-driving car if they sleep at the wheel

- Tesla doesn't attempt to set the record straight that this shouldn't be done, as far as I can tell

- Some people believe that Teslas drive themselves mostly.

Is Tesla directly responsible for these videos? Not directly, but at least they benefit from a highly suggestive product.

Could they do more to reduce the suggestive marketing? Absolutely. Do they want to? Probably not. Should they? I don’t know. That is what we’re discussing here.

I have a bit of a background with medical devices. For these, you have to make it absolutely clear what the device is for and what not. You also have to have a post market surveillance system that monitors how people use or abuse your medical device. From that perspective, Tesla could be doing more to have their driving assistant be perceived in a more realistic way.


Unfortunately, it's probably down to the choice of the name "Autopilot", which leads folk to make assumptions.

In the last 10,000 miles I had no accidents. In fact, I had none in the last 50,000 miles. It is a big question whether any of what you mention here would have been so much as a scrape if you did not have Autopilot in the first place.

That's fair. In ~200k miles of driving I've also had zero accidents, so maybe it's a stretch to call these things 'tragedies avoided'. But the point stands: Autopilot does little things every day to improve safety, and they get little media coverage since they're mundane and boring.

How many accidents did you have per 10k miles in your last car? I've never had an accident in the last 200k miles, across three cars; none of which had anything more advanced than regular cruise control.

Zero accidents in 200k miles, so "tragedies avoided" was probably the wrong way to put it. My point is: for every one tragedy, there are hundreds of little ways Autopilot makes everyone on the road safer.

I don't understand: why did he keep trusting the autopilot on that stretch of road after the first incident?

The headline alone should show what is odd about this case.

* Tesla clearly states that the autopilot is a driving aid and should not be used unobserved.

* The engineer complained about the autopilot not working perfectly at the very place of the crash. (Quite understandable when seeing the state of the road and the mostly absent line markings in videos of the crash site.)

* Evidence seems to show that prior to the crash he did not pay attention while driving on autopilot at this very part of the road.

So I really don't get why anyone would trust their life to a driving assist by not paying attention and not closely supervising the system.

Where I think the lawsuit has merit and a good chance of succeeding is the state of the road and its maintenance by Caltrans. They are in my eyes liable for the deadly outcome of the crash, and for the fact that there was a crash at all:

* Left exits by themselves are a safety issue and should not exist (extremely rare here in Europe).

* The line markings were almost non-existent. Potentially dangerous sections of roads should have very good line markings.

* As a consequence, it seems to be common that cars crash into this divider. That there was a "reusable" crash protection at it should show the dangers of this section clearly.

* The crash protection was not active, which led to the deadly nature of the crash, as a car had crashed into it a week before. No other protections were in place (traffic cones, barrels).

So this section is shown to be very dangerous, as human drivers regularly crash into the divider, there are no traffic cones (or they have been destroyed by people almost crashing into the divider), and it was not secured after the last crash. The exit and the left lane should have been closed to traffic as long as the crash barrier was not in place.


Nice, blaming the victim there.....

Bad pavement markings and "work in progress" sections exist on almost every major highway... it is not an exceptional case, and it should not be treated as such. The expectation is that the technology should work, or give you enough advance warning for you to take control...

There have been plenty of videos, where the tesla autopilot seems to just steer straight towards a side wall, with no warning whatsoever...

Lidar would/might have prevented it... it would have served as a warning that the car was going to smash straight into a solid wall and that the visual cues (lines on the pavement) were misleading... I don't think a camera-only system is good enough for automated driving... I feel Elon is beta testing with people's lives...


Considering the victim has complained that the system does not work well at this very spot, the question stands, why did the victim use the system unobserved at this very spot?

And independently of who was in control of the car, I showed why this section was in a miserable state of maintenance and dangerous by its setup, as humans regularly crash into this very section. So this section is dangerous and should not exist in such a form, especially as the installed safeties were not active at the time of the accident.


> Tesla clearly states, that the autopilot is a driving aid and should not be used unobserved.

From Tesla's own promotional video[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


This is not demonstrating the system as it is sold in current cars. It is using hardware which was not present in the crashed car (the hardware is standard now in all current cars sold), and it is using software which has never been released to the public. This is solely a demonstration of what the cars can do in the "lab".

> the autopilot is a driving aid and should not be used unobserved.

That's not an autopilot, is it? It's driver assist, like every other car manufacturer calls it (and incidentally, they don't have these problems).


Mercedes even called their cars "self driving" for a while until this was struck down. And show me the vehicle where the term autopilot is used for a system which does not require supervision by the pilot or helmsman.

In any case, while one might ask Tesla not to use the term "autopilot", they always clearly stress the requirement of supervision by the driver, including giving warnings and eventually stopping the vehicle if the driver does not have hands on the wheel.


Suing Caltrans for failing to maintain the roads? Probably the same people that voted for Newsom and the redirection of road funds to high-speed rail boondoggles.

The thing about self-driving cars is that even though we may get to a point where they are statistically better at driving than humans, the reasons they crash will be unrelated to the driver. This means that humans, thanks to modern life, are even further removed from any concept of self-determination.

Being a responsible person and not driving while drunk, or driving in a reckless manner will no longer improve your chances of not being seriously hurt in a car crash. While driving on the roads already removes a lot of self-determination due to the fact that you can't control all the high-speed objects flying around you, there was at least some degree of improvement possible by being responsible.

Once self-driving cars are available, whether you live or die will be up to random chance. Some people will be better off due to the self-driving cars, and others will be worse off, depending on if they were above the average or below in terms of their driving behaviour.


He complained once before about this stretch, and yet he kept using Autopilot at the same location? I would never dare to do that; it sounds suicidal. Also, I would never dare to fully trust Autopilot, and I think they should change the official name to driver assistance until Autopilot is ready.

Which is how Tesla wants it to be treated, according to the article: "Tesla says Autopilot is intended to be used for driver assistance and that drivers must be ready to intervene at all times."


I would not be quick to make assumptions like this. He could have experienced the issue like 1/20 times, and intended to remind himself to watch out when he gets to that spot. And maybe he forgot this time. It’s easy to get into a trance on long repetitive drives. It isn’t like he immediately did back to back drives and changed nothing the second time.

And that’s the problem with an “autopilot 99.9% of the time, but still pay attention just in case” feature.


Which is why Waymo never released half-self-driving technology: their early tests showed that even when they told Google engineers they were being recorded and would be fired if they didn't watch the road, within a few weeks a lot of them trusted the software too much.

Source?


It is not suicidal, but probably worthy of an honorable mention in the annual Darwin Awards roundup, since it was very bad judgement on his part.

As an engineer, he should have known better than to continue to use the feature the moment it became unreliable for reasons unknown. Those kinds of failure modes are extremely devious killers.


So all the engineers who experience a BSOD discontinue using Windows immediately after?

Totally not the same thing. If my car's steering locks up and I can't turn because of a software glitch, I'm returning that thing to the dealer ASAP.

This is exactly what the Navy did when a BSOD parked that destroyer in a harbour 20 years ago.

When you have mission-critical pieces of software (aka stuff that can kill you) starting to behave in unexpected ways with unknown failure modes, you stop using it ASAP until it's fixed.

I will gladly continue using my BSODing PC for gaming, but I won't use it in a data center or to control a shuttle launch.

If a person uses a gas oven that leaks from time to time, and only a bit, how would you classify him?


It's time Elon should really consider augmenting Tesla's cameras with LIDAR.

My god the endless stream of 'tEslA BAd!' itt.. You all just mad you didn't buy stock?

This man already knew autopilot would have trouble in this spot and yet chose to play on his phone. He is an idiot.

And to those who say 'they shouldn't call it autopilot if it can't do full self driving!' I suggest you go read up on autopilot. Direct from the definition: 'Autopilots do not replace human operators, but instead they assist them in controlling the vehicle.'


I spent some time looking through the NTSB docket today. I was a little surprised that Apple was willing to provide logs from a development model iPhone. Evidence seems to point to the victim playing a game while commuting -- in my opinion this is a major indictment of Tesla's approach to self-driving, where the user is lulled into a false sense of security.

Another docket was updated today, a Tesla crash in Florida eerily similar to the one a few years ago: semi turning across the highway, Tesla goes under the trailer. Not a great failure mode.


Obviously this is completely Tesla’s fault. But you gotta ask - if he knew it was malfunctioning there, why keep using it there?

Autopilot works great!

Until the moment it kills you. Then you become the statistic.

Better buy some life insurance with that expensive robotic car.


So autopilot was having problems at a particular place every time, and he was still using it. As much as Tesla is to blame, the driver is too. Why the fuck would you trust something to work if you yourself have experienced it not working multiple times?

Not just autopilot, but regular people too were confused by the road: "In the three years before the Tesla crash, the device was struck at least five times, including one crash that resulted in fatalities."

I thought at first this was some new crash, but it actually occurred in 2018.

I've been using the comma.ai EON for a while and am very happy with it. They've done the "what it knows, what it knows it doesn't know" part very well.

The system will beep hard at you when the vision system doesn’t have confidence. It only has confidence in well lit, well marked roads. Otherwise it yells at human to take over.

Take eyes off the road, it yells at you. The disengagement rate is pretty high but I like it. I know it does 405 and I5 highways well and that’s where I use it.

Easy to understand the limitations. They also make you explicitly say yes to all the limitations on first start.


Buried in the middle of the story:

"In the three years before the Tesla crash, the device was struck at least five times, including one crash that resulted in fatalities. A car struck it again on May 20, 2018, about two months after the Tesla crash, the NTSB said."

Well that story was a total waste of space and time and existence. The roadway is unsafe, and it has nothing to do with Tesla. As usual. I remember reading about a guy who wrecked his Tesla into one of those pieces of concrete Dallas loves to stick in lanes. And I thought, "I nearly did that once myself". You're driving along at 60mph, round a corner, out of nowhere there's a big piece of concrete angled a bit to pretend like it's a legit lane change, like "there, job done".


"Fail fast, fail often" doesn't work when your life is on the line. I'm sure Tesla did a good job testing, but this is one of those cases where the real world is simply hard and it's difficult to work out what happened. Mind you, the self-driving car is probably way better than a comparable human driver. Why do we not feel bad when a human accidentally kills himself from faulty programming too?

Well, being an engineer and understanding how fraught with problems AI is (well, it is not AI it is just ML), I would not be handing my life to the algorithm yet.

I understand these things are probably a necessary step, but maybe it shouldn't yet be possible for a car to drive in complex traffic without the steering and focus of the human driver.


> 50,000 cheeseburgers are eaten every three seconds in America.

Utter bullshit.


Maybe there is something wrong with the coffee I have had today but I utterly fail to see the novelty or notoriety in the title as a whole or any subset of it. Can someone maybe point it out to me?

People complain a lot. Engineers are people. People still do things they complain about. People die in crashes. Cars crash. Teslas are cars. What am I missing?


I mean like, he had previously complained about the car swerving towards a barrier on the 101. He later crashed his car into the same barrier while on Autopilot which took his life.

I think the intuition is that like, if a piece of technology repeatedly shows itself to drive your car into a concrete barrier, maybe don't play video games while passing that same concrete barrier.


... while using the same auto pilot that earlier steered your car towards certain doom

I don't get it! He complained that it was malfunctioning but he continued using it?

I can sense the fruition of a conspiracy theory here...

I'm not convinced we should entirely blame Tesla, given that humans have been crashing into the same barrier:

> In the three years before the Tesla crash, the device was struck at least five times, including one crash that resulted in fatalities. A car struck it again on May 20, 2018, about two months after the Tesla crash, the NTSB said.

The article also says the engineer had complained about his Tesla veering towards this particular barrier. I don't understand why he still relied on the autopilot while driving past it.


Sad story, and condolences to the man's family, but if I'd complained about such an issue before, I'd have been damned sure to be careful at that place again and/or would have lost any trust in the system until another iteration proved me otherwise. Yes, hindsight is 20/20 and we all make mistakes, but still.

I hope people aren't commonly doing what this poor guy did, turning autopilot on and taking their hands off the wheel at 71mph. At this speed, the stopping distance is greater than the max effective distance of some of the forward looking cameras, and a collision means almost certain death. I'm surprised people will trust any autopilot system at a potentially fatal speed.

I wonder if we get skewed opinions about autopiloted cars because every story about them blows up, while stories about people accidentally killing themselves in regular cars don't.

This is from 2018 and the driver didn't even have his hands on the wheel, though he knew there were problems at that spot. Very careless driving. Personally I wouldn't use a mobile game while I'm driving a vehicle, nevermind software in development.

Why don't we introduce some of the same requirements for drivers as we have for private pilots (think traditional biennial flight reviews)? Not only would we most likely reduce fatalities, accidents and injuries, but we'd probably also eliminate many unsafe drivers from the road while placing a forcing function on public transit to increase usage. We'd probably also see a knock-on effect to the training market and increase employment in that field. We might also see an increase in tax/fee income for government entities to help reduce reliance on gasoline taxes. Obviously I haven't done a rigorous analysis but it would seem like a win all around in my humble opinion.

Any politician/party who goes for such a policy would lose a lot of popularity.

A slow rollout could happen where this short renewal time only applies to new licence holders, but then it'd take 20 years to really see its effects.


Can't speak for the FAA but the CAA only requires me to sit with an instructor for one hour every two years to renew my SEP rating.

It's not onerous and I think you're right, a sit down with an instructor every two years to correct any minor bad habits before they get out of control is an excellent idea.


Complains about autopilot.

Plays video game in the car with hands off the wheel.

Well, okay.


Obviously Tesla is going to hide behind the consumer's decision here. Autopilot scared the shit out of this guy on multiple occasions, and he kept turning it on and trusting it with by far the most deadly thing he did every day. Odd choice when you put it that way, but one a lot of people make, I'm sure, maybe one that I would make.

I really hope the government isn't too aggressive in hindering companies from doing self-driving car R&D on public roads, but I would be fine with the government saying consumers shouldn't be used as guinea pigs for systems that are less safe than typical human drivers. Anecdotal evidence can't establish that self-driving cars are less safe than humans (even if it establishes that they make mistakes that humans wouldn't), so I would expect a numbers-driven standard, but self-driving technology shouldn't be unleashed on consumers until it achieves at least a human level of safety.


I use the radio in my car probably less than 1 percent of the time I'm driving. Most of the time I'm listening to streaming media, podcasts, SoundCloud, etc., so why do all the accident reports stress data streaming on the phone like that isn't pretty normal these days? It clearly doesn't mean texting and driving, although even texting can be done purely hands-on-wheel, eyes straight ahead, just using voice - it's like the press is still living in 2002, not 2020.

I don't trust my car. I roll down the windows if I leave the keys in it for any reason, so I don't get locked out when it decides to auto-lock itself while I leave it running or step out for whatever reason. I sure am not trusting it to drive itself. Maybe in 2150, when the technology is hammered out.

I have a Tesla and think Autopilot is neat. But I never rely on it, because it is blatantly not safe enough to be relied upon.

It looks like more non-Autopilot drivers have died at this location than Autopilot drivers.

Regular Tesla Autopilot driver here:

I just don't get this situation. With Autopilot, it takes a tiny amount of pressure on the steering wheel to take over. The pressure is so small that, when Autopilot jerks the wheel, you're more likely to turn it off.

So, assuming the driver was paying attention, with his hands on the wheel, this makes no sense. The only way the accident makes sense is if the driver fell asleep, wasn't paying attention, or accidentally turned off autopilot.

And falling asleep, or not paying attention, is a real risk in any car.

(Note: What I remember from older articles about this topic is that the car was nagging him for awhile to put his hands on the wheel.)


I had a scary experience in a Model S on auto-pilot going over a bridge where the car swerved left and almost hit the concrete barrier. Not entirely sure what happened but since then, I've been hesitant to use auto-pilot 100% of the time.

Why would you use it at all at that point? If this guy had taken his experience to heart the first time Autopilot almost crashed his car, he'd probably still be alive.

Yesterday here on HN there was a top thread about how an open dataset used for training self-driving cars is rife with missing labels and mislabeling. [1]

Quite a few commenters insisted the issue was a non-issue -- that all datasets are noisy and mislabeling occurs all the time, that nobody's building actual self-driving cars from it, that surely if an object isn't labeled in one frame it'll be labeled in the next.

Now obviously we don't know what the cause of this particular crash was.

But I will say that I found people's willingness to defend widespread sloppy labeling in training sets used for literal high-speed life and death situations rather shocking.

And that, hopefully, crashes like this serve to remind us of the far greater responsibility we have with regards to quality and accuracy when it comes to building ML models for self-driving cars, rather than when we're merely predicting how likely a credit card customer is to pay their next bill on time, or which ad is likely to be most profitable.

[1] https://news.ycombinator.com/item?id=22298882


The consensus was that the training set in question was not used for any real life situations.

The only player in autonomous vehicles that doesn't understand the gravity of the technology is Tesla/Musk.


Does anyone have a link to the actual NTSB documents linked in the article?

> https://www.ntsb.gov/news/press-releases/Pages/NR20200211.as...

Looks like the ntsb.gov site is currently down and they haven't been indexed on archive.org yet.


I was quite confused by this. He knew the car was taking a consistently erroneous action that was likely to have fatal consequences, but he continued to use autopilot on that same stretch anyway.

To be clear, I think Tesla has been cavalier and careless in their approach to autopilot.

But Walter Huang knew what he was doing and did it anyway. His death is largely by his own hand. I don't see this as different from people who knowingly take other careless risks and pay a price.


Think of the "dumb" folks who take a selfie on top of a skyscraper and then slip off. It's not until you are careening towards the abyss that the reality of the situation is fully learned. Up until then, only theoretical.

Reading this article I got a big flash when I read "slammed into a concrete barrier", because it nearly happened to me a few weeks ago and it shook me quite a bit to think of the implications. It wasn't a self-driving car though, it was just a bit of aquaplaning, but I had still let myself get distracted enough to drift too close to the concrete barrier and wasn't aware enough of my environment to notice the water accumulating on the side from the melting ice/snow.

That didn't make me think that cars are too complex and should no longer be driven; it made me think that I should be more careful in my driving. Yet here we are, arguing whether this is a signal that we shouldn't allow an autopilot feature.

Autopilot or not, YOU are still the driver. If there's a mistake, YOU should be responsible for it. As long as there's a steering wheel, and I believe there will be one for a pretty long time, you are still the one responsible for making sure these things don't happen. If you can't manage it, don't use autopilot, just like if I can't manage to drive, I won't use a car. Getting a driving license may seem like a formality, but it's not; it's a test to see whether you are fit to drive.


>Autopilot or not, YOU are still the driver. If there's a mistake, YOU should be responsible for it. As long as there's a steering wheel, and I believe there will be one for a pretty long time, you are still the one responsible for making sure these things don't happen

But that totally defeats the purpose of autopilot...


> But that totally defeats the purpose of autopilot...

Of a true autopilot, sure, but for now they are all pretty far from being one. For now its only purpose is to let you put less effort into driving.


How can Tesla be liable for this? You should not trust software to drive your car without human intervention. I don't understand the US culture where everyone is entitled to sue everyone.

Short answer is that the U.S. relies on litigation to fill the void left by a lack of regulation. Tesla wouldn't be liable for this in Europe because they're not even allowed to sell this technology there in the first place:

https://www.cnet.com/roadshow/news/tesla-model-s-model-x-aut...


So guy notes autopilot malfunctions in a specific area of road, uses it anyway, and dies. Am I missing something?

I can’t help but think that our culture-wide experiment in the normalization of deviance when it comes to speed limits (both how they are set and how they are observed) has something to do with this.

A crash at 55 mph would have 40% less energy than at 71 mph.
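
A quick back-of-the-envelope check of that figure (my own sketch in plain Python, not from the article), using the fact that kinetic energy scales with the square of speed:

    # KE is proportional to v^2, so compare a 55 mph impact with a 71 mph impact
    ratio = 55**2 / 71**2                                       # ~0.60 of the energy remains
    print(f"~{(1 - ratio) * 100:.0f}% less energy at 55 mph")   # prints "~40% less energy at 55 mph"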


The NTSB report information is public in the NTSB docket (https://www.ntsb.gov/investigations/Pages/HWY18FH011.aspx); he wasn't traveling at 71 miles per hour.

"At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected."

The autopilot accelerated into the barrier.


Could that be because it effectively registered a lane change by the car it was following and was accelerating to the requested speed?

Or are you suggesting that the Autopilot accelerated beyond the top speed set by the driver?


You are right: "traffic-aware cruise control speed was set to 75 mph". It was traveling slower due to traffic.

Tesla's autopilot has now actively hurtled their cars at full speed into concrete barriers on more than one occasion.

They really need to work on this, or they will ruin self-driving for everyone. This is basically a new class of collision; this is not the ordinary way people die on highways. If they cannot avoid actively smashing their customers at over 100 km/h into solid concrete barriers, on major roads near their company headquarters, then I don't know how they can make this feature so accessible to a broad customer base.


I am an expert in AI. Learned the symbolic kind before the AI winter, then worked on neural nets (stealthily) during the AI winter.

Autonomous driving cannot yet handle all the edge cases, and when it fails, people die. What's worse, when it fails it cannot explain why it failed, so we just throw more training data at it and hope for the best. That's not engineering; that's ML cargo-cult voodoo.

Modern ML is great for finding family members in your photo collection, but nobody dies when it labels your dog your grandmother.

Tesla makes great cars but I never use their autopilot. They should focus on what they do best and give up this obsession with autonomous driving.


Makes you wonder if the point of technological progress is to actually make human existence safer and easier or if it's just to push up a stock price. I'm still hopeful it's the former.

We'll hear the same canned response about how many lives have been saved, but just veering off into a concrete wall spontaneously? That's completely fucked.

The fact that millions of these vehicles are deployed with this software running should concern every other driver on the roadway. Remember when Tesla said 1 million robotaxis by 2020? o_O


This was mentioned in the article discussed here when the family first sued:

Tesla issued a statement placing the blame on Huang and denying moral or legal liability for the crash.

“According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location."

https://news.ycombinator.com/item?id=19802284


I've always thought that having a person take over from a computer when driving is a fundamentally flawed premise. People, by nature, are not going to devote enough attention to a task they're not actively involved in and they won't be able to take action fast enough to avoid an incident because of the time it takes to context switch from whatever they were doing to handling the situation that just came up.
