> Traffic Light and Stop Sign Control is designed to come to a complete stop at all stoplights, even when they are green, unless the driver overrides the system. We found several problems with this system, including the basic idea that it goes against normal driving practice for a car to start slowing to a stop for a green light. At times, it also drove through stop signs, slammed on the brakes for yield signs even when the merge was clear, and stopped at every exit while going around a traffic circle.
(emphasis mine)
Is that really how this feature is intended to work?
It will notify you that it is approaching a traffic control, and if you do not confirm that you want to proceed through, it will slow to a stop. Presumably the idea is that this is safer than accidentally blowing through an intersection, which seems reasonable.
You do not have to confirm if there is a car in front of you or it sees other cars proceeding through the intersection.
I think it (stopping at traffic controls) works quite well for a feature that rolled out only a few months ago. The article is generally accurate, but almost all these features are still labeled "beta" and work pretty well, though certainly not perfectly. The idea is that they will improve over time and eventually get there; the article kind of assumes they are all supposed to be flawless already, which I don't think is particularly accurate.
If you interpret "Full Self Driving" as "everything works flawlessly," it's going to be a disappointment. That's fair enough for a naive reading of the phrase, and you can argue that it oversells the potential of the feature, but perfect driving at this time is not what is actually claimed.
I think on balance all these features are pretty positive and designed smartly so that the driver can understand what is happening and correct the car's actions, and it does seem like they are improving over time.
One doesn't interpret "self-driving". The definition of self driving is something that drives itself. Period. Adding "full" in front of it makes it emphatic. It's the same as saying something is "free" or "completely free".
There's no nuance here. He's lying in his marketing. Even a Musk fanboy can't argue that.
I'm not saying it isn't good, it just isn't what he claims it is.
I just am not that interested in the semantics of the name. I understand your point, and that’s fine, but I didn’t name it.
The idea is that you buy an option for the car that they claim will allow the car, not now but in the future, to qualify as “Full Self Driving” by some definition. It is not claimed that it does that now; it’s fine to be skeptical that it will ever happen to your satisfaction, but the feature has always been something that will be delivered over time.
I don't consider driving through stop lights a beta feature. Even if it only happens one time out of a thousand, it's still a threat to life, and a possible class action suit.
There isn't room for a threat-to-life feature to improve incrementally, and it's very curious to see anyone suggest that it's somehow tolerable to risk the lives of customers with not-quite-there-yet-but-we'll-get-back-to-you-soon features.
That's pretty much why they require the driver to be paying attention with their hands on the wheel at all times. Any user who treats it otherwise is asking for trouble.
Yeah, like any car, if you do not operate it properly it will fail to handle an intersection.
I don’t think I buy that a driver assistance system that does not always intervene is dangerous, when the expectation, design, and legal agreement is that you must drive the car responsibly. I’m not sure it’s conceptually different, from a safety perspective, than something like cruise control.
As far as I can tell it's the same. The only exception to that is the cruise control (TACC), which is not in beta, but you still need to do all the steering for that.
I don't think it's unreasonable to read it that way, but for Tesla it's the name of a particular package that is meant to allow the car to drive itself _at some point_ via software updates and they don't claim it does so currently.
It's legal and commonplace to pay someone to build a house for you, and even to pay them before the house has been finished.
So the question for Tesla is, do they make it clear that what they're selling is a promise to build a self-driving system for the car later, and not a self-driving car now?
I think there's a pretty good argument that some of their advertising is misleading. But I don't think it's as ridiculous as some people here make it seem. I just checked, and they make it pretty clear before purchasing:
> The currently enabled features require active driver supervision and do not make the vehicle autonomous.
I would find it kind of satisfying if Waymo solidly won the self-driving race. It would be a triumph of solid engineering over unreliable "good-enough" solutions.
Go can be simulated and self-trained. Self-driving cars are robots interacting in the real world with multiple stochastic agents. Building real-time hardware systems that are reliable and commercially viable will also be extremely difficult.
You need a massive amount of data to train on. How do you get that? It's an engineering or a business problem, depending on how you look at it - how do you get a ton of people driving for you?
Tesla has the better business model here. That leads to more data, which is the most important factor in training.
Now, Google is orders of magnitude bigger and had a huge head start; perhaps they brute-force their way to the lead. But being more conservative != better engineering. You could philosophically disagree with many things Tesla is doing, but in terms of solving self-driving cars I think they have the better strategy, one that gives their engineers the data they need.
Tesla is out in the real world recording with every car they sell, across vastly more dynamic scenarios and terrain. That’s smart engineering.
Oh boy, a real engineer is here to tell the immature ones what’s what.
First, you made a huge assumption. I never said I approve of how they are doing it. Engineering at the highest level encompasses morality, obviously.
But you wooshed real hard on the context of this thread in your rush to get that sweet feeling of moral superiority.
I’ll rewind it for you.
I was replying to someone claiming they hope Google delivers self-driving first because they see them as executing better at the engineering portion of doing so.
Do you see now? If they had said they see them as engineering more ethically and so they hope they win, fine. But they didn’t. Read it again.
The context is: is Google’s engineering strategy going to deliver self-driving first?
I replied to that. I even called out that you can totally disagree on moral grounds, and I do disagree with much of what they do, so it’s funny to call me out on something I was leaving purposely open as a further topic to discuss.
But the context was who would deliver self-driving faster, due strictly to their engineering strategy, and I made the case that real-world data trumps artificial data. And it does.
When a civil engineer designs a structure that fails and kills people, not only are they held liable for the deaths, they stand to lose their license. In civil engineering, when a design kills, there is a culture of ensuring it never happens again.
Who at Tesla was held liable for designing a system that failed and resulted in death? Who at Tesla lost their licenses in the wake of deaths caused by their self-driving systems? Where is the culture of ensuring that their cars don't kill anyone again?
These are great questions and I’m glad you’re bringing ethics in. But the parent poster's context was that the “solid engineering” by Google === faster delivery. I think that’s backwards. If you are an advocate of being more conservative with engineering to be safer, then you can’t have your cake and eat it too; you should hope it goes slowly. That’s literally the whole point.
I’ll be the first to criticize Tesla for the shoddy, dangerously worded rollout of Autopilot. Totally. Has it saved more lives than lost? Who knows.
But to argue that Google's more conservative approach should somehow deliver FSD faster is just a contradiction. Perhaps they brute-force it like I said already. But the fast rollout by Tesla == more dangerous == more data. If you’re talking engineering delivery time, then yeah, Tesla is taking a calculated risk to get much more data much faster. They should be able to deliver faster, engineering-wise.
Self-driving can and likely will eventually save lives. The medical world deals with the same basic issue constantly, and neither reckless behavior nor extreme caution is the answer. At best you can model how much a reduction in the 1.35 million people who die in road accidents worldwide every year (about 3,700 deaths a day) offsets moving more quickly.
If we are talking about an eventual 10% reduction, waiting just one year costs roughly 135,000 lives. On the other hand, if mistakes slow long-term adoption, that’s also harmful.
PS: Adoption curves etc. are also important, but that’s what I mean by mistakes slowing adoption.
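Spelling out the arithmetic as a tiny script, for what it's worth; the reduction and the delay are hypothetical inputs from the comment above, not measured figures:

```python
# Back-of-envelope estimate; the reduction and delay are illustrative assumptions.
annual_road_deaths = 1_350_000   # oft-cited ~1.35M road deaths worldwide per year
assumed_reduction = 0.10         # hypothetical eventual reduction from self-driving
delay_years = 1                  # hypothetical delay before that reduction arrives

lives_lost_to_delay = annual_road_deaths * assumed_reduction * delay_years
print(f"~{lives_lost_to_delay:,.0f} lives lost per {delay_years} year of delay")
# -> ~135,000 lives lost per 1 year of delay
```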
> Self driving can and likely will eventually save lives.
And yet there is no evidence of these assertions other than wishful thinking.
There is no evidence that self-driving systems are safer than humans, so there is no reason to treat them as if they are. In fact, compared to other luxury vehicles, Tesla vehicles are involved in more accidents.
The medical world is extremely conservative. The FDA has incredibly high standards to meet, thankfully, and demands ample scientific evidence for the medicines and treatments it approves.
"Move fast and break things" doesn't exist in the medical world, but it does exist outside of it in the form of unregulated supplements and illegal procedures and distribution of medication. People are routinely made into victims of this market just so someone can make a quick buck.
If we knew the results it wouldn’t be research. Medical research is dealing with potential future solutions to known problems. The risk is from a drug that’s either useless or actively harmful. Saying we don’t know if self driving cars are eventually going to be better is therefore no different than considering a new vaccine.
Anyway, move fast and break things is ingrained into the core of the medical profession when there is an unmet medical need.
The FDA for example has: Accelerated Approval, Fast track designation, Breakthrough therapy designation, and Priority Review all designed to speed things up when the benefits are significant.
PS: I think we can agree that in terms of car safety there is clearly a large unmet need. That’s not to say self driving is the solution, just that it’s a serious contender.
If you want to make the case that being less conservative will be better in the long run, you have to make the case that you're definitely on the right track, and doing it in a riskier way will speed up the eventual solution. You might accidentally go down the wrong path full steam and cost more lives, or just plateau in a dangerous way.
If I offer you an untested unreliable cancer cure, on the grounds that moving slowly kills more in the long run, I'm likely to make things worse by putting out a non-cure. We could do some really fucked up human experimentation to save more lives in the long run, but that's not the ethics we've collectively agreed on.
We do some rather twisted things in medical research. Handing out placebos is a critical part of research, but consider what that means for a serious medical condition.
Informed consent is a really big deal, but it is not a blanket exception for anything. Compensated participation, for example, is tricky. Those are the kinds of issues that make medical ethics a complex subject, and there are often difficult choices to be made.
I disagree; federated learning is a very valid approach. The cars have a ton of computational power, especially with HW3. (Although I think they don't have high-precision floating point - the chips are for inference.)
Federated learning is a lot like distributed training of a neural network.
The trouble with distributed training is that it's not as fast to converge as simply running more training steps. It is basically like increasing the batch size.
Also, it sounds like a general hassle for Tesla to use federated learning. I believe they need to be carefully auditing and labelling their training data for almost all their tasks. Perhaps some tasks, like depth estimation, don't require labelled data.
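To make the "basically like increasing the batch size" point concrete, here's a minimal toy sketch of gradient-averaging federated training on a made-up linear model. Real federated averaging usually runs several local steps before averaging weights, and none of this reflects Tesla's actual pipeline; it's just the textbook idea that averaging per-client gradients amounts to one big-batch step:

```python
import numpy as np

# Toy federated-SGD sketch: each "car" computes a gradient on its own private
# batch; only the averaged gradient reaches the server. Mathematically this is
# a single large-batch step per round. All data here is synthetic.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_gradient(w, n_samples=64):
    """Gradient of squared-error loss on one client's private batch."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return X.T @ (X @ w - y) / n_samples

w = np.zeros(2)
lr = 0.1
for _ in range(200):
    client_grads = [local_gradient(w) for _ in range(10)]  # 10 "cars" per round
    w -= lr * np.mean(client_grads, axis=0)                # server averages and steps

print(w)  # converges toward true_w, just as a single big-batch run would
```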
It's not live streaming all the data. They have an independently running AI that looks for events/problems and streams problematic areas. The cars do this by default.
I don't think it is completely a foregone conclusion that self-driving is even that much of an ML problem.
It might be possible to handle steering, navigation, obstacle and accident avoidance purely with non-ML approaches. Then ML could be left to less safety critical features such as predicting the types of obstacles surrounding it, and their likely next moves. LIDAR (gasp!) might even help make the problem much easier.
So no, I don't think that just saying you have more data is the end of the story. It's certainly AN approach, and maybe a good one, but I don't think you can conclude their engineering is better because of it.
As ML has demonstrated, more data doesn't mean better data.
Tesla wins at having the most data about driving in the Bay Area. And not so much for driving anywhere else, or in bad weather, or on bad roads, or on highways where trucks may cross both lanes of traffic.
Oh, even the parking lot plus rain is one of those problems where the issue is not a lack of computational strength; it's the complete lack of viable ideas. There's a reason they test-drive these things in Phoenix.
A crowded parking lot at a mall can be challenging for a beginner human driver; it's an absolute AI nightmare, since cars and pedestrians can move totally unexpectedly at any place and time. And then {rain,fog,snow} messes with LIDAR, and you have a half-blind (at best) system trying to navigate an extremely challenging environment.
It will only work when all cars have a sensor set and can communicate effectively to cover the entire area. I don't see how it can possibly work if you throw human drivers into the mix.
Do you mean long range EV wishful thinking, improved battery technology wishful thinking, market-changing utility scale battery wishful thinking, Supercharger network wishful thinking or some other kind of wishful thinking?
So you’re saying touching a Supercharger in 110F temps tomorrow and discharging it into a lithium battery contained in a hot car that’s been driving through such temps for hours is totally safe!
This is so silly! It’s not even clear what this is an opinion about! How would you lose the race with solid engineering? How would you win without it?
Shoddy engineering wins all the time! Ever heard of DOS, Windows, or Android (in the early days)?
Tesla could win, with shoddy engineering, if they can convince the public and government (using the classic reality distortion field :) ) their self driving tech is ready for prime time (when, for the sake of argument, presume it's not ready).
They could then use the investment dollars generated to take it to the next level and perhaps make it solid over time -- but it could take a long time.
In the meantime, the public was effectively deceived as to the true safety level of the product. I would prefer if the opposite happened instead.
You could have some kind of independent testing for this. Like there is for fuel consumption.
It would be really easy to game though. And the surface area would be large.
This is nonsense hyperbole. Tesla's stuff has very good engineering. Don't criticize what you don't understand. Further, engineering is always a tradeoff around "good-enough", ask any actual engineer. Perfect is an impossible goal for ANY engineering task.
The everyone you hear from is very different from the everyone I hear from. I dislike Tesla's approach, but you have to admit that a ton of people are on board their hype train.
I've seen lots of people excited about Tesla stock. Some people who were already fans of their cars seemed to buy into the hype, but I haven't heard any non-fanboy who has paid any attention to self-driving say they expected this to work out.
The problem is that most people are not paying huge attention to self-driving and the marketing will end up causing some people to misjudge the capabilities and get them killed. At which point fanboys will move the goalposts to "it kills less people than humans operating cars", which I don't think is necessarily true depending on how you want to measure that.
A human could drive the car with just the sensor input, so in a very real way this is not correct. I agree that Tesla still has a long way to go, though.
With just the low-dynamic-range, fixed-framerate monocular camera input? I’m not sure I could. Also, everyone knows computers still aren’t as good at vision recognition as humans are.
The difference is that humans can do true inference, whereas that's not really attainable even with the best-in-class ML models. So until we make breakthroughs in AI reasoning, we still need aids like LIDAR in order to work around current limitations.
If I see a motorcycle go behind a billboard and not come out, I can reason that there is a motorcycle behind a billboard. I assume that's what they mean.
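A toy sketch of that object-permanence idea, purely illustrative and not how any production perception stack is actually built: keep an occluded track alive and coast it forward instead of concluding the object vanished.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float                # position along the road, meters
    vx: float               # estimated velocity, m/s
    missed_frames: int = 0  # consecutive frames without a detection

def update_tracks(tracks, detections, dt=0.1, max_missed=30):
    """detections maps track id -> measured position; absence means occluded."""
    for tid, trk in tracks.items():
        if tid in detections:
            measured = detections[tid]
            trk.vx = (measured - trk.x) / dt  # crude velocity update
            trk.x = measured
            trk.missed_frames = 0
        else:
            # Occluded (e.g. behind a billboard): predict forward, don't forget.
            trk.x += trk.vx * dt
            trk.missed_frames += 1
    # Only drop tracks that have been gone far longer than a brief occlusion.
    return {tid: t for tid, t in tracks.items() if t.missed_frames <= max_missed}

tracks = {"motorcycle": Track(x=0.0, vx=15.0)}
for frame in range(20):
    occluded = 5 <= frame < 15
    seen = {} if occluded else {"motorcycle": tracks["motorcycle"].x + 1.5}
    tracks = update_tracks(tracks, seen)

print(tracks["motorcycle"])  # still tracked after passing behind the "billboard"
```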
Unfortunately, many people treat Tesla Autopilot and Waymo's Driver as though they were similar, when Waymo is probably 10x more capable, reliable, and robust. I see this as similar to how people equate Alexa's and Siri's simple voice commands to Google Assistant's deep knowledge.
I think they would be violating the terms of the sale if they don't fix the issues present and roll out all the features described in FSD. Granted, that's no guarantee they will either.
So far they have made significant fixes and rolled out new features. As long as they keep that up they'll eventually deliver something reliable with all the features listed.
As to when, it'll probably be next season. No clue on which year though... :P
Tesla is never going to have a solid self-driving system without LiDAR. Radar’s range is limited, and cameras are simply not suitable for reliable depth perception and bounding-box detection.
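One way to get a feel for the depth-from-cameras concern, using a classical stereo model as a stand-in (the rig numbers below are invented, and this is not a claim about Tesla's actual depth pipeline): a fixed sub-pixel matching error turns into a depth error that grows roughly with the square of the distance.

```python
# Rough illustration: in a simple stereo rig, depth Z = f * B / disparity, so a
# fixed disparity error produces a depth error growing roughly with Z^2.
focal_px = 1000.0        # focal length, pixels (illustrative)
baseline_m = 0.3         # distance between the two cameras, meters (illustrative)
disparity_err_px = 0.5   # plausible stereo matching error, pixels

def depth_from_disparity(d_px):
    return focal_px * baseline_m / d_px

for depth_m in (10, 50, 100):
    disparity = focal_px * baseline_m / depth_m
    err = depth_from_disparity(disparity - disparity_err_px) - depth_m
    print(f"{depth_m:3d} m away: a {disparity_err_px}px error shifts depth by ~{err:.1f} m")
# -> ~0.2 m at 10 m, ~4.5 m at 50 m, ~20 m at 100 m
```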
Humans don't have LIDAR; they have only two eyes and not even radar. Yet they're able to drive pretty well, because they have great software to power that limited hardware.
Yes, but Tesla has a bunch of cameras, not two human eyes.
Edit: I wonder whether a human could learn to drive a Tesla remotely better than another human, given the radar and multiple camera inputs represented in some useful way.
Remote operation is not yet a viable option for safely operating vehicles on most public roads. Our cellular data networks are simply too unreliable. What happens when a construction crew accidentally cuts the backhaul fiber for the closest base station? Or when the car is driving through a tunnel?
Humans can do a lot of other things, such as hear car horns, reason about driver behavior, interpret road signs, and understand the causality of driving off a cliff.
I suspect some aspects of driving a car consumes a majority of a person's brain power, even if it doesn't feel like it.
Ever get into a situation where there's on-ramps and off-ramps on both sides, you've never been on that stretch of road before and there's a lot of traffic? Driver conversation stops. Driver paying attention to the audiobook or podcast playing stops. It takes all your attention.
If that's the case, then maybe we won't get true self-driving until these systems have the processing power of a human brain...
Not processing power, but a better world model of physics and reasoning. Humans can understand other drivers' intentions and know when laws can be bent, based on the context, to avoid dangerous situations.
> Ever get into a situation where there's on-ramps and off-ramps on both sides, you've never been on that stretch of road before and there's a lot of traffic?
One advantage cars with large sensor suites have over humans is that they can see and process information from all around them. In your specific example, the car can watch both the on- and off-ramps at the same time, isn't fazed by the "newness" of the road since other cars have navigated this section hundreds of times before it has, and can keep tabs on cars obscured from line of sight thanks to its ability to bounce signals off of other vehicles.
OK, here's a better example: skillful drivers on a highway can predict when another driver might change lanes just from the "body language" of that car. It's pretty subtle, and I don't think all human drivers can recognize it, much less state-of-the-art AI.
Along the same vein, most drivers will become extra vigilant of another car that's driving in a way that's the least bit erratic, giving that car a wide berth.
ENIAC exceeded the processing power of a human brain. Only for solving systems of equations, a task which humans find very difficult, and for which computers are naturally suited. But we would never have built it if this weren't the case.
Chess fell to AI decades before Go did, because computers find Go more difficult, due to the much greater branching factor. Humans are more naturally suited to Go, we can "cheat" by training our built-in shape and pattern recognition subsystems.
It's true that driving a car recruits many adaptive subsystems of a human brain. It's further along the spectrum of 'easy for humans, hard for computers'. But what I want to illustrate is that the 'processing power of a human brain' isn't very well-defined, and isn't the relevant factor for why self-driving has proven remarkably difficult.
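For a sense of scale on the branching-factor point, a quick back-of-envelope comparison using commonly cited rough figures (around 35 moves per chess position over ~80 plies versus around 250 per Go position over ~150 plies); these are textbook approximations, not exact counts:

```python
import math

# Very rough game-tree sizes from typical branching factor and game length.
chess_exp = 80 * math.log10(35)    # ~80-ply game, ~35 legal moves per position
go_exp = 150 * math.log10(250)     # ~150-ply game, ~250 legal moves per position

print(f"chess tree ~10^{chess_exp:.0f}, Go tree ~10^{go_exp:.0f}")
# -> chess tree ~10^124, Go tree ~10^360 (orders of magnitude only)
```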
It's not clear to me that full self-driving, at the level where there's no steering wheel in the car, can be achieved without general AI. I don't even think it's a useful goal, since the really difficult tasks are things like "park your car in this particular place in a field when arriving at a wedding", rather than situations a human would have to suddenly take over for.
But I have to employ my full concentration to add two five-digit numbers in my head, and while I once knew the trick for multiplication, today I'm quite incapable. Yet there's nothing more trivial for a computer.
> I suspect some aspects of driving a car consumes a majority of a person's brain power, even if it doesn't feel like it.
A decade or so ago I decided to get my motorcycle license, and one of the changes to the testing regime that had come about since I got my car license was that you are required to verbalise any risks you are seeing: drive along saying "oncoming traffic, pedestrian approaching the edge of the footpath, driveway" and so on. It's an interesting thing to do, because it's remarkable to realise just how much background processing you are doing, and how many subtle hints you might be picking up - the direction a person is looking and other body language influence your assessment of whether someone is likely to wait at the edge of an intersection or just step out; if the lights on a car blink as a person walks towards it, they're probably about to get into it rather than cross the road, and so on.
This seems right about where I expected their real-world capabilities to be today based on their sensor setup and the state of ML for self-driving. i.e. not very good, and nowhere near the Elon hype train or timetable. The real question is how much of this can actually be fixed by software updates alone. Personally, I don't think they can even get to 99% reliability on these use cases, let alone the number of 9s required to be at least as good as the human drivers it's supposed to replace. My money is still solidly on Waymo being the frontrunner for years.
Actually, 3D video and pose detection improved significantly this year in research, but it takes a few years to put that into production, and it needs to mature a bit.
I'm sure both Tesla and Waymo are looking at those research advancements.
Elon Musk is just blatantly falsely advertising FSD by this point. He's trying to sell a technology that a) doesn't exist, and b) has no guarantees of being ready anytime soon. I wouldn't be surprised if people started demanding refunds pretty soon.
I don't think anyone takes Elon's word as final. Even if they did, when you buy a Tesla it has the FSD computer disclaimer on the purchase page, and enabling any of the features in the UI show as "beta" and have giant warnings[0] before you can enable them. Given the beta wording, plus the fact that you already are getting insane value with the promise of continuous over-the-air updates for years (2012 cars still get updates, although the exciting changes in recent updates usually deal with the new hardware old cars don't have), I don't think anyone is going to start asking for refunds.
All of the current-gen assisted/self-driving tech seems like it's operating in a legal grey area, like Uber. Every car manufacturer assumes people are still behind the wheel, and the government is (naively?) going along with it. "By operating this vehicle you agree to..." seems to absolve them of any responsibility, including around quality of software, and it honestly scares the hell out of me.
I don't know what the fix is for this; the genie's out of the bottle here. We won't have reliable self-driving cars for years (decades?), and until then we're stuck in this horrible wild west where an off-by-one error can cause a pileup on the highway.
Let's stop the assisted/self-driving stuff until we have a regulatory framework that can prove the tech works in various conditions, much like seatbelts and collision testing.
We need a digital infrastructure plan: a nationwide upgrade to create a universal platform for autonomous vehicles, including a common protocol language, smart signs, smart traffic lights, location beacons, improved highway infrastructure, updated laws, and more.
We have these accidents and pileups constantly without involvement of assisted driving technology. By your logic we should disallow anything that could ever hurt anybody.
Exactly, why add another way for humans to be negligent? I would absolutely love to see mass transit replace self driving cars as the panacea to our transportation issues.
Exactly. I don't think people realize just how terrible cars are at scaling, whether they are human-driven or completely controlled centrally as a fleet. Even parking a single car requires as much square footage as the typical office-space allotment per worker. And if you don't have onsite parking, you double the miles travelled as cars leave the central business district for temporary storage far away, only to come back to pick people up for their commutes back.
The past century of car-based thinking has damaged the US's planning brains. We need to go back to first principles and think about serving the needs and wants of humans, rather than serving the needs of a hugely space inefficient and health-damaging transit mode.
Mass transit doesn't take you to every location and cannot. Cars must exist in some form or another for access to many locations. Mass transit has not replaced all cars in any (non-city-state) country on this planet.
Cars and transit are not mutually exclusive. The more people that use mass transit, the better the experience for people that still use cars. However, we are hampered from enabling better mass transit in the US by those who refuse to let the two coexist and want to force everyone into cars.
There are indeed lots of ways of getting around. Walk, tram, bike, e-scooter, escalator, funicular, metro, schwebebahn, car, train, taxi, rickshaw, boat, bus. The best cities utilize a mix of those that work for them. Rental solutions have exploded in recent years.
You can't pick just one and say it's the best for everything.
"genies out of the bottle"... "wild west where an off by one error can cause a pileup on the highway"... really? More likely we are at the beginning of a long journey towards increasingly autonomous vehicles and we have had very few accidents and fatalities with what is currently on the road.
To be honest, I just don't understand how what is shown could be legal -- having beta software on something that could easily kill, making drivers, passengers, and other road users lab rats and beta-testers.
I do admire what Musk did with SpaceX, but the 'self-driving' aspect of Tesla is just disgusting.
I don't like the way Autopilot and FSD are advertised, and they're very clear that when you buy FSD it's a promise and not a product. What you get are incremental updates and new features hopefully leading to actual FSD.
You are not a beta tester testing out pre-release software, the features you received are considered complete.
The stationary object detection problem that has caused fatalities is industry wide.
The combination of failures that led to Walter Huang's death, however, isn't, in my opinion.
I think GM's monitoring system is the right solution until FSD is solved; Tesla's steering-wheel torque monitoring isn't enough.
Who would've thought that relying on a statistical model, whose many thousands of parameters are not understood, would lead to such an erratic end product?
What does HN think of the Ghost self driving product? Is it real or a scam? They claim to be able to add limited level 3 autonomous operation to most late model cars.
Highway seems simpler, but it isn't. It's just different. Predicting far enough ahead with limited sensors at high speeds is as hard as predicting a large number of slower-moving participants in the city.
Personally, I think level 3 should be banned (I can't find it, but I think Volvo blogged about this years ago), because the system simply cannot detect a few seconds in advance that it is about to fail and give the driver enough time to react.
This well-known Tesla fatality [0] illustrates my point. Unfortunately. The model built up by the system was free (enough) of contradictions, and the reaction of speeding up right into that barrier was a totally correct one - from the system's "point of view". Neither sensor-wise nor interpretation-wise are we close enough to humans with those systems unless the operating domain is extremely limited. All general highway systems are operating in too open an environment to be safe (that is, we won't see real L3 or higher systems there in the near future). Sorry, Ghost. Ask other people who have been there.
By the way, Google's tiny, slow cars driving only in the Valley were exactly that strategy of limiting the operational domain. Once they opened up the domain, they started to struggle again with the same problems as everyone else (admittedly at another level of mastery, though)...
Would keeping Autopilot 'dumb' be a good way to keep users engaged and continue training the network, while Tesla keeps and improves a far more capable version of Autopilot internally? I think Autopilot is a huge legal burden for Tesla, and I would imagine being careful is a high priority. Once Tesla starts marketing full self-driving capability, it will open a floodgate of legal troubles that they'll need to deal with once the system is fully capable and can no longer hide behind the 'beta' word. They've put out videos showing Autopilot doing complex things, and I think that was a pre-release version of Autopilot they were using. IDK though, just wondering.