
> I don't understand how something this broken is allowed to operate on public roads.

Also out in a rural area. While I was picking up lunch a few minutes ago, a young man flipped his old pickup truck onto its side in an intersection, having hit the median for some reason. I too don't understand how humans are allowed to operate on public roads. Most of them are terrible at it. About 35k people a year die in motor vehicle incidents [1], and millions more are injured [2]. Total deaths while Tesla Autopilot was active: 7 [3].

I believe the argument is that the software will improve until it is eventually as good as or better than humans, and I have a hard time not believing that, not because the software is good but because we are very bad in aggregate.

[1] https://www.iihs.org/topics/fatality-statistics/detail/state...

[2] https://www.cdc.gov/winnablebattles/report/motor.html

[3] https://www.tesladeaths.com/




Could have been an equipment failure on the truck - a tie rod end failing, or some such, will create some interesting behaviors.

> I believe the argument is that the software will improve until it is eventually as good as or better than humans, and I have a hard time not believing that.

I find it easy to believe that software won't manage to deal well with all the weird things reality can throw at a car, because we humans suck at software in the general case. It's just that in most environments those failures aren't a big deal: just retry the API call.
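
To make that contrast concrete, here's a minimal sketch (Python, hypothetical function names) of the failure handling that's perfectly adequate for a typical web service. Nothing about a vehicle at speed gets to "wait a moment and try again."

    import random
    import time

    def call_with_retry(request_fn, max_attempts=5):
        """Retry a flaky network call with exponential backoff plus jitter.

        Fine for a web service: a transient failure costs a few hundred
        milliseconds of latency. There is no equivalent "sleep and retry"
        for a perception failure in a moving car.
        """
        for attempt in range(max_attempts):
            try:
                return request_fn()
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # give up after the last attempt
                # Backoff: 0.1s, 0.2s, 0.4s, ... plus a little jitter.
                time.sleep(0.1 * 2 ** attempt + random.uniform(0, 0.05))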

Humans can write very good software. The Space Shuttle engineering group wrote some damned fine software. Their process looked literally nothing like the #YOLO coding that makes up most of Silicon Valley, and they dealt with a far, far more constrained environment than a typical public road.

Self-driving cars are simply the most visible display of the standard SV arrogance - that humans are nothing but a couple of crappy cameras and some neural network mush, and besides, we know code - how hard can it be? That approach to solving reality fails quite regularly.


Is flying a helicopter on Mars arrogance? Is launching reusable space launch vehicles that boost back to the landing site arrogance? I don't believe so. These are engineering challenges to be surmounted, just as safe robotic vehicles are a challenge, and it's reasonable (I'd argue) for us as a species to drive towards solutions (no pun intended).

Silicon Valley isn't going to suddenly become less arrogant, but that doesn't mean the problems they attempt to solve don't need solving. Incentives are important to coax innovation in a way that balances life safety with progress, and I concede the incentives in place likely need improvement.


Aviation and space are a very, very different problem space from surface street navigation, because they rely, almost entirely, on "nothing else being there."

We've had good automation in aviation for decades. It doesn't handle anything else in the way very well at all, and while there's some traffic conflict avoidance stuff, it's a violent "Oh crap!" style response, nothing you actually want to get anywhere close to triggering. Automated approaches down to minimums require good equipment at the airport as well as on the airframe, and if there's a truck on the runway, well. Shouldn't have been there.

Same thing for landing boosters. There's nothing it really has to look at and understand that it can't easily get from some basic sensor data - speed, attitude, location, etc. It's an interesting challenge, certainly, but it's of a very different form from understanding all the weird stuff that happens in a complex, messy, uncontrolled 3D ground environment.

Self driving on a closed track is a perfectly well solved problem. Self driving in an open world is clearly very far from solved.


I don't disagree with you. I believe we're arguing between "can't be solved" versus "it's going to take a long time to solve." I'm stating I fall in the latter camp, and advocating for stronger regulation and investment in the space.

Show me self-driving on a closed track with snow or rain, please.


It's an interesting video, but it's an edited video.

The company website mentions the system was able to negotiate the challenges of the driving conditions, which looks like a euphemism...

Very few details, and no scientific publications that I could find on their Asimov system in a quick search; I'm not sure if this is a technological breakthrough or a fine-tuning of existing processes and methods.

Because if it's a fine-tuning of current algorithms, I'm not sure how long they can push this. They are relying on lidar (and other sensors), but even as of last year it seems most teams had already realized lidar would not be the solution for snow, and were pushing ground-penetrating radar instead (sounds expensive...).

"Autonomous Cars Struggle in Snow, but MIT Has a Solution for That" https://www.caranddriver.com/news/a31098296/autonomous-cars-...

Even Tesla has already realized it's not just a sensor problem: solving self-driving needs a higher-level algorithm that can put the different sensors in the context of situational awareness. Note that in no other way would I consider Tesla an example to follow ;-) And I don't think they are any closer to getting a working system. But some of their statements show at least a second-level understanding of what is required. Sensors are a means to it. It's about situational awareness, but also inference.
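
To make "putting the different sensors in context" concrete at the lowest possible level, here's a toy inverse-variance fusion of two noisy range readings (Python, made-up numbers; real stacks layer tracking, mapping, and prediction on top of this). The snow case shows why it's a higher-level problem: a naive fusion quietly degrades to radar-only, and the hard part is knowing that this is happening and what it means.

    def fuse_ranges(lidar_m, lidar_var, radar_m, radar_var):
        """Fuse two noisy range estimates by inverse-variance weighting.

        The sensor with the lower variance dominates the fused estimate.
        """
        w_lidar = 1.0 / lidar_var
        w_radar = 1.0 / radar_var
        fused = (w_lidar * lidar_m + w_radar * radar_m) / (w_lidar + w_radar)
        fused_var = 1.0 / (w_lidar + w_radar)
        return fused, fused_var

    # Clear weather: both sensors trusted, estimates agree.
    print(fuse_ranges(50.0, 0.01, 50.4, 0.25))   # ~50.0 m
    # Heavy snow: lidar variance explodes, radar quietly takes over.
    print(fuse_ranges(35.0, 4.0, 50.4, 0.25))    # ~49.5 m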

"LIDAR is a fool’s errand… and anyone relying on LIDAR is doomed. — Elon Musk"

https://youtu.be/HM23sjhtk4Q


Worth remembering that Tesla posted this “self driving” video in 2016. Editing can do amazing things. https://www.tesla.com/autopilot

I'm actually shocked it's on the website today; the first frame says the driver is only there for legal purposes.


> There's nothing it really has to look at and understand that it can't easily get from some basic sensor data - speed, attitude, location, etc.

Self landing rockets are simple, it's not rocket science, duh! Everyone and my grandma has one.


In terms of "understanding the environment around you such that you can land a booster stage," it's not a particularly hard problem. The challenges are about designing a booster stage that can handle the flipping and reentry, then figuring out the details of how to stick the landing with several times your empty weight as your minimum thrust.
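
For intuition, here's a toy 1-D "hoverslam" timing calculation (Python; constant thrust and mass, no aerodynamics, made-up numbers). Because minimum thrust exceeds the vehicle's weight, it can't hover: the burn has to start at just the right altitude, but everything in the equation comes from exactly the basic sensor data mentioned above.

    G = 9.81  # m/s^2

    def ignition_altitude(v_fall, v_touchdown, twr):
        """Altitude at which a constant-thrust landing burn must begin.

        Net deceleration is a = g * (twr - 1); from v^2 = v0^2 - 2*a*h,
        the burn must start at h = (v_fall^2 - v_touchdown^2) / (2*a).
        Start lower and you hit the pad hot; start higher and you stop
        mid-air and fall back.
        """
        a = G * (twr - 1.0)
        return (v_fall ** 2 - v_touchdown ** 2) / (2.0 * a)

    # Falling at 250 m/s, minimum thrust 3x vehicle weight,
    # aiming for a 2 m/s touchdown:
    print(f"{ignition_altitude(250.0, 2.0, 3.0):.0f} m")  # ~1593 m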

"Where am I, and what's between me and my destination?" isn't the hard part, as it is with surface driving.


Propulsive landing has been achieved routinely since the '60s - Apollo's lunar modules, the Surveyor probes, the Luna landers that delivered the Lunokhod rovers.

The question has never been about the feasibility of landing the booster stages - what has been questioned is whether it's worth doing. The fuel used up during landing is fuel that cannot propel the payload. The landing might fail. The effects of thermal and material fatigue are not well understood. The transportation, refurbishing and QA are unlikely to be cheap anyway.
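
To put a rough number on that payload cost, here's a toy Tsiolkovsky rocket-equation calculation (Python; the Isp and the delta-v reserved for boostback, entry, and landing burns are illustrative made-up values, not SpaceX figures).

    import math

    G0 = 9.81    # m/s^2
    ISP = 300.0  # s, assumed engine specific impulse (illustrative)

    def propellant_fraction(delta_v):
        """Fraction of current stage mass burned to achieve delta_v.

        Tsiolkovsky: delta_v = Isp * g0 * ln(m0 / m1), so the propellant
        fraction is 1 - exp(-delta_v / (Isp * g0)).
        """
        return 1.0 - math.exp(-delta_v / (ISP * G0))

    # Hypothetical 2,000 m/s reserved for boostback, entry and landing:
    print(f"{propellant_fraction(2000.0):.0%}")  # ~49% of stage mass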


>Is flying a helicopter on Mars arrogance?

No, but I am not so sure it's anything more than a stunt with a high chance of failure.

>Launching reusable space launch vehicles that boost back to the landing site arrogance?

Maybe, but it's mostly a PR stunt from my point of view.

Just my 2 cents.


> Humans can write very good software. The Space Shuttle engineering group wrote some damned fine software. Their process looked literally nothing like the #YOLO coding that makes up most of Silicon Valley, and they dealt with a far, far more constrained environment than a typical public road.

Safety-critical software like the shuttle's is incredibly expensive for the level of complexity involved (which is not very high, compared to other projects). A self-driving car is probably one of the most complex software projects ever attempted. If you were to apply the shuttle's techniques to self-driving, you would literally never finish (not even the tech giants have enough money for that). So to achieve this you not only need to solve the very difficult initial problems, you also need to come up with a way of getting extreme reliability far more efficiently than anyone has managed before.


And yet such a self-driving car is going to kill way more people than the Space Shuttle ever did. Oh, the irony.

Ding ding ding.

Self-driving cars (Tesla, which is faaaar from it, among others) will kill people. But people are shitty drivers on their own; it has to start somewhere, and Tesla is the first to get anything close to this level into the hands of the general population (kind of - the beta program is still limited in release).


Sure, maybe, but why should my or my family's life be put at risk until they figure it out?

Honestly, I have a Model 3 with FSD. Despite charging issues, I'd rather road trip with it than my wife's (nicer) car, because Autopilot is better on the highway than anything else out there. An idiot crawling into the back seat with AP on is dangerous, but AP in general makes me feel much more comfortable on a highway trip, and comfortable == less chance of dozing off and not noticing the oncoming construction zone.

Love traffic-aware cruise control. I'm never getting a car that doesn't have it.

This runs into the same problem that led to the regulation of medicine: people can put all kinds of supposed remedies out there which may or may not do anything at all to solve the problem and may have worse consequences than the thing they're meant to cure.

Cars aren’t safe and robots don’t fix it.

> I believe the argument is that the software will improve until it is eventually as good as or better than humans, and I have a hard time not believing that, not because the software is good but because we are very bad in aggregate.

But logically this doesn't really follow, does it? That because humans are not capable of doing something without errors, a machine is necessarily capable of doing it better? Your argument would be more compelling if Tesla Autopilot logged anything like the number of miles, in the variety of conditions, that human drivers do. Since it doesn't, it's like saying the climate of Antarctica is more hospitable than that of British Columbia because fewer people died of weather-related causes this year in the former than in the latter.
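
To spell out the normalization the raw death counts skip (Python): the human denominator below is roughly right for the US, but the Autopilot mileage is a placeholder, since there is no audited public figure. A back-of-the-envelope sketch, not a real comparison.

    # Raw death counts mean nothing without exposure. Rates must be
    # normalized per mile driven (and, strictly, matched for road type,
    # weather, and driver population -- the Antarctica problem above).

    human_deaths_per_year = 35_000   # IIHS figure cited in [1] above
    human_miles_per_year = 3.2e12    # approx. annual US vehicle-miles

    autopilot_deaths = 7             # tesladeaths.com figure cited in [3]
    autopilot_miles = 3.0e9          # PLACEHOLDER: no audited figure exists

    def per_100m_miles(deaths, miles):
        return deaths / miles * 1e8

    print(f"Humans:    {per_100m_miles(human_deaths_per_year, human_miles_per_year):.2f} deaths / 100M miles")
    print(f"Autopilot: {per_100m_miles(autopilot_deaths, autopilot_miles):.2f} deaths / 100M miles")

Even with honest mileage numbers, Autopilot miles skew toward easy highway driving in good weather, so the comparison stays biased.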


Yeah - in the parent post, how many miles did that guy drive before flipping his truck? In all of these videos, we're seeing it disengage multiple times within a couple of miles. Nobody is so bad a driver that they crash every single time they go out and drive.

We accept shit drivers. We don't accept companies selling technology that calls itself "Full Self-Driving" (with a beta disclaimer or not) and hits concrete pillars. This isn't hard. It's not a mathematical tradeoff of "but what if it's shit, but on average it's better (causes fewer accidents) than humans?" I don't care. I accept the current level of human driving skill. People drive tired or poorly at their own risk, and that's what makes ME accept venturing into traffic with them. They have the same physical skin in the game as I have.
