Hacker News

Self-driving vehicles should be implemented with the same care Apple has given Touch ID and Face ID with regard to protecting sensitive data.

Having a central database capable of being scraped and processed to determine where any person is at any given time is a non-starter. Care needs to be taken to scrub all extraneous data from the fleet's network.




The problem is that total separation is difficult to achieve with the requirements being placed on these vehicles. Sure, whatever is playing your favourite music tracks over the speakers probably doesn't need to know anything about steering and acceleration. However, your self-driving software is going to have a tough time getting your car to a location it doesn't know exists because its onboard navigation maps predate the existence of that building, or planning a route that avoids an accident it doesn't know about because it has no real-time information about road closures.
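One common pattern for limiting that exposure, sketched here hypothetically (the message types and field names are invented, not from the thread), is a one-way ingest gate: map and traffic updates flow into the navigation stack only through a narrow allowlist, stripped down to the fields navigation actually needs, and nothing flows back out.

```python
# Hypothetical sketch: a one-way ingest gate between the car's
# connectivity module and its navigation stack. Only allowlisted
# message types get through, and only the fields navigation needs.
from typing import Optional

ALLOWED_TYPES = {"map_tile_update", "road_closure", "traffic_incident"}
KEEP_FIELDS = {"type", "lat", "lon", "payload"}

def ingest(message: dict) -> Optional[dict]:
    """Return a sanitized copy of an inbound message, or None to drop it."""
    if message.get("type") not in ALLOWED_TYPES:
        return None  # unknown message types never reach the nav stack
    # Strip identifiers, tracking metadata, anything extraneous.
    return {k: v for k, v in message.items() if k in KEEP_FIELDS}

closure = {"type": "road_closure", "lat": 51.5, "lon": -0.1,
           "payload": "A40 closed", "device_id": "abc123"}
print(ingest(closure))                    # device_id is stripped
print(ingest({"type": "remote_unlock"}))  # None: not allowlisted
```

The point of the sketch is that real-time data and total isolation aren't the only two options; the attack surface shrinks to whatever the validator lets through.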

Self-driving vehicle development has so many difficult problems to solve. The technology itself is extremely difficult to create and relies on a complex compromise where vehicles are allowed to be (and are) operated by code but are ultimately babysat by human drivers.

I think self-driving technology is worth the risk and worth the large amount of property damage and human injury that comes with it. However, this is a societal issue and there needs to be some sort of referendum on it. While many parallels can be drawn with space travel, those projects were tightly contained and included individuals who were specifically trained and informed of the potential dangers. There is no way to really do this at scale for self-driving tech, which necessarily demands an expansive ecosystem that can't be tightly controlled, and consent can't be inferred from the unknown number of people involved in this experiment.


It's not needed; it's a technology already eclipsed by what manufacturers and even Google are doing. Self-driving cars rely on various sensors (radar and lidar, to name two) to protect against collisions and the like. They do all this without communicating with the vehicles around them.

Simply put, smart cars don't need the input of other vehicles to know how to perform safely. In fact, not relying on other cars makes them safer, as they can't be spoofed or act on erroneous information.

Then of course there is that big privacy issue that this proposal tries to skip over.


The problem is then you are driving an "IoT car". One hack of a "central server", one lying or broken set of sensors, one crack or loss of encryption keys, and someone has control over a huge number of cars.

Plus you run into the questions of what happens if the central server goes down, connections drop, the scenery is new or changing, or any of a ton of other little failure modes.

IMO companies working on self driving are on the right track here. That car should be able to fully self drive alongside humans without a single byte of data going in or out of the car.


These vehicles will likely have plenty of surveillance surrounding them - not only for instances of heists, but for evidence to be used in insurance claims and police/fire investigations for collisions and other mishaps. Detracting from self-driving vehicles is puzzling to me, though. I would rather see the tech succeed and would like to see us get there as soon as possible. We should accept that we might have to adapt to them a little, and that our infrastructure may need to change in some ways to better accommodate AI. It's insane to me that we aren't already laying sensors, reflectors, conductors, or whatever else would make self-driving vehicles safer and more effective into our roadways.

I agree completely. It's a very difficult problem from a technical perspective, and from a systems perspective, we've got untrained operators who can't even stay off their phones in a non-self-driving car. (Not high-horsing it here; I'm as guilty of this as anyone.) Frankly I'll be amazed if anyone can get this to actually work without significant changes to the total system. Right now self-driving car folks are working in isolation - they're only working on the car - and I just don't think it's going to happen until everyone else in the system gets involved.

After listening to Avi Rubin's TED talk[0] and seeing articles like "Hackers Remotely Kill a Jeep on the Highway"[1], I am less bullish on self-driving cars. Mobile and desktop OSs are vulnerable to attack, but here your safety as well as your data is at risk. As well as being made orders of magnitude safer (in terms of navigation), these cars will need to be orders of magnitude harder to hack and take control of.

A few high profile hacks leading to crashes and fatalities will be enough to turn a large swathe of the population off autopilot technology, which they are already skeptical of.

[0]http://www.ted.com/talks/avi_rubin_all_your_devices_can_be_h... [1]http://www.wired.com/2015/07/hackers-remotely-kill-jeep-high...


The issue is that self-driving needs to be better than human drivers by a considerable margin. The public will not tolerate an AI driver that makes the same mistakes as humans; they expect better. To do this it will need access to sensors that humans don't have.

I'm saying any self-driving where the human has to monitor how the computer is doing and be ready to take over is probably a non-starter, as human nature likely dictates that they will do a worse job at that than at just driving.

Better technology can't patch human issues unless you're able to remove the human. Self driving cars are still not here.

Many people think that this is an issue that needs to be solved before self-driving vehicles are deployed at scale. We need to wake up and realize the degree to which the vehicles we drive TODAY are vulnerable to remote exploitation. If the vehicle has an internet connection, there's likely a vulnerability somewhere. An attacker may not be able to drive it to a specific location to crash, but they can disable critical safety systems and crash it all the same.

Well, as many futurologists say, a self-driving (or drive-assisting) car doesn't have to be perfect. It merely needs to be better than a human driver. Data seems to suggest that hurdle isn't that high.

Then again, I do share your concern about hackability. Just a single grand catastrophe would tip the scales for a long time. And nothing suggests that car software is of high quality currently.

>Do you drive while text messaging

Amazingly many people do, even after being told that it's about as dangerous as driving drunk.


The real problem with self-driving cars is the car-to-human handoff. Over the long term it's incredibly unlikely that humans will be any good at all at remaining aware and 'up to speed' on the current situation in the event that the car needs to give control back to the driver due to road conditions, hardware failure, or a sudden situation it cannot contend with.

I think if we want to solve large-scale intersection problems like traffic (where lots of individuals with their own goals and variables intersect) we will need to give up SOME of our privacy in terms of where we are headed (just the heading), speed, and car make/model or some kind of capabilities estimate... This could result in smarter traffic flow, better road design, or, in Google's case, power safer self-driving cars.

I'm up for any of those outcomes and am also concerned about giving up data unnecessarily. I think the crux of the article, beyond hyping up the simulated self-driving driver's better decision making, is that roads aren't designed as safely as they could be, and some minor inconvenience to the user (time, some data given up) could save lots of lives.
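The "minimal disclosure" idea above can be sketched concretely. In this hypothetical example (the record shape and field names are invented for illustration), a vehicle broadcasts only coarse, short-lived facts - a bucketed heading, a rounded speed, a rough capability class - and never an identity, a destination, or an exact make/model.

```python
# Hypothetical sketch of minimal-disclosure traffic telemetry:
# share just enough for traffic coordination, too coarse to track.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficBeacon:
    heading_deg: int    # heading bucketed to 45-degree increments
    speed_kmh: int      # rounded speed
    vehicle_class: str  # e.g. "compact", "truck" -- not make/model/VIN

def make_beacon(raw_heading: float, raw_speed: float, vclass: str) -> TrafficBeacon:
    # Bucketing the heading keeps the beacon too coarse to
    # fingerprint a specific route or vehicle.
    return TrafficBeacon(
        heading_deg=int(raw_heading // 45) * 45,
        speed_kmh=round(raw_speed),
        vehicle_class=vclass,
    )

print(make_beacon(97.3, 61.8, "compact"))
# TrafficBeacon(heading_deg=90, speed_kmh=62, vehicle_class='compact')
```

The trade-off is the one the comment names: each field added back (exact model, destination, a persistent ID) buys better coordination at the cost of re-identifiability.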


This is the obvious stopgap measure so you can get to market when your self-driving software is very very good but not quite perfect. You use the data from these situations to improve the software, requiring fewer and fewer manual interventions until you only need human assistance in truly exceptional situations like physical damage to the sensors.

I wouldn't be surprised to see an almost-entirely-self-driving car service in certain areas in the nearish future. As long as it's 100% safe and only requires assistance rarely, it makes sense to start building out the infrastructure sooner rather than later.


Infrastructure, machine learning, sensors, and the other components are there. We already have self-driving cars.

Now human adoption and acceptance is the problem.


Right now, per mile, self-driving cars are much less safe than humans. As has been noted before though, we don't have that many miles of data on self-driving cars, and the tech is still quite young and there's (probably) a lot of room for improvement.
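The "not many miles of data" point can be made quantitative with the standard statistical rule of three: if a fleet drives n miles with zero fatalities, the 95% upper confidence bound on its fatality rate is roughly 3/n. The human benchmark of ~1.3 fatalities per 100 million vehicle miles is an assumed figure for illustration, not from the thread.

```python
# Back-of-envelope: how many fatality-free miles a fleet needs before
# we can claim (at 95% confidence, via the rule of three) that its
# fatality rate is below the assumed human benchmark.

HUMAN_RATE = 1.3 / 100_000_000  # fatalities per mile (assumed benchmark)

# Rule of three: with zero events in n trials, the 95% upper bound
# on the event rate is about 3/n. Solve 3/n < HUMAN_RATE for n.
miles_needed = 3 / HUMAN_RATE

print(f"{miles_needed:,.0f} miles")  # roughly 230 million miles
```

That is why per-mile comparisons made this early cut both ways: current fleets have driven a small fraction of the mileage needed for a statistically confident safety claim in either direction.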

Reminds me of the same issues with self-driving. Seems like we need a completely different approach to solve this class of problems.

Talk about a pointless edge case. It's so incredibly rare, you can just have the car protect the passenger and that's fine. My biggest concern about self-driving cars is the ability to mess with the image recognition software. How hard would it be to flash a piece of material at a lidar/radar and confuse it so much that it either crashes, goes into a denial-of-service state, or thinks a minivan full of babies is cruising down the road towards it?
