States just need to start mandating LIDAR for any level of autonomous certification, in the same way we finally figured out seatbelts were the minimum level of safety for passenger retention.
Mandating how a certain outcome should be achieved is generally not the best (most efficient, quickest, cheapest, etc.) way of getting to that outcome. Autonomous certification should have safety requirements for each level and require that vehicles meet that standard.
Autonomous vehicles have the potential to reduce deaths and injuries dramatically, and increasing the time it takes to get there will cause thousands to millions of additional deaths and an order of magnitude more injuries, not to mention property damage.
This isn't necessarily that strict of a regulation. They could require the sensor be on board but not actually regulate any of its usage. It could be a scenario like the Model 3's rear view mirror camera, where it wasn't used at all for the first two or so years and then later got a software update that started interacting with it. The important part is that the LIDAR unit is physically wired in to the system so that they can start using it in software whenever they feel like it's necessary.
I don't know. In general I think you're going to need some balance between requiring certain outcomes and requiring certain specific implementation details. It might be possible to make an automobile out of sufficiently thick and squishy parts that it could pass reasonable crash safety tests without having a seat belt, but it's still not ridiculous for regulations to require that automobiles have seat belts.
(I have no idea where lidar for self-driving cars falls within that balance, although I can't help but notice that human drivers do not utilize lidar.)
Well, lidar is just a stand-in for the depth perception humans already do. It is absolutely valid for regulators to require that the systems on an autonomous car do depth perception, and provide a standard to prove that the system works well.
I suspect that right now, ML-based approaches to depth perception would not pass that test to the standards we would like, in practice requiring lidar. But this would also provide an option to switch to an ML-based approach once it gets good enough.
This is a common misperception. You cannot estimate the shape of simple things in low light. Even humans are susceptible to optical illusions and are effectively blind during certain times of day (try driving in the direction of the sun during sun rise/set). Lidar is not susceptible to any of these things.
Similarly, ML will never solve low-light or direct-light situations; that is a physical limitation of cameras.
Those cameras are fixed whereas eyeballs have multiple degrees of movement via the eye socket and skull mounting apparatus allowing very sensitive parallax measurements. It's really not the same at all.
Autonomous vehicles do of course have the potential to reduce deaths but there are lots of examples of this being the preferred method of regulation in the auto industry. For example if you want to sell a motorcycle in the US it has to have a kill switch in a specific place on the right handlebar. You don't (as a manufacturer) get to innovate about this because it's a safety feature and a first responder needs to be able to locate the kill switch so it always has to be in the same exact place so they can always find it. If you want to sell a car in Europe (I believe) it has to have seatbelts, ABS and a catalytic converter that meets a particular specification etc etc.
Not necessarily mandating LIDAR but at least mandating functional redundancy for self-driving / driving assistance. I'm guessing that won't happen until the deaths start piling up though.
Is LIDAR not susceptible to this? I guess it might be more robust against random hiccoughs (though I have no basis for this whatsoever), but is it also protected against malice?
LIDAR maps out the world around you in three dimensions, so with that data, there's no way it would mistake "a 'Stop' sign hidden within a fast food commercial [...] only flashing on-screen for a fraction of a second" (article) for an actual stop sign
As would a human, given a sufficiently official-looking stop sign. In fact I think it's impossible for an automated car or a human to tell whether the stop sign is fake or not without figuring out the "troublemaker"'s identity. And it's not always possible to pin down someone's identity (if the person is wearing a brightly colored construction vest, what do you do?).
There's a lot of construction in our neighborhood right now, which made me start thinking about that possibility. It kind of sounds like something out of a heist movie (or taken to the extreme, Die Hard With a Vengeance?), but really, who would know and care enough to ask the questions to notice something like that?
Well it probably would stop working if you confuse it with a whole bunch of confetti of reflective material or similar. Although I suppose that would work against humans as well.
Still I wonder if it has failure modes that humans lack, which could result in situations where it does inexplicable things.
LIDAR can't reliably see signs from far away and definitely can't read them. It's not clear that there would be any usable signal to act on, especially not one of sufficient confidence to ignore a giant red stop sign in the camera.
This isn't a perception issue. It's an insufficient model for the world, and the software is obviously biased towards obeying critical traffic controls at lower confidence levels. It would be interesting to see what would happen if a (higher) speed limit sign was flashed in the same way.
It's actually comical in a way that we possess the technology to build robots that can operate heavy machinery and practically interpret the world the same way we do, but we still have to worry about people holding up plastic signs.
It makes me think that the first real wide scale self driving vehicle deployment will have to come from a nation state rather than a corporation; the easiest way to solve all of these problems is by putting heavy investments into smart road infrastructure such that the car depends on sensors embedded in the signs or the asphalt rather than any camera feed. A school bus could just have a transmitter to indicate whether its lights are on or not; construction workers could be given road signs that transmit similar signals or better yet use a portal to construct a virtual geofence around wherever the construction is happening before it begins. These are all easily solvable problems in today's world; what is missing is the political capital to invest in these solutions despite the insane ROI.
Ideally yes, but the first time a kid gets killed because the school bus's transmitter fails there will be hell to pay. Will that be the fault of the passengers in the car? Or the car's manufacturer? The school bus's manufacturer?
This infrastructure will need distributed two-phase commit. For safety-critical issues like school buses, eventual consensus won't be good enough. And then you'll need an infrastructure that detects when an older or simply defective car is present but not participating in the protocol. These problems are all solvable, but they're by no means trivial.
Tesla cars are far from being autonomous. Autopilot (including the so called "full self driving") is level 2, meaning that the driver has to pay attention at all times.
From a driver responsibility point of view, it is a fancy cruise control, there is no question about autonomous certification because the car isn't autonomous.
If one day, Tesla thinks it is OK for drivers not to pay attention and wants to make it legal, that's when the question about what kind of tech is reasonable will be asked. But for now, it looks like we are far from it. There is a huge gap between "works well most of the time" and full autonomy.
I've said it for a long time but: Tesla implementing level 2 is dangerous and irresponsible. Calling it "autopilot" is even worse (and has been ruled as misleading in at least German courts).
If a human has to sit there and monitor the car without interacting with it, then we shouldn't allow it on the roads. That means skipping level 2 and probably level 3. Go the Waymo route and go straight to full self driving cars.
This day came and passed years ago. It's evident from their marketing that there is no attempt to discern fancy cruise control from "zomfg your car will drive you around in a year!!"
Whether this is false advertising or even just moral is up in the air, but Tesla is making no attempts at educating consumers about the limits of their autonomy other than the bare minimum (yeah you technically have to keep both hands on the wheel, but that mechanism is easily defeated and you can even buy clips on Amazon that do it for you). I'm pretty sure Elon even liked a tweet about a Pornhub video where someone banged a model on the highway with autopilot. Which is an excellent day for PR and a disastrous day for legal.
The point being, sure, academically they are a far way away, but they are trying to convince consumers Level 5 is just around the corner.
I'm coming around to agreeing with you about the deceptiveness of Tesla's marketing of Autopilot. I recently bought a new car from a non-luxury brand that's quite popular in the US, and a few days after buying it I was a bit surprised to realize that its "dynamic cruise control" and "lane keep assist" features amount to roughly the exact same thing as Tesla's current Autopilot offerings. I am still shocked at how undersold these features were, both by the dealers and the car's literature. I was driving for several days before I realized that, in addition to the dynamic cruise control I was familiar with, a simple button press could get the car to steer itself to stay in its lane.
Driving on the freeway is fairly easy right? Stay in your lane, keep distance. Of course there was that one guy who crashed into a divider on autopilot and died. But I could see it being not-that-unsafe to have your attention elsewhere.
Why? Cameras can work just as well as your vision. Humans manage just fine and don't have LIDAR. Just because it causes the cars to slow down now doesn't mean it will in the future.
It's a little strange to make a direct comparison between human cognition and computer vision given that the latter lacks the true ability to understand what it sees. You can train the computer for these edge cases, but humans largely do not have them in the first place because we're capable of formulating highly accurate understandings of what we see.
You can of course pick out discrete examples of humans failing tasks like these, but entire classes of computers routinely fail at things like this.
Your vision is backed by a neural network with around 80 billion neurons implementing a general learning and intelligence model we don't have the first clue how to replicate. We're a bit further away from "Cameras working just as well as your vision" than I think you're anticipating.
> We're a bit further away from "Cameras working just as well as your vision" than I think you're anticipating.
Well surely that's what we should be testing. Is a self-driving car system safer than a human, or not? You're correct that self-driving car systems lack a "general learning and intelligence model," but they also have several obvious advantages over humans, like potentially better-positioned cameras, a more reliable "attention" system, faster reaction times, potentially more "experience" with different road conditions and scenarios, etc.
We also have a pretty large part of our brain dedicated to image recognition, in particular we are excellent at reconstructing a 3D environment from 2D images.
Furthermore, everything about the road is designed to help us humans. The size, color, placement, etc... of road signs and markings is specially designed for human brains to process.
As for the hardware, our eyes have very good resolution, field of view and an unmatched dynamic range. Far better than the stuff in Teslas.
So I'm not saying that it is impossible for computer vision to get away with just using cameras, but it is competing with what our brain does best. So a little help in a form of a LIDAR isn't too much IMHO.
Cameras lack depth perception, which the brain very cunningly provides for humans at no additional cost.
Humans do get very clunky and slow when that depth perception is taken away. One example is night vision goggles which basically just project a flat screen onto your eyeballs. Many people report that they become susceptible to tripping, falling, hitting objects, or just straight up walking into trees with NVGs, in part because the loss of depth perception takes such a toll on their brain that they can't adapt in time. It takes hours if not days to fully adapt to NVGs, and even then you can't operate as quickly as you can in daylight unless you practice consistently (think SWAT or Navy Seals type training).
Cameras lack any perception; perception is a function of brains.
But a single camera with appropriate processing can provide a wide array of depth cues (stereoscopy adds more.)
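For a rough sense of what "appropriate processing" looks like in the stereoscopic case, here's a minimal sketch using OpenCV's block-matching stereo. The camera parameters are made-up placeholders, and a real system needs calibration and rectification first:

    import cv2
    import numpy as np

    # Rectified grayscale image pair from a calibrated stereo rig (assumed inputs)
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching disparity; StereoBM returns fixed-point values scaled by 16
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # depth [m] = focal_length [px] * baseline [m] / disparity [px]
    focal_px, baseline_m = 700.0, 0.12   # placeholder calibration values
    valid = disparity > 0
    depth = np.full_like(disparity, np.inf)
    depth[valid] = focal_px * baseline_m / disparity[valid]

Monocular cues (motion parallax across frames, known object sizes, perspective) take more machinery, but the idea is the same: turn image measurements into distances.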
> Many people report that they become susceptible to tripping, falling, hitting objects, or just straight up walking into trees with NVGs, in part because the loss of depth perception takes such a toll on their brain that they can't adapt in time.
While loss of depth cues plays some role here, a much bigger factor, AIUI, is the narrow field of view. “Panoramic” NVGs have around 100° horizontal and 40° vertical field of view (non-panoramic NVGs often have around a 40° horizontal FoV), people have around (with eye but not head movement) a 210° horizontal and 120° vertical field of view. Looking directly ahead with even panoramic NVGs as you normally would without them, you literally cannot see lots of things, especially tripping hazards, and objects to your sides that you might laterally move into to avoid the things you can see in front of you.
Human binocular depth perception only works at distances of less than 30 feet, which is not very important for most driving. Everything beyond that is from other cues such as motion parallax, relative size, lighting and shading, and perspective.
The reason people become clumsy when using night vision devices is because they have a tiny field of view that only moves when your head moves. You can close one eye and walk around just fine. But if you attach two toilet paper tubes to your face, you'll have trouble balancing and avoiding obstacles. The clumsiness is caused by the lack of peripheral vision and the inability to quickly glance (since the only way to change what you see is to turn your head).
While I agree with you on mandating it, a LIDAR "only" gives you a 3D point cloud.
Great for detecting if an object is in your way - but it cannot read a stop sign, you still need a camera system for that - so they still have to harden it against this hack (and who knows how many to come).
Wouldn't that "hack" just be as easy as sticking an actual stop sign somewhere it's not supposed to be, or removing a real stop sign? That would obviously fool a human driver just as much, and it has actually happened, and killed people.
> so they still have to harden it against this hack
True, but LIDAR should make that detection more robust.
For example, if a stop sign is on the same plane as the object that surrounds it, that's a strong signal that it is not a real stop sign but rather a projected or displayed one.
That's much harder to detect with computer vision alone, especially if they're dealing with hackers who can test it with a real car and continually iterate on it to find different ways to trick it.
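A minimal sketch of the kind of cross-check I mean, assuming you already have the lidar returns that fall inside the camera's sign detection plus the returns from the surface around it (the function and the tolerance are made up for illustration):

    import numpy as np

    def fit_plane(points):
        # Least-squares plane through 3D points: returns (unit normal, offset d)
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]                       # direction of least variance
        return normal, -normal.dot(centroid)

    def looks_like_projected_sign(sign_pts, surround_pts, tol_m=0.05):
        # If the "sign" returns lie on the same plane as the surface around
        # them (a billboard, a wall, the side of a truck), it is probably a
        # displayed or projected image rather than a free-standing plate.
        normal, d = fit_plane(surround_pts)
        dist = np.abs(sign_pts @ normal + d)  # point-to-plane distance in metres
        return np.median(dist) < tol_m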
> For example, if a stop sign is on the same plane as the object that surrounds it, that's a strong signal that it is not a real stop sign but rather a projected or displayed one.
It's conceivable that in denser or older areas, signs are sometimes placed on structures because there isn't room for a pole.
Not being sarcastic, but how would LIDAR help here?
Maybe .... before acting on a Stop sign sensed by the cameras, the control system would confirm, via LIDAR, that there is a post with a plate mounted on it in a position which corresponds to where the camera thought it saw a Stop sign?
Can LIDAR do that level of sensing, i.e. unambiguously find a metal post/plate out of all the other stuff which might be in front of the car?
And assuming my guess about how LIDAR would help is correct, what happens when an obstruction between the car and the Stop sign means that LIDAR can't 'see' the post but the cameras can see the sign?
Hacked billboards are hard to come by. We've already seen signs that autopilot can be tricked by malicious lines on a road.
Not that long 'til somebody points a projector at the ground in front of a Tesla and it crosses into oncoming traffic. You could even mount that projector on a panel van, and use image recognition to target Teslas and otherwise stay inactive
That's what the article says. Hacked billboards were one method, drones with a projector were another, and the image only had to be shown for less than a second to cause the car to react.
Interestingly, the article says it does not work on humans because the image is shown so briefly that people do not register it.
Seems simple to handle: when the car's computer sees an image, it checks whether the image shows up in a second scan. Since the computer's scans are very fast, this should not affect its processing of and reaction to driving the car.
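Roughly something like this; the frame counts and the idea of gating on consecutive detections are illustrative, not how any particular vendor does it:

    from collections import deque

    class PersistenceFilter:
        """Only pass a detection (e.g. "stop sign") through to planning if it
        has appeared in N consecutive frames, so a sub-second flash on a
        billboard gets ignored."""
        def __init__(self, required_frames=15):   # ~0.5 s at 30 fps
            self.required = required_frames
            self.history = deque(maxlen=required_frames)

        def update(self, labels_in_frame):
            self.history.append(set(labels_in_frame))
            if len(self.history) < self.required:
                return set()
            # keep only labels present in every one of the last N frames
            return set.intersection(*self.history)

The obvious cost is added latency on real signs, so the window has to stay short relative to reaction-time requirements.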
> projector at the ground in front of a Tesla and it crosses into oncoming traffic
I don't understand why people come up with all these ridiculous "what if" ideas against autonomous cars as if it's anything new.
It's already exceedingly easy to foul up a human driver. Stand next to a freeway and hold up a stop sign. Drop a foam mannequin from a bridge over the road. Paint some bogus lane lines. Shine a laser at their face.
Any of the above will cause accidents with humans, and you'll be arrested and charged with a felony. Deliberate attempts to compromise autonomous car systems carrying humans inside should be treated no differently.
I'd stop in most of those situations. Except for the bogus painted lines -- I'd see that they were trying to direct me into oncoming traffic and respond accordingly.
The panel van (or drone) attack is highly targeted, and doesn't stick around like paint does.
These aren't just "ridiculous what ifs," they're a rather plausible method for freshly unemployed Luddites.
Do you see anyone going around with a massive LED array on the back of their trucks to blind (targeted) human drivers? Because that's even easier than floating around a panel with a specially crafted video feed. Yet I've never heard someone do it.
Sure, leading a self-driving car astray is more of an accomplishment than blinding a human, but comparing crimes by how entertaining they are to a bored unemployed DIY sociopath does not seem very reasonable to me.
Perhaps because the promise of autonomy is that you will be able to let go, close your eyes, take a nap, or read a book. That is really hard to do if you have even a 0.0001% chance of hitting something and dying because your car didn't do something right.
Sure, your chances of getting into an accident while driving your own car are higher, and some dummy could throw a turkey onto your windshield and kill you, etc., but humans don't think in probabilities (well, outside of Hacker News humans), and while driving their own car humans have an illusion of control; when the car is driving itself and you are sleeping, that control is gone.
> That is really hard to do if you have even a 0.0001% chance of hitting something and dying because your car didn't do something right.
Have you ever flown in a plane before? Some people are more nervous than others about giving up control but billions of people still do it because the benefits vastly outweigh slight anxiousness about that billion to one chance. People will not want to drive in an autonomous vehicle right up until they see they can give up their $40k car and hail a driverless taxi to go anywhere for pennies on the dollar.
I think people who are born into it will be fine with it. You probably get inside elevators without second thoughts. If you are of the age of the typical HN reader, they've been L4 autonomous since you were born, so you don't think about it.
With planes I think most people realize that the trained pilot is a better pilot than they are, so they feel more at ease giving up control to someone that is better than them. People just need to see that similarly, software drives cars better than humans.
This is clever, and Tesla's software is awful, but anyone doing this for real, with intent to divert cars from the road etc. would be committing a pretty serious crime.
Both good points - so this kind of crime would likely be indistinguishable from a random, terrifying software failure, with the blame falling on the car maker.
I mean it is a failing of the manufacturer on some level. Not saying someone intentionally exploiting the car isn't also at fault, but I also don't think it's unrealistic to hold the manufacturer accountable as well.
Tesla logs so much data and stores it and has used it to defend itself in various legal / PR matters. I wonder if they would catch it if someone did this. Or maybe suddenly they wouldn't have that data lol
Very often I see these tiny articles in local newspapers, 15 sentences, that go something like 'The driver lost control of their vehicle and died in the crash'.
I've had some terrifying software problems in Audis, including random shutdowns while driving due to faulty sensors, how often do you think someone who 'lost control of their car', which implies that there was human error, died because their car's electronics or software were faulty?
Yeah! I'm sure that's happened in software in less sophisticated, but maybe more understandable ways. Lane-keeping & auto-braking software - which loads of cars have - is absolutely terrifying to think about, and I'd be fascinated to see these attacked from a research POV.
My old Celica tried to murder me on the M62 and Toyota denied the problem for years (the gas pedals got stuck under the floor mats). And that was just a really obvious (in retrospect) design flaw - for how long could a car maker viably deny safety problems in their software?
Shouting fire in a movie theater is not a good example of unprotected speech, and as such it's not a good place to start reasoning from in deciding whether other speech is unprotected. Otherwise, you start banning core political speech like advocacy against the draft, just like the since-overturned case that bad example comes from.
The fact that that example became well-known due to the awful decision in Schenck v. United States does not mean that it's not a good example of speech that ought not be protected. Speech that is false and intentionally causes a panic should not be protected.
> Speech that is false and intentionally causes a panic should not be protected.
What is true or false isn't always clear though, even if there's a popular or official consensus.
The point of having a public forum is to give new ideas a chance, as unheard of or unpopular as they may be. Of course, with that you get the risk of misinformation as well.
As a society we must choose how much risk we're willing to tolerate, and really consider whether the downsides of the level of free speech we have today are so much different from what we've historically dealt with.
>The point of having a public forum is to give new ideas a chance, as unheard of or unpopular as they may be. Of course, with that you get the risk of misinformation as well.
So since the ideas behind scientific racism, the dark enlightenment, white supremacy, Nazism and anti-semitism, and any prejudice based on conservative or religious ideals (such as sexual or gender discrimination or anti LGBT prejudice) are neither new nor unheard of, it's OK to stop giving them a chance?
I mean... flat earth? Do we really need to make sure the flat earthers have as wide a platform as they want just in case they turn out to have been right all along? Do we really need to vigorously debate every new iteration of "maybe the Jews do deserve to be driven into the sea" or "maybe eugenics is a good idea" that crawls out from under the rocks of society every few years? Do we need to keep the flame of "Bill Gates created COVID as a pretext to tag everyone with mind-control chips to make it easier to harvest adrenochrome for satanic blood magick" alive for future generations to ponder the possible wisdom of?
Maybe we can agree that, while there may be subtle cases where legitimate scientific or political ideas which run afoul of the status quo and public acceptance need protection in a free market of ideas, most of what's actually being defended in these discussions is regressive garbage that can be tossed back into the wastebin of history from whence it came, with nothing of value lost?
At what time do we decide we have reached the "TRUTH" and decide to stop giving ideas a chance?
If we were to take your argument back in time and apply it, what would we see?
"I mean...heliocentricism, really? Do we have to give Copernicus slosh any thought, we all know that God created the earth and then the heavens so it is only natural that they orbit the earth."
On the front page at the same time is a story about psilocybin and "What if a Pill Can Change Your Politics or Religious Beliefs?".
Right now we accept that sexual orientation is not a choice. What if in the future we find that exposure to chemical/compounds can cause people to become more gender norm conforming? What about less conforming to historical norms?
How would you propose we handle those possible futures? A drug that turns you gay is straight out of Alex Jones. A drug that turns men straight is straight out of conversion therapy.
> At what time do we decide we have reached the "TRUTH" and decide to stop giving ideas a chance?
OK, so I see you've decided to ignore the context of my comment, pretend I was referring to some absolute "TRUTH" then strawman it.
Yet everything you've listed, being scientific and therefore part of a framework in which those ideas could be proven or disproven, falls under what I mentioned at the end of my comment, which I will quote verbatim for you: "cases where legitimate scientific or political ideas which run afoul of the status quo and public acceptance need protection in a free market of ideas", and is thus explicitly not germane to my point.
How do I suppose we handle science? Science. But then nothing I listed was actually science, nor is there any valid controversy around any of them, as science has invalidated practically all of it already. There is no universe in which it turns out Hitler was right all along if only we'd listened.
So let me flip the script on you - how long do you believe we should let ideas like white supremacy, anti-semitism, lunatic conspiracy theories like QAnon, anti-vaxx etc remain in the marketplace?
Or even keeping it within the bounds of science - do you believe we should still be arguing the merits of the luminiferous aether and miasma theory, or "teaching the controversy" around creationism?
While the veracity of a statement is important, the problem is not really about "misinformation". Technologists and intellectuals are used to viewing speech as an form of information transfer. However, speech can also be performative[1]; saying "not-guilty" to a judge or "I do" during your wedding are not really about conveying information - by saying those statements you are performing an action.
Falsely shouting "fire" is a false statement, but the problem happens when that statement is also functioning as the act of reckless endangerment (or incitement to riot, or whatever). Nobody cares if someone e.g. writes an op-ed that falsely claims the theater is on fire. It's the same misinformation, but the op-ed isn't also functioning as a performative act.
Far too often in these discussions, the different forms of speech are conflated. A lot of the "balancing freedom vs safety" problem is a false dilemma that becomes a lot easier to reason about when speech-as-information is separated from speech-as-action[2].
> The fact that that example became well-known due to the awful decision in Schenck v. United States does not mean that it's not a good example of speech that ought not be protected.
It might be an example of speech that ought not be protected, but it's not an example of unprotected speech grounded in either the preexisting case law or any precedential rule in Schenck that is still operative, so it's not a useful touchpoint for analysis. It's pure unsupported dicta sandwiched between two well-referenced points of law in the decision.
> Speech that is false and intentionally causes a panic should not be protected.
Intent is not an element of the well-known hypothetical described in Schenck, nor of the even more abbreviated form derived from it raised in this thread.
And yet you don't have to look far from here to see lots of people claiming that a social network is violating freedom of speech if it censors or in any way limits the distribution of false claims that are intended to and likely to cause people to take dangerous actions.
I think the legal protection that keeps them from being sued over user-provided content should be changed to remove such protections if they are using it for political censorship of any kind. It's insane that we even need to consider a law for this. Companies come and go; it would equally be nice if we had no more bailouts, and let them fail when they have to fail. On the other hand, I can see the value in large companies maintaining large bank accounts: in many cases it allows them to sustain their employee payroll in cases like COVID. I can't imagine what the unemployment rate would have looked like otherwise.
> How long until someone claims one of these images designed to cause trouble for AIs is free speech? And are they right?
They are not, any more than running someone through a printing press to kill them is "free press". Regulating intentional harm without reference to the particular means is not a violation of free speech or other expressive rights just because someone uses a tool that could instead be used as an expressive medium as a mechanism of inflicting the harm.
That would hold up as well as a claim that hurling rocks painted with political slogans through the windshields of passing motorists counts as free speech
This seems like hanging a rock painted with political slogans off the side of a bridge (presumably by some sort of rope) and letting people drive into it.
The trick is to construct a real looking advertisement that “accidentally” triggers this effect, and order the ad placed on digital billboards in Silicon Valley through a fake company located somewhere far far away.
Like the famous 2017 Burger King ad that deliberately hijacked nearby Google Home smart speakers. [0]
There's an argument to be made that it should be covered by cyber-attack laws. You aren't allowed to deliberately take control of someone else's computer without their permission.
Consider a hypothetical example: an author without use of their hands, who uses text-to-speech to write books and to control their computer. Imagine if they put you on speakerphone and you read out a sequence of commands to delete their work. That certainly wouldn't be covered under freedom of expression.
There's an obvious difference in degree, as spamming someone by hijacking their smart speaker isn't all that costly, but it's similar in category.
Your run-of-the-mill criminal has some sort of profit motive and this seems like a very roundabout way to attempt extortion.
A terrorist on the other hand would probably be dissatisfied with the not-quite-catastrophic outcome. If you want to cause real havoc, there are far more valuable targets to hack into.
To be fair, I think autonomous vehicles should have a requirement that they come with a RADAR or LIDAR as a backup sensor (even if it's a low-resolution one). Using ML on images for everything is not going to solve your safety problems because there's no way to guarantee the system will not crash when presented with a fake input. It's a darn sight harder to pretend to be farther away on a radar or lidar, though. At the very least the autonomous vehicle should be able to guarantee you don't crash into static obstacles.
If you know that displaying unofficial highway signs will cause "self-driving" cars to swerve off the road, it's not much different an act from dropping breeze blocks off a motorway bridge.
I've no idea whether that'd be a highway crime, a computer misuse crime, or just attempted murder - that'd be a new one for lawyers in every country to argue.
There are already billboards with pictures of stop signs, stop lights, etc on them. It doesn't take any criminal intent. Things that were once just billboards are suddenly hazards.
I think banning Teslas as unsafe is even more reasonable conclusion. Those cars can't be trusted so they should not be allowed on public roads. Problem solved. Same goes for any other car with this issue.
Or human drivers for that matter. I heard that they have a very high failure rate and cause over 6,500,000 collisions a year in the US alone. Take them back to the drawing board and work out all the bugs until they can put a safe species of driver on the road.
This is like saying that investing in asphalt is pointless when horses can walk on dirt or cobble. The fact of the matter is that building new public transport is monumentally expensive, but roads themselves already exist. Installing new sensors and beacons would certainly not be a cheap task, but the bulk of the work is already done and there's no reason why these new systems couldn't integrate with existing roads that humans drive on. You can imagine for example a speed limit sign with a QR code containing information for the car to read, or a stop light using a radio transmitter to indicate its state. This infrastructure has to be upgraded anyways; why not make it friendlier for computers to use it? The costs pay themselves when crashes and traffic jams go away.
Is public transportation really monumentally expensive when compared to almost every adult spending somewhere between $5,000 and $150,000 every 5-10 years on a car? AAA says the average American spends nearly $9k/year owning/operating a vehicle. That'd buy a lot of public transportation infrastructure.
Public transportation is often less convenient. I've lived places where the hub-and-spoke bus model wound up taking longer than just walking from one end of the city to the other. How much is an extra half hour twice a day from a shorter commute worth to you? If nothing changes other than the public transportation infrastructure then I'm not sure that gap is easily bridged.
Right, but in return those Americans are getting transportation that actually takes them from point A to point B directly.
It's very hard to really gauge the impact this has on human productivity, especially given that in public transit you don't need to control your vehicle and so can be doing something else (although from my anecdotal experience, the vast majority of public transiters are doing nothing productive or interesting on their commutes).
But then there is also that, when a global pandemic happens and localities lockdown public transit, people with cars can still go places (like a National Park).
Point being, the problem is absolutely not one-dimensional and so just comparing aggregate expense is not very useful at all.
I think the goal for transport should be to maximize human productivity and mobility, and given the above and other realities, that almost always means a strong hybrid approach, not one where you relegate cars to a lesser status.
Indeed. Transportation is a hugely complicated issue. I'm not sure why people are downvoting the parent comment for pointing this out.
I suspect that to make real progress we will need to move away from our current assumptions. I'm not from the US, where perhaps the expectations and practicalities of transportation can be quite different to somewhere like Europe. But even here, there is a tendency to conflate mass transportation (buses, trains, etc.) with public transportation (vehicles run as a service by someone else, rather than owned personally and reserved exclusively for the use of their owner). As the existence of alternatives like taxis and school minibuses demonstrates, these are really two separate axes. And that's before we get to all the complexities of how we pay for transportation, both individually and as a society collectively.
There are some big advantages to providing individual transportation tailored to each specific journey. Going door to door at an exact time of their choosing can obviously be more efficient, sometimes several times more efficient, for the individual traveller than having to connect to and travel via a network of predefined locations on a predetermined timetable. There are also some health and security implications for travelling individually, and of course it can be much more comfortable because you have plenty of space, a guaranteed seat, your preferred temperature and ventilation levels, etc.
There are also some big downsides to individual transportation using the options we typically have available today. In particular, the individual vehicles we use right now are often not well suited to any particular journey. People are more likely to own a single vehicle, which they use for anything from taking their whole family on a long distance journey to a one-person drive to work or for shopping. That can be very inefficient in terms of space usage, environmental impact and operating costs. But then owning multiple vehicles, suitably sized and featured for different types of journey, is also very inefficient if each of them then spends most of the time sitting idle in a garage somewhere. And there are safety implications for using a smaller personal vehicle on the same roads as bigger, more dangerous vehicles operated by fallible drivers, as anyone who cycles around a big city can testify.
On top of that, you have an emergent system where there are millions of individual journeys happening on the same infrastructure with at best some very local co-ordination via things like lights at junctions. If you look at the mathematics underlying transport networks, you can see how this can lead to all kinds of surprising and sometimes very unhelpful overall outcomes, even if everyone is acting logically as an individual part of the system.
But there is no rule that says things always have to be this way. Sometimes mass transit uses dedicated infrastructure, as with trains and planes, and sometimes it's shared with individual transport, as with most buses and trams. Future infrastructure could be designed to support, as one possible example, both small and efficient personal vehicles and larger vehicles with space for more passengers and/or cargo that could be summoned on request, with preferred routes and timing specified by each traveller, but with the actual co-ordination of the vehicles administered centrally and the vehicles themselves operating autonomously.
It's obviously a huge jump from where we are today to such a system, and there would be all kinds of difficult questions about how we could migrate from one to the other even if we somehow all agreed on where we were ultimately trying to reach. But one thing is for sure: we can't even start working towards a system like that as long as we stick to preconceived notions of public transportation as mass transit with fixed timetables on large vehicles like buses or trains, and private transport as the family car as we know it today.
> The costs pay themselves when crashes and traffic jams go away.
Isn't there a bureaucratic problem hiding in that observation? Some money is saved by police and by individuals who would have crashed, but the departments responsible for upgrading the infrastructure will never get a return on their investment. In practice that'll mean higher taxes to support some new-fangled tech that doesn't immediately help the majority of people, and I'd be surprised if that gets support.
Any money spent on public transportation would be better spent upgrading infrastructure to serve autonomous vehicles.
Our bus system where I live travels with many busses nearly empty. This is what happens when they are funded by a percentage of property taxes rather than individuals paying per ride. A perfect example of government waste.
If nobody is using your public transport, that usually means that it's crap, not that it's overfunded. The vast majority of functioning public transport systems (that people actually use!) are not profitable. That is perfectly fine, because a public transport system, like many other types of infrastructure, has many positive externalities.
It’s actually a very modern and well equipped system. I think its fault is that it is sized for what they wish the ridership numbers were rather than what they are in reality.
There are probably ways to improve existing roads for semi-automated vehicles without creating dedicated infrastructure (which isn't really possible anyway; it's kind of hard to carve out dedicated streets in a dense city, for example).
The roads could have sensors/tags added to them gradually. For example, near-field devices planted in the road broadcasting information such as the name of the road, mile mark, and lane number/side, or traffic lights broadcasting whether they are red, yellow or green, or signs such as Stops broadcasting their presence.
This would help autonomous vehicles a lot (instead of having to scan the infrastructure and deduce the environment, the infrastructure will broadcast the environment, at least the static parts, directly), while at the same time not be that costly (compared to the initial cost of the road or even its maintenance costs).
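As a sketch of what such a broadcast could contain (the field names here are invented for illustration and not modeled on any real V2X message standard):

    import json

    # Hypothetical payload a roadside beacon might broadcast periodically.
    beacon = {
        "type": "traffic_light",
        "road": "Main St",
        "mile_mark": 12.4,
        "lane": 2,
        "state": "red",
        "ms_until_change": 4200,
        "signature": "...",   # broadcasts would need to be authenticated,
                              # or you've just built a new spoofing target
    }
    payload = json.dumps(beacon).encode("utf-8")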
A number of dedicated test facilities for autonomous vehicles exist, some operated by universities, some by government agencies, and some of the autonomous vehicle vendors own their own (more common for those aligned with major automakers which generally already have large proving grounds, but Uber apparently has a small one).
I think the issue is that a dedicated test area will just never reflect the full set of problems you encounter in the real world, and for things like LED billboards it's going to be a lot cheaper to test in the real world than to buy and install your own. On the flip side, there's next to no money in developing "vehicle-dedicated infrastructure," so no one's that interested in testing on it exclusively.
An example from Hamburg: TAVF - Test Track for Automated and Connected Driving
~37-55 traffic lights, ~9km, 802.11p, city driving on 1-7 lanes (mostly 2-4) with on- and off-street bicycle lanes (Please don't hit me, test cars!).
As a resident on the test track I have only once seen it in use. It would be fun to acquire a suitable radio and try listening in to what's being broadcast but that's a project for another day.
There are many reasons why I don't believe in self-driving cars; the first (and strongest) is to imagine that the software was written by my colleagues (and myself).
I mean in that case imagine these people being allowed to drive. There's no reason computers can't be better than people, and many reasons they could be better.
There are advantages and disadvantages to each. The computer failure methods are new to us, but we're all too familiar with human failings. Humans have much slower reaction times. Humans get sleepy, drunk, or distracted. Humans sometimes get angry or impatient and do dangerous things with their vehicles.
The important metric is accidents per mile driven. Right now, Teslas on autopilot seem to be significantly safer than human drivers. From Tesla's vehicle safety report[1]:
> ...we registered one accident for every 4.53 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.27 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.56 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles.
Yes autopilot has issues and limitations, but it's already 3x safer than unaided drivers. Also, it seems to be improving in safety. Two years ago, autopilot's crash rate was only 1.5x lower than unaided Tesla drivers.
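Just to make the arithmetic behind those numbers explicit, using only the figures quoted above:

    miles_per_accident = {
        "autopilot_engaged": 4.53e6,
        "active_safety_only": 2.27e6,
        "no_aids": 1.56e6,
        "us_average_nhtsa": 4.79e5,
    }

    baseline = miles_per_accident["no_aids"]
    for mode, miles in miles_per_accident.items():
        print(f"{mode}: {miles / baseline:.1f}x the miles per accident of an unaided Tesla driver")
    # autopilot_engaged: ~2.9x, hence "roughly 3x safer". Note the unaided Tesla
    # figure is itself ~3x the overall US average, which is where the
    # normalization questions below come in.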
Now, are those metrics comparable? Are they normalized for conditions and location? That is, are situations where autopilot disengages and the driver does not take action counted?
Weather, type of road and so on has huge effect on number of accidents. Unless those are taken in consideration it's only lies...
I'm going to say that these are not normalized. It's fairly well known (and NHTSA data backs it up) that the accident rate for the types of vehicle Tesla builds (luxury 4/5 door sedans) accounts for a fraction of total vehicle accidents.
A portion of this is a result of young adults not being able to purchase $30-40,000 cars: 16-19 year old drivers make up around 2/5 of the total accidents in the US.
Additionally, 4 door sedans in general (which describes most Tesla models) are only involved in approximately 27% of all accidents. The Model X, a compact SUV, only accounts for about 13% of all accidents.
I sort of find the idea of self-driving cars as ... selfish?
Don't want to drive, take the bus.
The idea that we all need our personal chariots (and now robotic chauffeurs) seems to me to be the height of irresponsibility.
And, yes, I am guilty of owning a car myself and feel guilt about it. But for some reason the self-driving bit just seems to take it to another level for me. Maybe too there's an air of elitism about it since we are in fact talking about cars that cost what most people make in an entire year.
Moving has a lot of inertia, and might not be possible if nobody is interested in buying your house, or another area's rent is too high. Your area might have _had_ great service but has rapidly degraded. There are tons of things outside your control. Having a car is the common answer to gain back some of that.
> But what if self-driving cars were 10x safer than human-driven cars?
But there is no evidence of this, or that they're even as safe as human drivers. In fact, Teslas are involved in more accidents than other luxury cars.
Self-driving cars is not just about not wanting to drive.
It's also about enabling Uber-style intelligent carpooling so that you can save energy while still maintaining similar times to driving, especially in places with poor or non-existent public transportation.
It's also about reducing the number of cars in the world. You probably use your car for less than 5% of your life. Most of the time it just sits there taking up space. Why do you need to own it? Imagine a world where you didn't need to own, maintain, or park. We can reclaim parking lots and plant trees, build housing, or other things. We can also reclaim a lot of our time spent in traffic jams or looking for parking and do something more useful with it.
It's also about enabling people who are unfit to drive to have a better quality of life. What if you've had a little to drink? What if you're down with a fever and a headache and can't concentrate on the road? What if you're sleepy? What if you are prone to seizures? What if you're blind, deaf, have poor motor skills? What if you're a visitor to an area without a public transportation system and not confident renting a car and driving?
Nothing I've seen of Elon Musk's behavior (boring tunnels meant primarily for cars in pods, anyone?) suggests that he wants fewer cars on earth. In fact, it would be a pretty bonkers philosophy with which to run the most valuable car company on earth.
To add, back in the old days when I still went into the heart of my local metropolis, Uber and Lyft style carpooling had the opposite effect of "reducing the number of cars". The streets are littered with pee bottles and packed with desperate drivers looking to score a quick buck, making u-turns across traffic while looking at their phones.
Because our primitive brains instinctively want to own things. We can share with family, and with good friends, but not with strangers.
Another thing: a car is used not only to drive, but also as storage space for things that you want to always have with you: a bottle of water, a duffel bag with gym clothes, an umbrella, etc. With rented car pools you'd have to carry all that with you.
Also, society where no one owns a car or an apartment, where everything is rented, "pooled", would be a totalitarian dystopia - try to misbehave, post a negative comment about government on Facebook and you're banned from a car sharing service as a punishment. Seizing your property is of course also possible, but much harder.
But what if not owning a car meant you saved an additional $400 per month, and reclaimed 1-1.5 more hours per day?
Personally, I think a lot of people would take that trade. I would.
Yes, there are people who like lugging everything but the kitchen sink with them around. For a lot of people in the world though "things you always want to have with you" fits in a handbag or backpack.
Self-driving “cars” include taxis (that's one of the prime motivating use cases), trucks (another big one), and busses (not as frequently talked about, but once the technology is there, it's going to get applied to them, as well.)
It's not just about individually-owned vehicles for people who want to own, but not drive, a personal vehicle.
Speaking in terms of pure economics, improvements to self-driving cars today will eventually lead to mass-market adoption and substantially lower costs for everybody.
Take cell phones, for example: they were big, expensive, had only one function, and were only available to the rich just about 30 years ago. Now they are cheap, powerful, pocket supercomputers available to nearly everyone.
Another example is farming and supply chain. 100 years ago you were pretty limited to the foods available in the local store, but now you can get anything your heart might desire, be that bananas from Mexico for 69 cents, or Sake from Japan for $20. The world has greatly benefitted from advancements in technology that was once only available to the richest.
Most of the mid-class luxuries we enjoy today, from airlines, to hot tubs, to game consoles and books on Kindle, all of that had to be developed and brought to light, first expensive, and then gradually cheaper and cheaper.
Eventually even the buses will drive themselves, and that will only happen if we invest sufficient energy and resources into making the tech available and improving today.
> I sort of find the idea of self-driving cars as ... selfish?
>Don't want to drive, take the bus.
A big issue with public transport is the “first/last mile” problem. You want to take the light rail to commute 25 miles to work but the station is two miles away from your house and the closest station to your work is over a mile. “Just walk” does not work for everyone, either because of weather (getting soaked by rain or sweating too much in your work clothes), or physical abilities, or time. So are you looking to take the bus to the station and from the station to work? How much time does that add? Do the schedules even line up well? If you buy a bike, is it allowed on the light rail, is there a safe place at your work for it? Do you drive a car and leave it at the station, and what do you do on the other side to get to the office?
Autonomous vehicles can/should be used in conjunction with public transport to solve the first/last mile problem. Hail a small electric autonomous car to take you to the light rail station, and hop into another one at the other end that takes you directly to work. It is still convenient for the individual and it still eliminates 23 miles of vehicle driving in a dense urban environment and removes the need for a parking space in a location that space is likely a premium.
Same with busses. They take forever because they have to stop so often. They have to stop so often because people can’t walk long distances to a more central location for pickup. People in the lowest incomes that, out of necessity, have to use the bus system may commute for multiple hours simply because of these inefficiencies. But if you had small 10 person shuttles servicing a limited area to bring local people to and from their front door to a more central bus location, you could reduce the number of stops a bus makes over a commuter distance by an order of magnitude and turn a 2 hour commute into a 40 minute commute.
There are lots of systems which support life and run software which, if it fails, will cause death. Literal life-support systems are one obvious example, but there are many many more. All of that software is written by your colleagues.
This is a special case of a general class of vulnerabilities in AI models, where an adversary can cause undesired output from the model by constructing input data not represented in the training set. However, it is legit much more concerning than, e.g. the issue of image classification models mis-identifying well-constructed noise as "panda".
This is currently a research frontier for AI so us non-experts likely won't be able to say a ton about it.
The thing humans have, that so far trained machine-learning models seem to lack, is a fairly reliable BS detector. I think the best example of this is an inkblot test. Sure I can look at an inkblot and see fanciful shapes, maybe a chair, maybe two rabbits kissing, whatever. But in the back of my mind, I also know perfectly well that I'm looking at an inkblot; I can entertain all the other classifications, but would never accept them as fact.
The question becomes, how do we give an AI the concept of a BS detector? How can we train it to know, somehow, that the classification it has just made with a high degree of confidence makes no sense? This seems like a very difficult problem to solve.
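One crude version of a "BS detector" is to refuse to act when the model's own output distribution is too uncertain, for example by thresholding predictive entropy. This is only a sketch: the threshold is arbitrary, and adversarial inputs are specifically crafted to look confident, so a baseline like this is known to be insufficient on its own:

    import numpy as np

    def softmax(logits):
        z = logits - np.max(logits)
        e = np.exp(z)
        return e / e.sum()

    def too_uncertain_to_act(logits, entropy_threshold=1.0):
        # Flag inputs where the classifier spreads probability widely across
        # classes; planning could then fall back to conservative behavior.
        p = softmax(np.asarray(logits, dtype=float))
        entropy = -np.sum(p * np.log(p + 1e-12))
        return entropy > entropy_threshold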
Yes. Flashing lights at the correct frequency and intensity can cause a noticeable fraction of the population to immediately seize. Emergency vehicles are a known trigger.
Less glibly, yes. There are particular arrangements of roads which are known to get people killed by cars. For example, an intersection at a particular angle with particular speed limits cause cars to stay hidden in each others' a-pillars as they approach the intersection. There are bits of highway that are known to cause human drivers to drift; this is why many stretches of highway now have textured strips down the side to notify drivers that they've moved out of their lane.
It varies from state to state, but in general, people with uncontrolled seizure disorders are barred from driving. They're able to drive if their seizures can be controlled with medication and if they go a certain time period without having a seizure. They're often required to submit medical reports to the state to prove that they can drive safely.
>There are a lot of similar failure modes in humans.
there are a lot of similar failure modes in _individual_ humans. The difference is that individual driver errors are not correlated: a bad reflection tricks a few humans at once instead of tens of thousands of cars running the same model. There is significantly more brittleness in a fleet of cars than in a population of humans, because the human population has diversity in judgement and experience.
This is not simple to fix, because this uniformity is actually a feature of automated systems: it makes them explainable, predictable, uniform and cheap; training a human for 20 years is more expensive than training all the cars once. However, it also makes them collectively vulnerable, which is why heavy machines tend to be locked away on factory floors.
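A back-of-the-envelope way to see the correlated-failure point (the numbers are made up):

    # If a weird scene fools any one independent driver with probability p,
    # a population of humans produces a handful of scattered incidents.
    # A fleet running one shared model either all passes or all fails.
    p = 1e-4        # chance the scene fools a given independent driver
    n = 10_000      # vehicles that encounter the scene

    expected_independent_failures = n * p   # ~1 incident
    worst_case_correlated_failures = n      # every car makes the same mistake
    print(expected_independent_failures, worst_case_correlated_failures)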
True. People need to stop comparing self driving cars favorably to the worst performing humans. Gives companies license to release these badly performing cars into the streets.
hard braking or swerving for clear false positives is not very assistive...
it seems to be the opposite of assistive. How can a person know all the weird things that will make a computer vision system badly fail. It shouldn't be used at all if that is the case
Some places in the Netherlands tested those. Imho they don't work as advertised. In a picture they might look 3D, but when you're driving towards them your brain notices the perspective doesn't change and isn't tricked into thinking it is 3D.
The effect is more like: what kind of strange art is this!?
I wonder if these self driving cars would recognise those 3D crossings as the same as the normal thing. I could see it looking quite different to a neural network or something looking for features and a bit overfitted.
Yes, human reasoning is not just pattern matching but mixed with deductive thinking. A picture of a stop sign is not a stop sign, neither is a reflection of one. Trying to teach a neural network all of these exceptions seems like a terrible approach and would likely not work very well.
That's why getting a computer to do basic real-world shit that humans do with aplomb will likely require a merger of GOFAI with the new, statistics-based AI.
Hawaii (among other places) has banned billboards. It is actually quite nice not to have annoying advertisements all over the place. So, it is possible, but it will take a concerted political push.
Cars should not detect "obstacles". They should detect roads. They should not seek _exceptions_ to driveable space through classification- or segmentation-first strategies. They should only detect driveable space. Full stop.
You either detect a full-stopping-distance's worth of clear road through reliable (non visual) ranging with dense enough samples to preclude wheel-catching voids ... or you will continue to run into "objects" which slipped through your classifier.
I can accept reduction in this margin (e.g. in dense traffic flow) when it is first shown to work with margin.
You don't need to know it's a "car" or a "cow" or a "corvette" to know that the flat surface you are on ends in less than your stopping distance.
Semantically, it's a meaningless distinction. But in implementation it's much easier to detect the shape of the upcoming terrain using, say, lidar or regular old multi-camera disparity. Flat and following your map: that's driveable. I'm guessing many take this approach, but it's seeming more likely that Tesla does not, and instead trusts monocular or arrayed cameras to track objects through frames from detection-first regimes. Seems dangerous.
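For what it's worth, here is a minimal sketch of the kind of ranging-first check being argued for, assuming lidar returns already transformed into a vehicle frame where the road plane sits near z = 0; all the thresholds are invented for illustration, not taken from any real system:

    import numpy as np

    def clear_corridor(points, speed_mps, lane_half_width=1.5,
                       reaction_time=1.5, decel=6.0, max_step=0.12):
        """Ranging-first drivability check: is there a flat, unobstructed
        strip of ground ahead at least one stopping distance long?

        points: (N, 3) lidar returns in the vehicle frame (x forward,
        y left, z up, ground plane near z = 0)."""
        stopping_dist = speed_mps * reaction_time + speed_mps**2 / (2 * decel)

        # Keep only returns inside the corridor we intend to drive through.
        in_lane = np.abs(points[:, 1]) <= lane_half_width
        ahead = (points[:, 0] > 0) & (points[:, 0] <= stopping_dist)
        corridor = points[in_lane & ahead]

        # Walk the corridor in 1 m slices; every slice must contain returns,
        # and every return in it must sit close to the ground plane.
        for near in np.arange(0.0, stopping_dist, 1.0):
            sl = corridor[(corridor[:, 0] >= near) & (corridor[:, 0] < near + 1.0)]
            if len(sl) == 0:
                return False  # a void (or an occlusion) counts as an obstacle
            if np.max(np.abs(sl[:, 2])) > max_step:
                return False  # something sticking up, or a hole
        return True

Note that it never needs to know whether the thing sticking up is a car, a cow, or a Corvette; it only needs to know the flat surface ends.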
An autonomous car which can only understand flat surfaces as drivable is not even walking; it's standing still, and is significantly more dangerous than a car slowing down and stopping in the middle of traffic.
Realising that a flat surface is drivable is probably 1% of 1% of what an autonomous car does on a road. Same goes for humans, I'd say.
You don't need only flat surface detection, but it should be the highest priority safety check. And it should be the primary check. We shouldn't turn onto one-way streets, of course! Just like we shouldn't swerve off streets when we detect something that can be spoofed ... such as the false positives shown in the article and the false negatives given by other Tesla crashes.
You can actually correct those instances by using reliable ranging devices with dense enough sampling to ensure flat surfaces without intermediate obstacles.
To say that there are no other factors at play is foolish, of course, but it's also foolish to say that somehow we shouldn't be doing this.
You can detect geometric hazards like potholes with reliable ranging devices.
I'm sure what I said was poorly worded. What I meant was that reliable ranging with dense sampling should be the primary mechanism that self-driving vehicles use to determine what areas of the road are safe to drive on, and probably what is even a road at all. I think efforts to avoid this are based on cost, not safety or functionality.
Other signals are just too transient to be safe. Vision-based, classification-first regimes are not guaranteed to detect hazards as reliably as a point ranging device. Is that controversial?
Potholes exist, but are geometric negative space. Clues to their existence can be provided by imagery or other sensors (which are subject to all the weather effects also, by the way), but a positive detection of good road or pothole should have priority.
Ok, that was too strongly worded, and you can read it as a reduction to basic ranging. That's fine.
I can rephrase: Reliable ranging should override other sensor mechanisms for detecting driveable space and presence of hazards.
Tracking with dense ranging (e.g., LIDAR) is not as challenging as the alternative: ensuring that non-ranging system will reliably segment all possible hazards.
I think the concept of the line of death, from browser security UI design, is relevant here. The space outside of the highway right of way is completely uncontrolled and the Tesla should know better than to look at a billboard hundreds of feet up in the air for regulatory signs. The MUTCD has rules about where signs are to be placed--look only in those places.
The problem is that they don’t have lidar and therefore can’t reliably measure depth, right? A billboard far away high up in the air may appear in the same direction as a closer road sign if you don’t have depth perception. You should be able to infer it from a camera with the right model but those methods are more fallible. Similar root cause to that time the guy got beheaded by a white truck perpendicular to the highway that looked like the sky.
I believe they are doing depth perception using computer vision.
The idea is that if you can take an image and predict the distances to the objects in it, you can then synthesize a warped version of the original based on those predictions, as it would appear from a different distance. In the case of a Tesla moving forward, that would be a split second later, slightly closer than the original image. As the car moves forward, you take that second real image and compare the synthesized one against it.
At first you probably get it wrong, but you adjust your predictions and recalculate until your synthesized image looks exactly like the real image, at which point you've got the distances pretty darn close.
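In code, that loop looks roughly like this; a toy numpy sketch for a grayscale image and pure forward motion, using forward splatting instead of the proper inverse warping with bilinear sampling that real systems (and learned depth networks) would use, and ignoring the camera intrinsics that general motion would require:

    import numpy as np

    def warp_forward(img, depth, t):
        """Synthesize how `img` (grayscale, h x w) would look after the
        camera moves forward by t metres, given a per-pixel depth map in
        metres. With a pinhole camera and pure forward motion, each pixel's
        offset from the image centre is magnified by depth / (depth - t)."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                z = depth[y, x]
                if z <= t:              # point would be behind the new camera
                    continue
                s = z / (z - t)         # apparent magnification
                sy = int(round(cy + (y - cy) * s))
                sx = int(round(cx + (x - cx) * s))
                if 0 <= sy < h and 0 <= sx < w:
                    out[sy, sx] = img[y, x]
        return out

    def photometric_error(img_t0, img_t1, depth_guess, t):
        """How badly does the depth guess explain the next real frame?
        Minimizing this over depth_guess is the "adjust and recalculate"
        step described above."""
        predicted = warp_forward(img_t0, depth_guess, t)
        return float(np.mean((predicted.astype(float) - img_t1.astype(float)) ** 2))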
I remember watching a talk about their computer vision, and it was pretty impressive. The challenge is that even though the system knows where billboards tend to be, it still has to look at them, because sometimes, in obscure situations, in little old towns with fuzzy rules, road signs actually do appear in weird places. That's where the car often trips up: without sufficient data it just can't build a prediction model for when to respond and when to suppress the new input.
If you don't evaluate everything your camera sees, you basically just decrease its FOV. Sure, that might net you fewer errors for the objects that are most relevant most of the time.
But it is the unexpected we need to train computer vision for to compete with humans. We can have rules for things like billboards, sure, but those will never be completely implemented everywhere.
The software just needs to be improved to identify something as a billboard.
Someone else linked this[0] article that has a video of the attack. I think seeing the video will paint a different picture than what you have in your head. The "billboard" here is in a location where a stop sign would be, not your average highway billboard.
One would hope Tesla's system has some depth awareness. Honestly I'd be impressed how they could do so much if it didn't.
This is pretty important.
The way the guy does this is very close to how you could also fool a human: hang up real stop signs in the wrong places, and people would "stupidly" start to respect them.
The only difference here is that a mere 500 ms flash elicits a response, where a human wouldn't notice.
on balance, a 500 ms response time does seem like a feature more than a bug compared to a human.
Are self-driving computers superior to humans and make less mistakes, or is it OK for a computer to be tricked by something some humans would also not detect?
Btw, I think I can tell whether there is a real stop sign or just something on a billboard after a brief 'huh' moment, and without slamming on the brakes. And humans still run circles around AI when it comes to plausibility-checking very out-of-the-ordinary events.
It isn’t either/or. One system can be overall superior to another, and that system can still have different vulnerabilities and weaknesses. Cars are generally considered better than horses for transport, but cars also cannot use grass for fuel.
I suspect flashing a stop sign sized image of a stop sign for 500ms on a tv mounted where the stop sign goes, at an intersection that should probably have a stop sign, would get most people to stop.
I thought the same thing, and I'd most certainly slow way the heck down while trying to figure out what the heck I just saw. I'd probably presume it was some sort of programmable temporary signage on the fritz and treat it like a traffic light with the lights out.
Of course you could hack a billboard to show a picture that would distract human drivers. In fact, most billboard ads aim to do just that, since most humans on the road are driving (most commuting cars carry just one person).
Anything with software sadly becomes a weapon, especially when it's heavy, bulky, and potentially deadly.
There will be a lot of money thrown at adversarial attack mitigation. But how much is too much? At some point it might become so expensive that it will be cheaper to switch to guiderail-based self-driving, which can be overseen and secured centrally. And self-driving planes are probably going to be a nonstarter.
Can this simply be prevented by sign edge detection?
Are there standard traffic sign dimensions?
If the new Teslas can detect depth, then it should be possible to recognize what is a billboard and ignore it (a rough sketch of that check is below).
I do not know the laws in America: is it legal to display real traffic signs digitally on billboards?
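If you have an estimated range to each detection (from stereo, learned depth, or radar), one cheap sanity check is to back out the physical size of the "sign" from its pixel size and distance, and ignore anything that isn't roughly regulation-sized (US stop signs are typically around 0.75 m across, somewhat larger on highways). A hedged sketch with a made-up detection struct and focal length, not anyone's actual pipeline:

    from dataclasses import dataclass

    @dataclass
    class SignDetection:
        """Hypothetical detection: bounding-box width in pixels plus an
        estimated range to the object in metres."""
        pixel_width: float
        range_m: float

    def plausible_stop_sign(det: SignDetection, focal_px: float = 1000.0,
                            min_m: float = 0.5, max_m: float = 1.2) -> bool:
        """Reject detections whose implied physical size is implausible.
        Pinhole model: width_m = pixel_width * range / focal_length.
        A stop sign painted on a distant billboard comes out metres wide;
        a sticker comes out centimetres wide. Both get ignored."""
        width_m = det.pixel_width * det.range_m / focal_px
        return min_m <= width_m <= max_m

    # 80 px wide at 10 m with a 1000 px focal length -> 0.8 m: plausible.
    # The same 80 px at 60 m -> 4.8 m wide: almost certainly a billboard.
    print(plausible_stop_sign(SignDetection(80, 10)))   # True
    print(plausible_stop_sign(SignDetection(80, 60)))   # False

Of course this only helps if the depth estimate itself is trustworthy, which loops back to the lidar argument upthread.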
My wife nearly had an accident because of a huge ad billboard that suddenly displayed huge roaring flames, to advertise for some steakhouse.
My general feeling is that these sort of intrusive, dangerous, anti-ecological (such a billboard consumes as much power as a family of 4), anti-social (one less worker to change billboards) ads should be banned, period.
I worked at a place trying to do roadside LED billboards (~2009, before they were common), and the rules about content (in the UK, at the time) were fairly strict: nothing animated, no cuts, only fades, even between distinct adverts, etc. From what I've seen of the fancy new ones, those rules seem to be mostly still around (in London anyway).
They also needlessly drive up land value around major roads, and I don't feel much sympathy for companies that subsist off of simply owning said eyesores.
I almost had an accident when a billboard that had been dark suddenly displayed a very bright image on my side. My body reacted by jerking the steering wheel as a reflex.
At another one, a big moving image actually made my eyes follow the movement instead of the road with cars on it. I had to make an effort to focus on the road and not the screen.
It also makes your eyes lose their adaptation to the dark.
I suggest a simpler solution. Billboards (particularly the LED array ones) should be forbidden, period. They are explicitly designed to capture attention away from drivers whilst they are doing a very dangerous activity. Why? Because they attract customers and make businesses money. This kind of thing needlessly has a human toll, and we could eliminate it with the stroke of a pen.
The entire concept of roadside advertising should be seriously looked at. The roads are dangerous enough as it is - especially with attention gradually being degraded by cockpit distractions - without superfluous equipment which is specifically designed to attract attention.
As an anti capitalist (who has taken a fair bit of flak on this board for it for obvious reasons), that sounds intriguing. As a road trip enthusiast, that sounds either:
- terrifying: finding services in far flung places could be nearly impossible
- likely to recreate both of the problems I’d want it to address in potentially more harmful ways: for road safety, it means discovering services would depend even more on electronic devices inside the vehicle and further from the safe driving field of vision; for profit opportunism preying on drivers’ eyes, it consolidates the market much further than it already is, and with even greater harmful incentive
Astute points; however distasteful advertising is to me, contextual and pragmatic advertising slips past that reaction and just seems "useful". But where would the line be drawn?
Anyone else getting strong "I, Robot" (the movie) vibes from this line of research?
I can see this happening as an opening shot to an action movie.
Some reporter finds out that $government is doing $bad_thing. They pack up the evidence, rush out the door to talk to their editor. A slick, self-driving car pulls up as they exit the building, they enter and pull away.
It is a rainy night, and the car zips along the city roads along thousands of others; just another regular evening. Cars passing each other at break-neck speeds with safety margins that are only really safe with positronic brains at the wheel.
We see the street, as seen by the computer. Bounding boxes and labels hovering over multiple viewpoint video feeds, all drawn in slick black & blue cyber aesthetic. Suddenly something changes at the left of the image, alerts go off.
Cut to the exterior: a super-slow-motion shot of the car from a back-right angle, with a big billboard visible on the left side of the road. It shows cryptic patterns that seem to depict running children, but never a whole child, just portions of them, in changing shapes and positions, with neural network adversarial noise covering the rest of the image.
Cut back to the onboard computer's view, still in slow motion. Red bounding boxes surround the billboard, and then a big red warning flashes in the center of the screen:
COLLISION IMMINENT.
All kinds of meters at the bottom of the dashboard start going wild as the car tries to find a safe exit state, the slow motion gradually returning to real time. It swerves to the right. The image gets rocked as the car is hit by another passing car. Another hit. The world on screen turns as the car goes flying.
Static noise fills the screen.
Next scene:
A news broadcast announces that there was a high-casualty accident on Suchandsuch Road last night. Investigators find that it was most probably caused by a repair shop installing a faulty aftermarket AI module into a car.
That's why I mentioned "I, Robot".
I'm not sure yet who manipulated that billboard, evil three letter agency or rogue AI.
But I am sure that this is a detective story and that Will Smith will be either the detective or the unwitting bystander who gets drawn into the whole mess. :P
I always find these things over hyped. Like yes, you can trick an autonomous car into stopping or crashing. But you can also trick a human into stopping or crashing by running into traffic with a construction hat and stop sign, or dropping rocks from an overpass. The reality is that road safety relies on the vast majority of people agreeing to not kill each other.