This is clever, and Tesla's software is awful, but anyone doing this for real, with intent to divert cars from the road etc. would be committing a pretty serious crime.
Both good points - so this kind of crime would likely be indistinguishable from a random, terrifying software failure, with the blame falling on the car maker.
I mean it is a failing of the manufacturer on some level. Not saying someone intentionally exploiting the car isn't also at fault, but I also don't think it's unrealistic to hold the manufacturer accountable as well.
Tesla logs so much data and stores it and has used it to defend itself in various legal / PR matters. I wonder if they would catch it if someone did this. Or maybe suddenly they wouldn't have that data lol
Very often I see these tiny articles in local newspapers, 15 sentences, that go something like 'The driver lost control of their vehicle and died in the crash'.
I've had some terrifying software problems in Audis, including random shutdowns while driving due to faulty sensors. How often do you think someone who 'lost control of their car' - which implies human error - actually died because their car's electronics or software were faulty?
Yeah! I'm sure that's happened in software in less sophisticated, but maybe more understandable, ways. Lane-keeping & auto-braking software - which loads of cars have - is absolutely terrifying to think about, and I'd be fascinated to see these systems attacked from a research POV.
My old Celica tried to murder me on the M62 and Toyota denied the problem for years (the gas pedals got stuck under the floor mats). And that was just a really obvious (in retrospect) design flaw - for how long could a car maker viably deny safety problems in their software?
Shouting fire in a movie theater is not a good example of unprotected speech, and as such it's not a good place to start reasoning from in deciding whether other speech is unprotected. Otherwise, you start banning core political speech like advocacy against the draft, just like the since-overturned case that bad example comes from.
The fact that that example became well-known due to the awful decision in Schenck v. United States does not mean that it's not a good example of speech that ought not be protected. Speech that is false and intentionally causes a panic should not be protected.
> Speech that is false and intentionally causes a panic should not be protected.
What is true or false isn't always clear though, even if there's a popular or official consensus.
The point of having a public forum is to give new ideas a chance, as unheard of or unpopular as they may be. Of course, with that you get the risk of misinformation as well.
As a society we must choose how much risk we're willing to tolerate, and consider whether the downsides of the level of free speech we have today are really so much different from what we've historically dealt with.
>The point of having a public forum is to give new ideas a chance, as unheard of or unpopular as they may be. Of course, with that you get the risk of misinformation as well.
So since the ideas behind scientific racism, the dark enlightenment, white supremacy, Nazism and anti-semitism, and any prejudice based on conservative or religious ideals (such as sexual or gender discrimination or anti-LGBT prejudice) are neither new nor unheard of, it's OK to stop giving them a chance?
I mean... flat earth? Do we really need to make sure the flat earthers have as wide a platform as they want just in case they turn out to have been right all along? Do we really need to vigorously debate every new iteration of "maybe the Jews do deserve to be driven into the sea" or "maybe eugenics is a good idea" that crawls out from under the rocks of society every few years? Do we need to keep the flame of "Bill Gates created COVID as a pretext to tag everyone with mind-control chips to make it easier to harvest adrenochrome for satanic blood magick" alive for future generations to ponder the possible wisdom of?
Maybe we can agree that, while there may be subtle cases where legitimate scientific or political ideas which run afoul of the status quo and public acceptance need protection in a free market of ideas, most of what's actually being defended in these discussions is regressive garbage that can be tossed back into the wastebin of history from whence it came, with nothing of value lost?
At what point do we decide we have reached the "TRUTH" and stop giving ideas a chance?
If we were to take your argument back in time and apply it, what would we see?
"I mean...heliocentricism, really? Do we have to give Copernicus slosh any thought, we all know that God created the earth and then the heavens so it is only natural that they orbit the earth."
On the front page at the same time is a story about psilocybin and "What if a Pill Can Change Your Politics or Religious Beliefs?".
Right now we accept that sexual orientation is not a choice. What if in the future we find that exposure to chemicals/compounds can cause people to become more gender-norm conforming? What about less conforming to historical norms?
How would you propose we handle those possible futures? A drug that turns you gay is straight out of Alex Jones. A drug that turns men straight is straight out of conversion therapy.
> At what point do we decide we have reached the "TRUTH" and stop giving ideas a chance?
OK, so I see you've decided to ignore the context of my comment, pretend I was referring to some absolute "TRUTH", and then strawman it.
Yet everything you've listed, being scientific and therefore part of a framework in which those ideas could be proven or disproven, is part of what I mentioned at the end of my comment, which I will quote verbatim for you: "cases where legitimate scientific or political ideas which run afoul of the status quo and public acceptance need protection in a free market of ideas" - and is thus explicitly not germane to my point.
How do I suppose we handle science? Science. But then nothing I listed was actually science, nor is there any valid controversy around any of it, as science has invalidated practically all of it already. There is no universe in which it turns out Hitler was right all along if only we'd listened.
So let me flip the script on you - how long do you believe we should let ideas like white supremacy, anti-semitism, and lunatic conspiracy theories like QAnon, anti-vaxx, etc. remain in the marketplace?
Or even keeping it within the bounds of science - do you believe we should still be arguing the merits of the luminiferous aether and miasma theory, or "teaching the controversy" around creationism?
While the veracity of a statement is important, the problem is not really about "misinformation". Technologists and intellectuals are used to viewing speech as a form of information transfer. However, speech can also be performative[1]; saying "not guilty" to a judge or "I do" during your wedding are not really about conveying information - by saying those statements you are performing an action.
Falsely shouting "fire" is a false statement, but the problem happens when that statement is also functioning as the act of reckless endangerment (or incitement to riot, or whatever). Nobody cares if someone e.g. writes an op-ed that falsely claims the theater is on fire. It's the same misinformation, but the op-ed isn't also functioning as a performative act.
Far too often in these discussions, the different forms of speech are conflated. A lot of the "balancing freedom vs safety" problem is a false dilemma that becomes a lot easier to reason about when speech-as-information is separated from speech-as-action[2].
> The fact that that example became well-known due to the awful decision in Schenck v. United States does not mean that it's not a good example of speech that ought not be protected.
It might be an example of speech that ought not be protected, but it's not an example of unprotected speech grounded in either the preexisting case law or any precedential rule in Schenck that is still operative, so it's not a useful touchpoint for analysis. It's pure unsupported dicta sandwiched between two well-referenced points of law in the decision.
> Speech that is false and intentionally causes a panic should not be protected.
Intent is not an element of the well-known hypothetical described in Schenck, nor of the even more abbreviated form derived from it raised in this thread.
And yet you don't have to look far from here to see lots of people claiming that a social network is violating freedom of speech if it censors or in any way limits the distribution of false claims that are intended to and likely to cause people to take dangerous actions.
I think the legal protection that keeps them from being sued over user-provided content should be changed to remove such protections if they engage in political censorship of any kind. It's insane that we even need to consider a law for this. Companies come and go; it would equally be nice if we had no more bailouts - let them fail when they have to fail. On the other hand, I can see the value in large companies maintaining large bank accounts: in many cases it allows them to sustain their employee payroll in cases like COVID. I can't imagine what the unemployment rate would have looked like otherwise.
> How long until someone claims one of these images designed to cause trouble for AIs is free speech? And are they right?
They are not, any more than running someone through a printing press to kill them is “free press”. Regulating intentional harm without reference to the particular means is not a violation of free speech or other expressive rights just because someone uses a tool that could instead be used as an expressive medium as a mechanism of inflicting the harm.
That would hold up as well as a claim that hurling rocks painted with political slogans through the windshields of passing motorists counts as free speech.
This seems like hanging a rock painted with political slogans off the side of a bridge (presumably by some sort of rope) and letting people drive into it.
The trick is to construct a real looking advertisement that “accidentally” triggers this effect, and order the ad placed on digital billboards in Silicon Valley through a fake company located somewhere far far away.
Like the famous 2017 Burger King ad that deliberately hijacked nearby Google Home smart speakers. [0]
There's an argument to be made that it should be covered by cyber-attack laws. You aren't allowed to deliberately take control of someone else's computer without their permission.
Consider a hypothetical example: an author without the use of their hands, who uses speech-to-text to write books and voice commands to control their computer. Imagine if they put you on speakerphone and you read out a sequence of commands to delete their work. That certainly wouldn't be covered under freedom of expression.
There's an obvious difference in degree, as spamming someone by hijacking their smart speaker isn't all that costly, but it's similar in category.
Your run-of-the-mill criminal has some sort of profit motive and this seems like a very roundabout way to attempt extortion.
A terrorist, on the other hand, would probably be dissatisfied with the not-quite-catastrophic outcome. If you want to cause real havoc, there are far more valuable targets to hack into.
To be fair, I think autonomous vehicles should be required to carry radar or lidar as a backup sensor (even if it's a low-resolution one). Using ML on images for everything is not going to solve your safety problems, because there's no way to guarantee the system won't crash when presented with a fake input. It's a darn sight harder to pretend to be farther away on a radar or lidar, though. At the very least, an autonomous vehicle should be able to guarantee it doesn't crash into static obstacles.
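Even a crude cross-check between the sensors would raise the bar. A toy sketch of the kind of thing I mean (all names and thresholds below are made up, not anyone's actual stack):

    # Hypothetical plausibility check: the planner never acts on a
    # camera-only distance estimate for a detected obstacle.
    CAMERA_LIDAR_TOLERANCE_M = 2.0  # made-up disagreement threshold

    def usable_obstacle_distance(camera_m, lidar_m):
        """Return the obstacle distance (meters) the planner may act on."""
        if lidar_m is None:
            # Camera sees something the lidar doesn't: be conservative
            # and assume the obstacle is closer than the camera thinks.
            return camera_m * 0.5
        if abs(camera_m - lidar_m) > CAMERA_LIDAR_TOLERANCE_M:
            # Sensors disagree. A spoofed image can fool the camera,
            # but faking a lidar range takes physical hardware, so
            # trust the nearer (worst-case) reading.
            return min(camera_m, lidar_m)
        return lidar_m

The point isn't the specific numbers; it's that fooling the system should require spoofing two physically independent sensing modes at once, not just showing it a picture.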
If you know that displaying unofficial highway signs will cause "self-driving" cars to swerve off the road, it's not much different an act from dropping breeze blocks off a motorway bridge.
I've no idea whether that'd be a highway crime, a computer misuse crime, or just attempted murder - that'd be a new one for lawyers in every country to argue.
There are already billboards with pictures of stop signs, stop lights, etc on them. It doesn't take any criminal intent. Things that were once just billboards are suddenly hazards.
I think banning Teslas as unsafe is an even more reasonable conclusion. Those cars can't be trusted, so they should not be allowed on public roads. Problem solved. Same goes for any other car with this issue.
Or human drivers for that matter. I heard that they have a very high failure rate and cause over 6,500,000 collisions a year in the US alone. Take them back to the drawing board and work out all the bugs until they can put a safe species of driver on the road.