>That being said, I still want this tech on the road ASAP, not because it's perfect, but because human drivers are So. Fucking. Awful.
This narrative is so pervasive, and I must admit it's a bit strange to me. Millions and millions of people strap themselves into thousands of pounds of metal and then drive 80 down the highway while texting, changing the radio, and telling their kids to please stop fighting, surrounded by other people doing the exact same thing. We do that every single day, and almost all of us are fine and will never have a 'major' accident. Humans are pretty fucking great at driving.
Besides, to say that humans are bad (or good) at driving would require some comparison: bad compared to what? As far as we know, humans are the best drivers in the universe. There's no evidence that self-driving is currently better than human drivers, no evidence that self-driving cars will soon be better than human drivers, and no evidence that if we allow tons of self-driving cars on the roads they will become better than humans. It's just blind hope. It really makes me want a word for whatever the opposite of a luddite is: instead of believing we can destroy machines to benefit people, we believe we can destroy people to benefit machines. It's all twisted up.
If you want to be an Uber or Tesla crash test dummy and walk/bike across a test track while they see if their vehicle can avoid you, then by all means go for it. But don't just foist that upon average people.
It's not that humans are bad; it's more that humans are unreliable by definition. That's why we have autopilots, industrial robots, elevator and traffic-light control systems, and why we try to automate stuff in general.
And we could install a cheap device in each car to detect when a driver is tired and keep them awake, or to detect a drunk driver and stop them from driving (or take some other action).
Until most people can afford a self-driving car, you would still have to face the drunk drivers and the ones that text, so you will not remove those from the roads.
There are luxury cars that attempt to detect when a driver is falling asleep, and people are sometimes mandated to have an interlock on their cars that prevents starting if they have alcohol in their breath. So if these things are not universal, presumably it's because of cost-benefit analysis, not because they need to be invented.
I was involved back in 2003-2004 with an Australian company called SeeingMachines (I worked for a UK redistributor).
These guys made a remote video face-and-eye tracker that they sold to automotive companies wanting to research different dashboard designs -- they would look for blinks and head-nods to measure how fatigued a driver was, and to optimise the dashboard design to keep the driver awake.
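For flavor, here is a toy PERCLOS-style sketch of how blink-based fatigue scoring can work. To be clear, this is not SeeingMachines' actual algorithm, and the frame rate and thresholds are made-up numbers:

    # Toy PERCLOS-style drowsiness score -- purely illustrative.
    # Flag fatigue when the eyes are "closed" for too large a
    # fraction of a sliding window of video frames.
    from collections import deque

    WINDOW_FRAMES = 1800     # ~60 s at 30 fps (assumed)
    CLOSED_THRESHOLD = 0.8   # eye-closure fraction that counts as "closed"
    PERCLOS_ALARM = 0.15     # alarm if eyes closed >15% of the window

    class DrowsinessMonitor:
        def __init__(self):
            self.frames = deque(maxlen=WINDOW_FRAMES)

        def update(self, eye_closure: float) -> bool:
            """eye_closure: 0.0 (open) .. 1.0 (closed), one value per
            frame from whatever eye tracker you have. Returns True
            when the driver looks fatigued."""
            self.frames.append(eye_closure >= CLOSED_THRESHOLD)
            return sum(self.frames) / len(self.frames) >= PERCLOS_ALARM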
I'm sure they are now selling their technology to automotive OEMs for placement in driver-facing camera ECUs.
Their technology worked OK back in 2003. I'm sure it is very robust and mature by now.
Which is why they aren't quite legal yet. And it's arguable anyway, given the noise in the signal (a tiny handful of fatalities to compare with the ~100k "normal" traffic deaths over the same period).
But like all new technology, it's improving. People aren't.
They are also better pilots than self-flying planes. And better writers than self-writing blogs.
And so on. There are, of course, many examples of things in which computers have outpaced human capabilities, and it's certainly plausible that some day this could apply to cars. But it hasn't happened yet, and it's also more than plausible that none of this will happen in the lifetime of anyone now living.
For some values of 'human', e.g. alert and sober. Which is, sadly, a minority?
It's arguable that self-driving cars are better already. What we need to do is secure the roads so the cars won't get confused (fence the roads & remove pedestrians/bikes). I know we can't do that everywhere, but we could do it for long-haul routes etc., like freeways already do.
>For some values of 'human', e.g. alert and sober. Which is, sadly, a minority?
I think the vast majority of human drivers are alert and sober. It's the long tail of drunk / aggressive / distracted drivers that causes crashes.
>It's arguable that self-driving cars are better already. What we need to do is secure the roads so the cars won't get confused (fence the roads & remove pedestrians/bikes). I know we can't do that everywhere, but we could do it for long-haul routes etc., like freeways already do.
I mean as a human driver, I obviously love limited access roads (and human drivers perform very well on limited access roads). We won't have more of those though - the cultural paradigm for urban planning is one of tearing down freeways, adding bike-lanes, and ensuring pedestrian friendliness for maximum walkability.
Everything you've listed involves repetitive actions (or a complete lack thereof, if my limited understanding of aviation is accurate), which is known to cause humans to lose focus (and make mistakes).
Driving a car, on the other hand, is usually NOT a repetitive, dull process. You're comparing apples to oranges.
Driving a car is absolutely a repetitive process. It's improvisational to a degree, but the mechanics and decision-making in response to stimuli are consistent, learned behaviors. And it's hard to argue that humans are mechanically better at driving than machines can be - better sensors, better feedback, more direct connections, better reflexes. So we're only questioning whether self-driving cars can respond to unexpected stimuli better than humans.
Technically, you are correct, but the spirit of my argument is in disagreement. Driving has a repetitive list of tasks, but unlike an elevator, the "buttons" to accomplish the task are moved around every single time. Sometimes new buttons appear, and sometimes some go missing. It's a chaotic environment. Thus, it is not a repetitive process in my mind, because a chaotic environment is not repetitive. It's like saying "A chaotic environment is repetitive because it's always chaotic".
The problem is there is no proof AI is better than humans. It probably can't ever be better than a human with the current hardware in current cars, but we have "hopeful" people who want to sacrifice lives to gather data for shitty startups that need self-driving cars ready this year or they will fail.
So we need this:
-real numbers from the AI vendors, so we can have a meaningful discussion based on real statistics
-good statistics, meaning we don't compare across different car types and driver ages/categories
-IMO we have a low-hanging fruit in preventing drunk drivers from using a car, plus a system to detect a driver that is not looking ahead for more than n seconds (see the sketch after this list); someone needs to handle this, it would improve the numbers a lot
-when we compare, say, a chess AI, we compare it with professionals; so if we want to say that self-driving AI is better, it would be more honest to compare it with an experienced driver who is not drunk
-we should not RUSH this because it will backfire; wait for hardware and software to be ready, so that if a concrete object is in front of the car, the radar can detect its position and the cameras and image recognition can detect what it is, instead of ignoring it like it's a road sign. Hardware and software should be ready to detect signs, bottles, people, bikes, and animals as well as humans do; if it is not as good as a human, then we have already lost, since a human also has many more "features", like intuition.
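To sketch the gaze-timeout idea from the list above (everything here, from the tracker interface to the 2-second limit, is hypothetical):

    import time

    LOOKAWAY_LIMIT_S = 2.0  # the "n seconds" above, picked arbitrarily

    class AttentionWatchdog:
        def __init__(self):
            self.look_away_since = None

        def update(self, looking_ahead: bool, now: float | None = None) -> bool:
            """looking_ahead: per-frame verdict from some gaze tracker.
            Returns True when the driver should be alerted."""
            now = time.monotonic() if now is None else now
            if looking_ahead:
                self.look_away_since = None
                return False
            if self.look_away_since is None:
                self.look_away_since = now
            return (now - self.look_away_since) > LOOKAWAY_LIMIT_S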
Why does everyone assume the vendors don't have data? They're recording full sensor logs in these tests, tracking everything they can. They wouldn't even be in this field if they didn't think they could outperform humans - and dozens of companies are doing this, from tiny startups to Google to General Motors. No one wants to take on the liability of crappy systems that perform worse than humans.
One interesting point of the development is that every failure is an opportunity to learn. The same cannot be said for human drivers. When the automation doesn't detect a hazard correctly, the developers can analyze it and tweak it so it detects better next time. Do humans do that?
Yes, that is the hope, but the problem is that "CAN" does not mean it actually learns. If you've seen what these AIs do, it's mostly ignoring the data, since the hardware is not powerful enough; the reality should be that the radar and cameras work together and identify all the detected objects, not ignore any input. Teslas hit a fire truck, then a police car, and killed 2 humans in similar conditions. Did Tesla's cars learn anything from that? If they did, it was not enough.
During some political protests a couple of years ago, in my part of town, a driver panicked and chose to drive through a crowd of high school kids crossing a street, with a walk light and police supervision. Luckily no one was killed, but one girl went under the car and had her leg crushed.
Do cars actually live stream everything?
If not, did Tesla and Uber extract the data and make any improvements? I did not read anything about that, just blaming the driver and obfuscating the truth.
I am not against the self-driving car idea. I can't drive because of my eyes, and my son also has bad eyes (not as bad as mine) and epilepsy, so he probably will not be able to drive a car either. I am disappointed because the startups fucked this up by pretending they can solve the problem with the crappy, weak hardware and software they have. As long as you can't detect the exact object position and use image recognition to correctly tag all objects and compute all speed vectors, this can't be solved, and it is clear the current hardware Tesla and Uber use is not enough.
Self-driving may be the future, but it is not the present, and it is not the near future either.
Imagine an AI doctor: would you feel good knowing it is better than a drunk doctor?
Would you send your child off in a self-driving car that is better than a drunk driver but not better than you? After 1000 kills, the ANN would MAYBE adjust some weights somewhere and the odds would improve a bit.
Oh, that wasn't a Tesla, just an ordinary car, driven by a person who was terrified of that many young black people in one place. Which rolls around to a point that I keep making, that humans are emotional, and their emotional states greatly affect their driving, to the point of deliberately doing things that will hurt or kill others. Road rage is a common problem.
I agree self driving is not the present, but I think it's the near future - 10-20 years.
And to be absolutely clear - I believe that full-on self-driving technology, with no human assist or failover in normal on-road cases, will be a better driver than me within the next decade. I've had my driver's license for 37 years, and I love driving. I've had two moving violations in my life (last one over a decade ago), haven't been in an accident in 17 years, and haven't been in an injury accident ever. I'm a really good driver. And I'm kind of neurotic about being in a car with other people driving - it makes me nervous, even when I know they're good too.
But I'm totally looking forward to the day I can trust my car to do the driving for me, knowing it's better than I am.
Then we agree on most things, except that I am more pessimistic about how much time it will take for the technology to reach the level of a good human driver. Maybe if we get cheap and good enough radar/lidar or similar sensors, we could at least get cars that would not hit static objects.
The AI needs/has advantages like you said, but those are needed to compensate for the big missing part, human intelligence. I am afraid of stupid scenarios where, say, a bridge collapses and all the self-driving cars keep driving off the bridge because this exact bridge collapse was not programmed into the car.
>Do humans do that?
Do you have proof that with current software and hardware Tesla or Uber can make an AI better than a human?
Because it feels to me like they are faking it to buy time until some new hardware is ready. They have the data, but the hardware is too slow to analyze it in real time, so they ignore it. Even if, say, they want to ship an update for detecting concrete objects, it may be too computationally intensive to run on current hardware.
Well said. These SDV threads are getting pretty annoying; we keep cycling the same arguments at each other, and there has been no substantive change in how good SDVs are yet.
Interesting factoid... half of US traffic fatalities occur between the hours of 9pm and 3am on Friday and Saturday nights. Average drivers in average situations are, indeed, pretty safe. But drunk drivers? Angry drivers? Texting drivers?
If autonomous cars can drive at the skill level of average drivers - and not get drunk or angry or distracted by forwarding cat videos - then they will significantly outperform humans.
Additional evidence in the form of reason... autonomous drivers have better sensors than humans. Human drivers have limited vision that only points in one direction at a time. Mirrors offer some slight augmentation, but not much. Ever backed out of a parking space in a crowded parking lot and gotten into an accident because a car you couldn't see before was suddenly right behind you? Common situation. Autonomous drivers won't have that problem. They can see out the back and to the sides, 360 degrees at all times. (I'll note that some of the first deployed autonomous tech was cars that can parallel-park themselves.)
Additionally, autonomous systems have better reflexes. Remember the two-second rule? It exists to cover the delay from your eyes to your brain to your hands and feet. Machines can physically respond to changes in the situation faster than humans.
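Some back-of-the-envelope numbers make the point; the 0.2 s machine latency below is an assumption for illustration, not a measured figure:

    # Distance traveled before anyone (or anything) reacts.
    MPH_TO_MPS = 0.44704

    def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
        return speed_mph * MPH_TO_MPS * reaction_s

    for speed in (40, 70):
        human = reaction_distance_m(speed, 2.0)    # two-second rule
        machine = reaction_distance_m(speed, 0.2)  # assumed machine latency
        print(f"{speed} mph: human ~{human:.0f} m, machine ~{machine:.0f} m")
    # At 70 mph a human covers ~63 m before reacting; the machine ~6 m.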
Now, let's level up. Imagine a system where autonomous vehicles are transmitting short-range telemetry to nearby vehicles, letting them know their status, including velocity. They can relay back emergency status info - so the car that slams its brakes to avoid a pedestrian doesn't cause someone six cars back to rear-end the car in front of them. Merge-related accidents (related to parking-lot problems) can be reduced to nothingness with telemetry. Think about how Waze is doing this already with driving data for their mapping software. It'd work better with less human involvement.
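A toy sketch of what such a relayed message might look like. All field names and the hop-relay scheme are invented for illustration; real V2V efforts use standards like DSRC / SAE J2735:

    from dataclasses import dataclass

    @dataclass
    class BrakeAlert:
        sender_id: str
        position_m: float   # position along the roadway (e.g. from GPS)
        speed_mps: float
        decel_mps2: float
        hops_left: int = 6  # relay the warning up to six cars back

    def begin_precautionary_braking(decel_mps2: float) -> None:
        pass  # placeholder: hand the target deceleration to the planner

    def on_receive(alert: BrakeAlert, my_position_m: float, broadcast) -> None:
        # Each car behind the braking car slows immediately, then relays
        # the alert rearward so the warning outruns the wave of brake lights.
        begin_precautionary_braking(alert.decel_mps2)
        if alert.hops_left > 0 and my_position_m < alert.position_m:
            broadcast(BrakeAlert(alert.sender_id, alert.position_m,
                                 alert.speed_mps, alert.decel_mps2,
                                 alert.hops_left - 1))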
I can go on and on, but really, I'm gonna throw a bullshit card here. Fear of autonomous vehicles is magical thinking trumping reason. Humans do magical thinking all the time. We think safe things are scary, and we think scary things are safe, for symbolism rather than fact. That's why people in this discussion used the phrase "murderbot", but not "murderer".
>Interesting factoid... half of US traffic fatalities occur between the hours of 9pm and 3am on Friday and Saturday nights. Average drivers in average situations are, indeed, pretty safe. But drunk drivers? Angry drivers? Texting drivers?
There is exactly zero proof that self-driving vehicles are better than drunk drivers. There is exactly zero proof that self-driving vehicles are better than angry drivers. There is exactly zero proof that self-driving vehicles are better than texting drivers. There is zero proof they will soon overtake any of those categories.
>Additional evidence in the form of reason... autonomous drivers have better sensors than humans.
Teslas have to ignore stationary objects because they can't accurately estimate the height of those objects. My 'sensors' know when a tire shred is on the ground and about 2 inches high and when a bicyclist is on the ground and about 5 feet high.
>Additionally, autonomous systems have better reflexes. Remember the two-second rule? It exists to cover the delay from your eyes to your brain to your hands and feet. Machines can physically respond to changes in the situation faster than humans.
I have driven while texting, I still sometimes drive angry and I have (unfortunately) driven drunk in the past. I have never made the very simple mistake made in that video. Where is the quick mechanical reflex you are leading me to believe exists in self-driving vehicles?
>Now, let's level up. Imagine a...
No. Your imagination is running wild, imagining something that doesn't exist and won't in the near future. Your imaginary car not only is covered in sensors, but can communicate peer-to-peer with the cars around it, has no lag in processing complex input, can behave itself among human drivers, etc. None of this actually exists, none of it is actually out there, and we really don't have good reason to expect it any time soon.
>I can go on and on,
I believe you could, but it's just fantasizing; it's not a realistic portrait of what is actually happening.
>I'm gonna throw a bullshit card here.
Going to? What about the several you've already sent my way lol.
>Humans do magical thinking all the time. We think safe things are scary, and we think scary things are safe, for symbolism rather than fact.
Exactly! That's why people in this discussion called humans "So. Fucking. Awful." at driving and then wrote three paragraphs imagining a utopian, non-existent self-driving system which magically solves all problems and is really quite safe. Before diagnosing magical thinking in others you might attempt detecting it in yourself.
You keep insisting there's zero evidence of the performance of autonomous cars relative to human drivers (you're talking about drunk/texting, but I'm distilling the generalization from it). What evidence do you have that there is zero evidence? After all, we have off-road lab results from many manufacturers. We have on-road results. Some early-stage autonomous tech is already widely available and in use today, such as collision detection with auto-braking to prevent rear-end collisions. There is TONS of data. Stop pretending it doesn't exist.
Your video is an interesting example. That is not a fully autonomous vehicle, but rather an assisted one, one that expects human engagement. In the example case, it failed because of human error. Now, there's a good argument that semi-autonomous vehicles will lead to additional human error, but that's not saying what you're saying.
If you live in an icy climate (I do), and have been driving for a long time (I have), then at some point, you would remember the transition to anti-lock braking systems. Before them, we had to learn to pump brakes in certain ways on icy roads, to prevent locking. Now, just push down the pedal, and a computer does it for you - one that senses brake lock before it happens. It made winter driving much safer.
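For flavor, a toy version of what that computer is doing (illustrative only; real ABS controllers modulate pressure per wheel at high frequency and are far more sophisticated):

    SLIP_LIMIT = 0.2  # assumed slip ratio beyond which the wheel is locking

    def abs_step(vehicle_speed: float, wheel_speed: float,
                 pedal_pressure: float) -> float:
        """Return the brake pressure to actually apply this control tick."""
        if vehicle_speed <= 0:
            return pedal_pressure
        slip = (vehicle_speed - wheel_speed) / vehicle_speed
        if slip > SLIP_LIMIT:
            return 0.0           # wheel locking up: release, i.e. "pumping"
        return pedal_pressure    # grip is fine: apply the driver's demand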
Your criticism of Tesla sensors is flawed. You're assuming that their sensors will not get better. That seems an unreasonable and unlikely conclusion. And you're ignoring my point about 360 degree sensor coverage. I'm an excellent driver, but I find myself in danger regularly because I can't see everything.
>What evidence do you have that there is zero evidence?
You're really contorting yourself here; just give me an apples-to-apples comparison of human driving safety to self-driving vehicle safety and I'll be happy. (Hint: none exists)
>That is not a fully autonomous vehicle, but rather an assisted one, one that expects human engagement. In the example case, it failed because of human error.
No no, no word games. That is a Tesla marketing/liability line. What you see in that video is the current state of self-driving, and a product marketed as self-driving which can't steer around a stopped vehicle in the lane isn't really self-driving at all. Besides, if what you're talking about is Level 5-type self-driving systems, you have nothing supporting the idea that we get there soon or that current self-driving systems will get us there.
I mean, besides, if drunk drivers can navigate around that car, then you're basically admitting that drunk drivers are still better than the current state of self-driving technology (something I agree with).
>ABS
...
I think you're misunderstanding me. I'm not against technological progress that makes vehicles safer. I'm against the assumption that self-driving technology is already making cars safer.
>Your criticism of Tesla sensors is flawed. You're assuming that their sensors will not get better. That seems an unreasonable and unlikely conclusion. And you're ignoring my point about 360 degree sensor coverage. I'm an excellent driver, but I find myself in danger regularly because I can't see everything.
And you're assuming that their sensors will magically advance by leaps and bounds. You're trying to make me argue that humans are better drivers than the perfect system that exists in your head. I'm saying the perfect system that exists in your head is irrelevant, because it is not going to become reality.
I'm not saying autonomous vehicles will be perfect. I'm just saying they'll be better than humans. Which isn't exactly a high bar.
As for drunk drivers navigating around that car? There's an intersection a couple of miles from my house that has a stoplight. Southbound traffic approaches at highway speeds, climbs over a hill, hits a 40mph zone, and then the stoplight is a couple hundred meters later. There have been multiple fatal collisions there caused by, well, drunk drivers slamming into cars stopped at the red light at highway speeds (or faster). I'm terrified any time I'm stuck at that light after dark, especially on weekends.
Of course, an autonomous car would detect the speed zone change and the red light.
I'm saying that actually that bar is extremely high.
>Of course, an autonomous car would detect the speed zone change and the red light.
Would it? I think it's actually unclear that it would. Besides, this experience of yours is filling in for the place in your comment where you should be showing me data that self-driving cars are safer.
If an autonomous car can't read speed limit signs and see stoplights (and doesn't have an up to date on-board map of them), then that system has no business being on the road. But I'm assuming that low-hanging fruit like reading traffic signs and signals is a solved problem.
Assuming that it isn't a solved problem is kind of irrational. If your argument is "Well, maybe the car won't see the traffic light it already knows is there", then perhaps your argument isn't very good.
>If an autonomous car can't read speed limit signs and see stoplights (and doesn't have an up to date on-board map of them), then that system has no business being on the road.
Then you're kind of going against your original comment, where you said you wanted self-driving vehicles on the road ASAP. All you have to do is google "Tesla Speed Sign" and you'll find tons of problems and examples of Teslas misreading signs. Here are some examples:
My AP1 car has interpreted a 35 mph sign as an 85 mph sign.
---
My understanding is AP1 visually identifies speed signs, but can screw up (reported that our highway 80 is misread as 80 mph)
---
Addendum: a 45 mph zone, permanently signed (large white/black sign, not a small orange/black sign), which by the way AP2 completely ignores. Seems error-ridden, and in this case an example that creates a safety/financial hazard not present with AP1.
---
And on some highways in the area, especially where there has been recent construction, the speed limit database thinks the speeds are 45 or 50 MPH in sections - when the posted speed limit is supposed to be 60 or 65 MPH. If the car is running under AutoSteer when it hits one of these sections, the car immediately slows down, because AutoSteer won't exceed the speed limit by more than 5 MPH on non-highway roads.
>(and doesn't have an up to date on-board map of them)
This is also something that doesn't always happen in real life. Here's a quick example just from googling:
The "new" speed limit database distributed last summer (from TomTom?) has numerous areas with missing speed limit data or incorrect speeds (both high and low), which causes issues with AutoSteer. And if this happens while driving at highway speeds, AutoSteer will try to quickly slow down in the middle of high speed traffic. While you can disengage AutoSteer and retake control, the other option is just to temporarily leave AutoSteer/TACC running, and manage the speed yourself until the software sees it's back in an area with the correct speed limit.
>But I'm assuming that low-hanging fruit like reading traffic signs and signals is a solved problem.
You know what they say about assumptions lol.
>Assuming that it isn't a solved problem is kind of irrational.
One, it has nothing to do with rationality. Two, it's actually not a solved problem.
>If your argument is "Well, maybe the car won't see the traffic light it already knows is there", then perhaps your argument isn't very good.
Or maybe the state of self-driving technology isn't actually as advanced as you believe. I'm glad we can agree that these systems have no business being on the road though.
Waymo has been transporting the roughly 400 people who applied, for 8 months, without any safety driver.
It has 7 million miles under its belt without any casualties, plus orders of magnitude more in simulated miles.
>If autonomous cars can drive at the skill level of average drivers - and not get drunk or angry or distracted by forwarding cat videos - then they will significantly outperform humans.
As long as you use the word "if" you can say pretty much anything you want.
>Additional evidence in the form of reason... autonomous drivers have better sensors than humans. Human drivers have limited vision that only points in one direction at a time.
That is not evidence. It is your own opinion. Nothing of the sort has been proven to work reliably.
>Autonomous drivers won't have that problem. They can see out the back and to the sides, 360 degrees at all times. (I'll note that some of the first deployed autonomous tech was cars that can parallel-park themselves.)
They can't "see". They can only sense what they have been programmed to sense. The envelope of what they can process is tiny. Sensors have flaws, humans programming those sensors have flaws. The human eye and visual cortex has been evolving for millions of years.
>I can go on and on, but really, I'm gonna throw a bullshit card here.
The biggest "bullshit" here is that you are claiming fantastical things that autonomous cars can do that have not been proven in the slightest. You implying that other people are stupid for being apprehensive about unproven tech is quite rich...
Just a side note, magical thinking doesn't quite mean what you're using it for. It's not synonymous with poor reasoning.
"A self driving car could kill me or others at any moment, and whether that happens is completely outside my control" is a reasonable fear. People may over-weight it compared to more familiar dangers that are in fact greater, but that's not magical thinking.
Magical thinking is more about believing in a causal relationship between unrelated events. Astrology is an example, but also things like "if I don't shave my beard my hockey team will win" or "I thought something bad about my kid and then they got hurt, I'm a terrible parent".