Yeah, but in this case, the impact is the same whether you die from human error or automation error. I would rather reduce the overall probability of death by 50% even if the probability of death by malfunctioning equipment goes up by 1%.
If the chance of an accident increases then sure, be against that. And I've no idea what the chances are. But to say you'd rather have a higher chance of dying from human error than a lower chance of dying from software error... doesn't make any sense.
Not quite what I was thinking of, so I'll make up a scenario and numbers.
Take any technology that automates a task. Say it has a 0.00005% failure rate that results in the death of a random passerby when it occurs. Then take a human being in charge of that same task. Say the risk of human error is 0.05%, likewise resulting in the death of a random passerby when it occurs.
There are people who will argue that the 0.05% is acceptable and will vehemently disagree with and prevent any attempt to switch to the 0.00005% failure rate, because the machine killing someone is unacceptable while a human killing 1,000-fold more people is somehow acceptable. Even if an assisted (not completely automated) version of the same task had a 0.005% risk due to human laziness, it would still be deemed unacceptable compared with the fully human-operated 0.05% error rate.
Morally, I can't wrap my head around the reasoning. More deaths seem to be acceptable as long as some person can be "at fault". The moment a death is "blameless" it cannot be allowed.
In other words: 10,000 "someone can be blamed" deaths is better than 10 "who is to blame?" deaths.
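To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of the expected-death comparison under the made-up rates above (the 1,000,000 task executions per year is my own assumption, purely for illustration):

    # Back-of-the-envelope expected-death comparison using the made-up
    # failure rates from the scenario above.
    human_error_rate = 0.0005         # 0.05% fatal error per task execution
    machine_failure_rate = 0.0000005  # 0.00005% fatal failure per task execution
    tasks_per_year = 1_000_000        # assumed volume, purely illustrative

    human_deaths = human_error_rate * tasks_per_year        # 500.0
    machine_deaths = machine_failure_rate * tasks_per_year  # 0.5

    print(f"Human-operated: {human_deaths:.1f} expected deaths/year")
    print(f"Automated:      {machine_deaths:.1f} expected deaths/year")
    print(f"Ratio:          {human_deaths / machine_deaths:.0f}x")

The 1,000-fold gap holds regardless of the assumed volume; only the absolute numbers change.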
That's too idealistic. The reduction of accidents caused by human error could very well be offset by a substantial increase in accidents caused by the machine/OS/software/hardware.
I can see where you're coming from, but we do the same with airplanes (disastrously with the 737 MAX), medical machines, and train signaling systems. It seems like these systems (and their bugs) will inevitably kill some people. But will they prevent more deaths than they cause? That's the fairer question, if you don't mind me saying so...
Intuitively, death-by-person is probably less controllable than death-by-equipment-failure. But, in reality, I'm not sure that it is.
There are processes/procedures to reduce the likelihood of both.
Even if the cumulative number of fatal accidents is reduced and certain types of accidents happen less often, if different types of accidents start happening more often, say accelerating into a wall uncontrollably, that is not optimal. I worry that we will accept 'slightly better' as good enough, along with the consequences that come with it.
At some point, economics and actuaries come into play. It will be worrying when 'AI is safer than humans!' can be pointed to as a reason to not correct expensive, perhaps intractable, AI failures.
Isn't one of the major benefits of computer automation that such adjustments (e.g. adding a safety feature that uses existing hardware) can be made with little cost/effort? If a few lines of code will save 2 lives a year, is it worth it?
I get where you're coming from and you ask a very interesting question: where do we draw the line? I think a good place to draw it is after we've eliminated accidental fatalities.
I also agree with this. In terms of liability, a human whom you can sue when they make a mistake is more valuable than a machine.
That's why, in life-critical applications, companies capable of taking on that risk are scarce: when accidents happen, the company has to take responsibility, and it cannot be resolved by just firing employees.
There will always be a long tail where the machines fail in scenarios a human could handle. Are we just going to write off those deaths as acts of nature?
I don't think this is true; people are much less forgiving when machines make fatal mistakes. People will also be hostile towards the perceived loss of autonomy.
I hope what they want is to reduce the likelihood of human error causing catastrophic failure. Not necessarily to take humans out of the loop altogether.
As your story illustrates, humans and machines both have failure rates. Sometimes the presence of humans actually makes a system less safe. The meltdown at Three Mile Island, for example, would not have happened if a panicked human operator had not overridden the automatic emergency cooling system.
"Sure you save some lives but how many will be late!" I do not like this math. For example the accident may not be once per century. And it is good for the bureaucracy to be risk adverse if it comes to my life.
The answer is that they should improve the technology so that the system is safe and faster.
Well, until it's fully automated, human safety is still important.
I think the problem we have is that destroying a human body with mindless repetitive tasks is often still "cheaper" than engineering an automated solution. It is simply cheaper to feed off desperation than to value human lives, and there is the ingrained notion that humanity cannot be productive without putting humans into positions of desperation. This is a mindset issue that needs to eventually change.
The inverse is true as well: if the tech progresses to the point where it's statistically safer, then the engineer has saved more lives. It depends on how you frame it. I don't know how it should be sorted out legally.
But we would never reap the benefits in the meantime. That was my main point. And this becomes crucial if the reduction in deaths from accidents is substantial compared to the deaths that happen due to AI errors.