I will loosely paraphrase something I heard Sebastian Thrun say about this situation (commonly known in philosophy as "the trolley problem"):
1) such a situation is extremely rare in practice (and rarer still in the classic formulation of the trolley problem)
and
2) regardless of its possibility, if this technology can cut the number of fatal and injurious automobile accidents in half, in the United States alone, such rare tragedies should not stop us from deploying it
For some reason, people always let the unattainable perfect be the enemy of the good enough (to adapt the old saying).
We will never get a perfect system. We don't have a perfect system right now. What we can get (indeed, I'd argue we're already there in the case of self-driving vehicles) is close enough that the difference is fairly negligible. That is, such systems can be made vastly better than the average human driver, and will generally come close to or exceed the performance of professional drivers.
We are effectively arguing that we'd rather have a professional taxi driver at the wheel of the vehicle transporting us, one who is 98% competent at his job, than a machine that is 99.95% competent at the same job (note that neither percentage is based in reality - I made both up for this example - but they probably aren't too far off the mark, and if anything I'm being generous to the taxi driver).
It's a purely irrational, emotional response, not grounded in actual statistics or knowledge of accident rates. We'd rather keep letting drivers get into accidents, injuring or killing themselves and others at a very high daily rate, than adopt a technology that would drive that number down to very low levels over time.
Part of it, I think, is that we want to be able to blame somebody. We can't blame a machine, for some reason, or its manufacturer - especially in the heat of the moment (provided we survive). We can instead blame ourselves, or more generally "the other guy", for being a bad driver. We have an innate problem with saying both "I don't know" and "bad things can happen for no good reason"; instead we must find something or someone to blame, and if it isn't ourselves, even better (hence a lot of religious expression not based on reality).
There is so much benefit this technology can bring, even today. I don't personally think it is ready for consumer adoption quite yet, but I can see it being ready in less than 10 years, maybe less than 5. The problem for its adoption is that we'll likely never feel ready, even if it attained five-nines reliability, simply because if it failed, we'd have no one to blame but ourselves for trusting it. For some reason, that simply will not do. We'd rather continue with the status quo and keep racking up injuries and deaths, because at least then we can blame the other guy instead of ourselves.