-
It's an odd one, isn't it? I wonder if it's linked to a view that the human way to do something wrong is to fail on a moral basis, which is why it's very difficult to get driving prosecutions to stick on the basis that the driver is simply incompetent. If they didn't mean to hurt anyone, then it's not their fault. In contrast, computers can't fail morally at something (the morality of embedding imperfect algorithms in them is on the developer); they can only fail technically, and so the bar of actual performance is set correspondingly higher.
-
I've always assumed it's the premeditated sterility of an algorithm.
A wrong judgement call in the heat of the moment is OK, but if you prioritise occupant safety in your self-driving algorithm, it feels more like premeditated murder.
The self-driving thing I find interesting, because as a society (I don't mean us on this forum, I mean car-loving society at large) we seem able to accept the huge level of risk that cars pose to us as long as a human can be blamed.
How many innocent deaths are acceptable when humans are involved compared to when computers are in charge?
It's like taking human control out of the equation suddenly makes "greater than zero" deaths unpalatable.