  • It's an odd one, isn't it? I wonder if it's linked to the view that the human way to do something wrong is to fail on a moral basis, which is why it's very difficult to get driving prosecutions to stick when the driver is simply incompetent. If they didn't mean to hurt anyone, then it's not their fault. In contrast, computers can't fail morally at something (the morality of embedding imperfect algorithms in them is on the developer); they can only fail technically, so the bar of actual performance is set correspondingly higher.

  • I've always assumed it's the premeditated sterility of an algorithm.
    A wrong judgement call in the heat of the moment is OK, but if you prioritise occupant safety in your self-driving algorithm, it feels more like premeditated murder.
