• EdwardZzzzz is totally right here, in that the machine may well perform an action that leads to a death.

    As will an aeroplane on autopilot or a chainsaw with a faulty safety cowling and trigger.

    The question is, I suppose, whether software is inherently more spooky and anthropomorphic when it does this, and therefore more blameworthy.

    It's hard to imagine that the source code for the software will have a function called "kill someone". More likely it will have functions like "collision detector" and "emergency action implementor".

    I think that the decision path would be more "I am going to crash, slow down as much as you can" rather than "I am going to crash, is there something preferable I could crash into?".
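
    Purely as an illustration of that "brake, don't choose" decision path, here is a minimal sketch. The function name, thresholds, and structure are my own invention, not taken from any real autonomous-driving codebase:

    ```python
    # Hypothetical sketch of a "brake as hard as you can" emergency decision.
    # All names and numbers here are invented for illustration.

    def emergency_action(speed_mps: float, distance_to_obstacle_m: float,
                         max_decel_mps2: float = 8.0) -> str:
        """Decide what to do once a likely collision has been detected."""
        # Distance needed to stop from the current speed: v^2 / (2 * a)
        stopping_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)

        if stopping_distance_m <= distance_to_obstacle_m:
            # We can stop in time: brake normally.
            return "brake"
        # We cannot stop in time: still brake as hard as possible,
        # rather than trying to pick something "preferable" to crash into.
        return "brake_max"
    ```

    For example, emergency_action(20.0, 15.0) returns "brake_max": at 20 m/s with 8 m/s² of braking the car needs about 25 m to stop, more than the 15 m available.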

  • It's hard to imagine that the source code for the software will have a function called "kill someone". More likely it will have functions like "collision detector" and "emergency action implementor".

    The emergency action implementor is based on statistics, just like the object detector. I expect most people are using an RNN. One probably trains it against simulated, synthetic ground truth, with reinforcement in the field: what is the correct response?

    I think that the decision path would be more "I am going to crash, slow down as much as you can" rather than "I am going to crash, is ..."

    Slow down, of course... but what if it is clear that one can't slow down sufficiently? People are not that great at this task. Machines can detect things better.

    "... there something preferable I could crash into?"

    Exactly. And one uses object detection here. School bus? Child? Old woman? Old man? There really is no way around this; it is unavoidable. That is why we need to talk about ethics in this context!
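
    To make concrete why it is unavoidable, here is a purely hypothetical sketch of how detector output could end up steering the emergency choice. The class labels, the harm costs, and the cost function are all invented here; picking those numbers is exactly the ethical decision being argued about:

    ```python
    # Hypothetical sketch: detector labels feeding a "least bad crash" choice.
    # Labels, costs, and the cost function are invented for illustration only.
    from dataclasses import dataclass

    # Invented harm weights; choosing these IS the ethical question.
    HARM_COST = {"child": 100.0, "adult": 80.0, "school_bus": 90.0, "barrier": 5.0}

    @dataclass
    class Detection:
        label: str          # e.g. "child", "barrier"
        confidence: float   # detector score in [0, 1]

    @dataclass
    class Trajectory:
        name: str
        detections: list    # objects the vehicle would hit on this path

    def expected_harm(traj: Trajectory) -> float:
        # Confidence-weighted sum of harm costs along this trajectory.
        return sum(HARM_COST.get(d.label, 10.0) * d.confidence
                   for d in traj.detections)

    def pick_trajectory(options: list) -> Trajectory:
        # The "least bad" option under the invented cost table above.
        return min(options, key=expected_harm)

    options = [
        Trajectory("stay_in_lane", [Detection("child", 0.9)]),
        Trajectory("swerve_right", [Detection("barrier", 0.95)]),
    ]
    print(pick_trajectory(options).name)  # -> "swerve_right"
    ```

    Whether a vehicle should ever run a rule like this, and who writes the cost table if it does, is the ethics discussion this thread is pointing at.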
