This is actually all very fascinating. But to stay slightly prosaic and pertinent, I think you may have argued yourself in a circle, Edwardz. If the behaviour is 'learned' via an RNN, with base truths that are not in themselves dangerous, harmful or evil, and it has passed some form of certification that the behaviour of the learned algorithm is appropriate and acceptable in a percentage of cases that is over a threshold, then where can blame be attributed at all?
I suppose the ethical question lies in either the base truths (don't hit people; hitting fewer people is better) or in the correctness assessment criteria (did you hit no one; did you hit the fewest people possible). These should all be quantised or discrete enough to be judged on their merits, though? Something like the sketch below, perhaps.
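Just to make that concrete, here is a minimal sketch of what such a discrete certification gate could look like. Everything in it is a hypothetical illustration: the Scenario fields, the policy signature, and the 99% threshold are assumptions for the sake of argument, not any real standard.

```python
# Hypothetical sketch of a discrete certification check for a learned policy.
# Scenario, min_possible_hits, and the 0.99 threshold are illustrative only.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Scenario:
    description: str
    min_possible_hits: int  # best outcome any policy could achieve here


def certify(policy: Callable[[Scenario], int],
            scenarios: Sequence[Scenario],
            threshold: float = 0.99) -> bool:
    """Pass if the policy achieves the best attainable outcome (hit no one,
    or the fewest people possible) in at least `threshold` of the scenarios."""
    passes = sum(1 for s in scenarios if policy(s) <= s.min_possible_hits)
    return passes / len(scenarios) >= threshold


# Example: a policy that always achieves the best outcome certifies trivially.
tests = [Scenario("swerve left", 0), Scenario("brake hard", 1)]
print(certify(lambda s: s.min_possible_hits, tests))  # True
```

The point being: once both the base truths and the assessment criteria are reduced to checks like these, the "blame" question collapses into whether the threshold and the metric were chosen appropriately.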