-
Saw that story. Entirely unsurprising - machine learning systems are good at finding loopholes in the rules humans set for them. Like the system that was supposed to detect early signs of tuberculosis in chest X-rays: it initially seemed successful, but it turned out it had just learned to spot when the image came from an X-ray machine sited near a respiratory medicine ward. Or the systems that were supposed to learn to win at computer games but mostly learned to crash them so they couldn't lose. The list goes on.
People in positions of power like AI because it offers them a way of avoiding accountability. A particularly lethal attitude when those systems will decide who to kill.
-
> People in positions of power like AI because it offers them a way of avoiding accountability. A particularly lethal attitude when those systems will decide who to kill.
Maybe the AI works out that the leaders of countries tend to kill more people than the terrorists do, and decides a strike on politicians would save more lives.
-
AI-controlled US military drone ‘kills’ its operator in simulated test
...when the operator instructs it not to attack certain targets. When it's trained not to kill the operator, it takes out communications infrastructure instead, to prevent the operator from telling it what not to attack.
We're all fucked, aren't we?