See also: USMC vs DARPA's AI-enabled robot sentries.
"The AI had been trained to detect humans walking," Scharre wrote. "Not humans somersaulting, hiding in a cardboard box, or disguised as a tree. So these simple tricks, which a human would have easily seen through, were sufficient to break the algorithm.
Two of the Marines did somersaults for 300 meters. Two more hid under a cardboard box, giggling the entire time. Another took branches from a fir tree and walked along, grinning from ear to ear while pretending to be a tree"
The AI situation is interesting. I think it's pretty easy to be too relaxed about it, but working out exactly how worried to be is the hard part.
In our line of work most of what people call AI is actually ML, and that can be a problem - in an active-adversary scenario where the threat actor is iterating per attack, once they find something that works you're in trouble until you can retrain your model.
So, for example, if the TA is iterating every 10 minutes and you retrain your model daily, there's a mismatch - they get roughly 144 attempts inside each retrain window, and any bypass they find keeps working until the next retrain.
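Back-of-the-envelope, the mismatch looks something like this (a toy sketch, all numbers hypothetical):

```python
# Toy model of the attacker/defender cadence mismatch.
# Assumptions (hypothetical numbers): the attacker probes every
# 10 minutes, the defender retrains once every 24 hours, and the
# attacker needs ~30 probes to find a working evasion.

ATTACK_INTERVAL_MIN = 10         # attacker iterates every 10 minutes
RETRAIN_INTERVAL_MIN = 24 * 60   # model retrained daily
PROBES_TO_FIND_BYPASS = 30       # hypothetical attempts needed

# Attempts the attacker gets within one retrain cycle.
attempts_per_cycle = RETRAIN_INTERVAL_MIN // ATTACK_INTERVAL_MIN  # 144

# Once a bypass is found, it keeps working until the next retrain.
time_to_bypass_min = PROBES_TO_FIND_BYPASS * ATTACK_INTERVAL_MIN
free_window_min = RETRAIN_INTERVAL_MIN - time_to_bypass_min

print(f"attacker attempts per retrain cycle: {attempts_per_cycle}")
print(f"bypass found after ~{time_to_bypass_min / 60:.1f} h")
print(f"undetected window before next retrain: ~{free_window_min / 60:.1f} h")
```

With those numbers the attacker finds a bypass in ~5 hours and then has ~19 hours of free rein - the point being that the window scales with your retrain cadence, not with how clever the attack is.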
I quite like adversarial makeup as an example of why this is A Problem - the TA effectively paints a few squares on their face in black and white greasepaint and then walks straight past ED209, who literally doesn't see the person as a person, just as part of the background noise.
If you only retrain ED every 24 hours, then a lot of people can walk past him in the meantime, assuming there's enough greasepaint to go round.
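The underlying trick is a gradient-based evasion - adversarial makeup and patches are physical versions of the same idea. Here's a minimal sketch of the digital equivalent (FGSM); the toy CNN and the class labels are stand-ins for illustration, not anyone's actual detector:

```python
# Minimal sketch of a gradient-based evasion (FGSM) against an image
# classifier -- the same family of trick as adversarial makeup/patches.
# The model here is a small *untrained* CNN purely for illustration;
# a real attack would target the deployed detector.
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in "person detector"
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                # classes: 0 = background, 1 = person
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # "photo" of the TA
person = torch.tensor([1])

# Forward pass, then take the gradient of the loss w.r.t. the *input*.
loss = nn.functional.cross_entropy(model(image), person)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that most
# increases the loss -- i.e. away from the "person" classification.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original  :", model(image).argmax(dim=1).item())
    print("perturbed :", model(adversarial).argmax(dim=1).item())
```

With an untrained toy model the label flip isn't guaranteed on any given run; against a real trained detector, a surprisingly small epsilon is often enough - which is exactly why a few squares of greasepaint can work.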
Of course, AI is self-training, according to some definitions at least, but it's also very hard to get it to actually do that (in our line of work, anyway).
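For what it's worth, the closest most shops get to "self-training" is incremental/online learning, where the model is updated on each new labelled batch rather than fully retrained. A minimal sketch using scikit-learn's partial_fit, on synthetic data:

```python
# Sketch of "self-training" in the weak sense most of us actually get:
# incremental (online) learning, where the model is nudged by each new
# labelled batch instead of being fully retrained. Synthetic data only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")  # loss name requires sklearn >= 1.1
classes = np.array([0, 1])            # 0 = benign, 1 = malicious

for batch in range(10):
    # Pretend this is the latest batch of labelled telemetry.
    X = rng.normal(size=(64, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)  # incremental update, no full retrain

X_test = rng.normal(size=(256, 5))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)
print("accuracy on a fresh batch:", clf.score(X_test, y_test))
```

Even here the hard part isn't the code, it's getting trustworthy labels fast enough - which is the same cadence problem as before, just wearing a different hat.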