Exactly - an AI's output is at most as good as the data it was trained on. If your training data contains biases, those biases will show up in the output as well. Avoiding bias is becoming a big field in its own right.
For a semi-funny, semi-concerning example that isn't even complicated (unlike, say, hiring decisions): "HP computers are racist"
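The mechanism is easy to see with a toy sketch (purely illustrative, nothing to do with HP's actual face-tracking code): if one group is underrepresented or mislabeled in the training set, even a trivially simple "model" faithfully reproduces that skew.

```python
from collections import Counter

def train(examples):
    # Toy "model": memorize the majority label seen for each feature value.
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, []).append(label)
    return {f: Counter(labels).most_common(1)[0][0]
            for f, labels in by_feature.items()}

# Hypothetical biased training set: "light" skin tones are well represented,
# "dark" skin tones appear rarely and are mostly mislabeled as background.
training_data = (
    [("light", "face")] * 95 +
    [("dark", "face")] * 2 +
    [("dark", "background")] * 3
)

model = train(training_data)
print(model["light"])  # → face
print(model["dark"])   # → background: the data's bias became the model's behavior
```

No one wrote "treat dark skin as background" anywhere; the skewed data alone produces the biased behavior, which is exactly why debiasing the training set matters.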