Exactly - the output of an AI is at most as good as the data used to train it. If your training data contains biases, those will show up in the output as well. Avoiding bias is becoming a big field in itself.
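To make that concrete, here's a toy sketch (entirely synthetic, made-up data, just to illustrate the mechanism): train a classifier on historical hiring labels that held one group to a higher bar, and the model learns to score that group lower even at identical skill levels.

    # Hypothetical example: biased training labels produce a biased model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    skill = rng.normal(size=n)          # true qualification, same distribution for both groups
    group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B

    # Biased historical labels: group B needed a higher skill bar to get hired.
    hired = (skill > 0.0 + 0.8 * group).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # Two equally skilled applicants, differing only in group membership.
    applicants = np.array([[1.0, 0], [1.0, 1]])
    print(model.predict_proba(applicants)[:, 1])  # group B gets a lower hire score

The model never "decides" to discriminate; it just faithfully reproduces the pattern in the labels it was given.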
For a semi-funny, semi-concerning example that isn't even complicated (unlike, say, hiring decisions): "HP computers are racist"
There is plenty of soft and hard evidence at this point that algorithms and AI as it exists today codify and extend the biases of the people who write that code, while appearing impartial because it's a "digital" process.
The idea of AI deciding which applications even get seen by a person without any meaningful oversight is pretty awful in my opinion.
Let alone the people who presumably just won't show up in the data at all, because they decided they don't want to share anything on Facebook/LinkedIn, etc.