  • There is plenty of soft and hard evidence at this point that algorithms and AI as it exists today codify and extend the biases of the people who write the code, while appearing impartial because it's a "digital" process.

    The idea of AI deciding which applications even get seen by a person without any meaningful oversight is pretty awful in my opinion.

    Let alone the people who presumably just won't show up at all because they decided they don't want to share anything on Facebook, LinkedIn, etc.

  • I don't think it has much to do with the bias of the coder. As I understand it, most issues currently come from the AI trying to replicate the existing pool of successful applicants, so it just continues to feed square white men into the interview pipeline.

  • People choose the training data sets and the rules that judge the responses, so there's plenty of room for unintended bias.

  • Exactly - any output of an AI is at most as good as the data it was trained on. So if your training data contains biases, those will be present in the output as well (see the sketch below). Avoiding bias is becoming a big field in itself.

    For a semi-funny, semi-concerning example that isn't even complicated (unlike hiring decisions): "HP computers are racist"
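    To make that concrete, here is a minimal sketch (hypothetical data and group labels, Python standard library only, not any real hiring system) of how a model fit to biased historical decisions simply reproduces that bias as an automated screening rule:

        from collections import Counter

        # Hypothetical historical hiring records: (group, hired?).
        # Group "A" was hired 80% of the time, group "B" only 20%.
        history = ([("A", True)] * 80 + [("A", False)] * 20
                   + [("B", True)] * 20 + [("B", False)] * 80)

        # "Training": estimate P(hired | group) straight from the history.
        hired_counts = Counter(group for group, hired in history if hired)
        total_counts = Counter(group for group, _ in history)
        p_hire = {g: hired_counts[g] / total_counts[g] for g in total_counts}

        # "Screening": pass an applicant on to a human only if the
        # historical hire rate for their group clears a threshold.
        def screen(group, threshold=0.5):
            return p_hire[group] >= threshold

        print(p_hire)        # {'A': 0.8, 'B': 0.2}
        print(screen("A"))   # True  -> group A reaches a human reviewer
        print(screen("B"))   # False -> group B is filtered out; the old
                             #          bias is now an automated rule

    Nothing in the code mentions the applicants' merits; the model just learned the historical outcome and now enforces it.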
