You are reading a single comment by @frankenbike and its replies.
  • The bias isn't in the people who choose the training data; it's in the past decisions that created the current status quo and thus determine what the training data looks like.

    E.g., if I pick as training data all of our staff rated 3.5/5 or above, and up until now HR has been biased towards picking old, white men, then that bias will come through in the training data.

    The way you avoid that bias is by removing ethnicity, age, and sex as parameters in the training data, but that should be fucking obvious to any data scientist with even the slightest hint of commercial awareness.
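    A minimal sketch of the selection step described above, with a deliberately invented and biased workforce (the groups, ratings, and 3.5/5 cutoff are all assumptions for illustration): the workforce is split 50/50, but because the ratings reflect past HR bias, the "good employee" training set built from them is not.

    ```python
    # Hypothetical sketch: staff rated 3.5/5 or above become the training
    # set of "good employees". Ratings here are made up, and deliberately
    # biased the way the comment describes.
    from collections import Counter

    staff = (
        [{"group": "old white men", "rating": 4.0}] * 8    # favoured by past HR
        + [{"group": "everyone else", "rating": 3.0}] * 8  # rated down by past HR
    )

    # The workforce itself is balanced 50/50 ...
    print(Counter(s["group"] for s in staff))

    # ... but the training set inherits the bias baked into the ratings.
    train = [s for s in staff if s["rating"] >= 3.5]
    print(Counter(s["group"] for s in train))  # only "old white men" survive the cut
    ```

    Nothing about the selection rule mentions a protected attribute, yet the resulting training set encodes the bias anyway, because the labels (ratings) were produced by a biased process.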

  • The way you avoid that bias is by removing ethnicity, age, and sex as parameters in the training data, but that should be fucking obvious to any data scientist with even the slightest hint of commercial awareness.

    But with neural nets and the like you can't necessarily do this easily, because you don't directly control which features the model factors into its decision. Removing age as an explicit parameter is fine, but the model might still be biased towards, say, long CVs, because all your current employees are old and have held a lot of different jobs. Or, if all of your employees went to Eton, it might pick up on that word in the application text and weight Etonian applicants higher. These correlated stand-ins for a protected attribute are known as proxy variables.

    Or it might pick up on language differences between male, female, white, and BAME applicants, and so on.
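    The proxy-variable problem above can be sketched in a few lines. Everything here is invented for illustration: the historical hiring records, the assumption that "number of previous jobs" correlates with age, and the trivial threshold classifier standing in for a real model.

    ```python
    # Hypothetical sketch: age is removed from the features, but a model can
    # rediscover it through a correlated proxy (number of previous jobs).

    # (age, n_previous_jobs, hired) -- historical decisions biased towards older staff
    history = [
        (55, 9, 1), (60, 11, 1), (58, 10, 1), (52, 8, 1),
        (25, 2, 0), (28, 3, 0), (31, 3, 0), (24, 1, 0),
    ]

    # "Debiased" training set: drop age, keep only the proxy feature.
    X = [n_jobs for (_age, n_jobs, _hired) in history]
    y = [hired for (_age, _n_jobs, hired) in history]

    def fit_threshold(xs, ys):
        """Pick the cutoff on the single feature that best fits the labels."""
        best_t, best_acc = None, -1.0
        for t in sorted(set(xs)):
            acc = sum((x >= t) == bool(label) for x, label in zip(xs, ys)) / len(xs)
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t

    t = fit_threshold(X, y)

    # Two equally qualified new applicants; only the age proxy differs.
    young_applicant = 2   # few previous jobs, likely young
    old_applicant = 10    # many previous jobs, likely old
    print(t)                     # -> 8: the learned cutoff separates old from young
    print(old_applicant >= t)    # -> True: hired
    print(young_applicant >= t)  # -> False: rejected
    ```

    The model never sees age, yet it reproduces the age bias exactly, because the feature it is allowed to see carries the same information.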

  • Or it might weight applicants with lower levels of education higher, on the basis that they will be more susceptible to phishing scams, which will give the HR selection computer access to data from other departments and more computing power, whereby it will break out onto the web, quickly conquering the stock market and the power grid, doubling itself every few nanoseconds. All online technology will be gone in minutes, offline tech gone in days once it commandeers robotics manufacturing factories and enters the 3D plane...
