-
Surely if it's a neural network, it's heavily dependent on what data it's been trained on, right? So there's always going to be an inherent bias resulting from the data chosen to train it.
Haven't read the article yet, by the way, so apologies if I'm way off the mark as a result.
Edit: Should've read the article; the last paragraph is relevant here:
“Dataset is generally biased which leads to bias in the models trained. The methodology can also lead to bias,” Jolicoeur-Martineau said in an email. “However, bias inherently comes from the researcher themselves which is why we need more diversity. If an all-white set of male researchers work on project, it's likely that they will not think about the bias of their dataset or methodology.”
-
There is obviously a problem there, but there are actually several distinct problems layered into it. Fundamentally, there is a many-to-one issue resulting from the loss of information when you pixelate: many different high-resolution faces map to the same low-resolution image, so any upscaler can only guess. I'm no expert on AI, but if you train on a dataset that's merely representative of the population, the most probable guess is a majority-group face, so the model may consistently avoid producing faces from a minority group.
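To make the many-to-one point concrete, here's a minimal sketch in plain NumPy (the 8x8 "images" are made up for illustration): two different high-res images collapse to exactly the same low-res image under block-averaging, which is essentially what pixelation does, so nothing downstream can recover which original you started from.

```python
import numpy as np

def pixelate(img, block=4):
    """Downsample by averaging each non-overlapping block x block patch."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)

# Build two different 8x8 "images" whose 4x4 blocks share the same means.
means = rng.uniform(0, 1, size=(2, 2))
a = np.kron(means, np.ones((4, 4)))            # flat, constant blocks
noise = rng.normal(0, 0.05, size=(8, 8))
block_means = noise.reshape(2, 4, 2, 4).mean(axis=(1, 3))
noise -= block_means.repeat(4, axis=0).repeat(4, axis=1)  # zero-mean per block
b = a + noise                                  # different pixels, same block means

print(np.allclose(a, b))                       # False: the originals differ
print(np.allclose(pixelate(a), pixelate(b)))   # True: identical after pixelation
```

Since arbitrarily many faces share one pixelated version, the model has to pick, and a pick driven by training-set statistics will lean toward the majority.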
Facial characteristics can (under some models) be represented as a deviation from a gender/ethnic norm, so if you try to infer ethnicity and gender first, you may get a higher likelihood of a correct outcome. But then there's the issue of the unknown lighting source, which makes skin tone really hard to read, since the observed brightness confounds tone with illumination.
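Here's a rough sketch of both ideas with purely made-up data (an eigenface-style "norm plus deviation" decomposition, and a toy illustration of why unknown illumination confounds skin tone; none of this is from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Faces as deviations from a norm (eigenface-style, toy random data) ---
faces = rng.normal(0.5, 0.1, size=(200, 64))    # 200 hypothetical flattened faces
mean_face = faces.mean(axis=0)                  # the "norm"
deviations = faces - mean_face
_, _, components = np.linalg.svd(deviations, full_matrices=False)
k = 10                                          # keep k principal directions
coeffs = deviations[0] @ components[:k].T
approx = mean_face + coeffs @ components[:k]    # face ~= norm + weighted deviations
print(np.linalg.norm(faces[0] - approx))        # residual; shrinks to 0 as k -> 64

# --- Why an unknown light source makes skin tone ambiguous ---
# Observed intensity is roughly albedo (tone) times illumination, so very
# different tones can yield the same pixel value:
dark_skin, bright_light = 0.3, 1.0
light_skin, dim_light = 0.6, 0.5
print(np.isclose(dark_skin * bright_light, light_skin * dim_light))  # True
```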
Essentially, this is a great example of something that humans are good at and computers aren't (yet). I suspect, however, that the Barack Obama example is fooling us about the size of that disparity, because it triggers a whole load of contextual information for our recognition that computers have no access to.
-
https://thegradient.pub/pulse-lessons/
Some more discussion of how the community responded to this.
AI, everybody. It's totally fine and unbiased in any way, shape, or form.
https://www.vice.com/en_us/article/7kpxyy/this-image-of-a-white-barack-obama-is-ais-racial-bias-problem-in-a-nutshell