Mathwashing: How Algorithms Can Hide Gender and Racial Biases by Kimberley Mok.
Scholars have long pointed out that the way languages are structured and used can say a lot about the worldview of their speakers: what they believe, what they hold sacred, and what their biases are. We know humans have their biases, yet many of us may have the impression that machines are somehow inherently objective. But does that assumption hold for a new generation of intelligent, algorithmically driven machines that are learning our languages and training on human-generated datasets? By virtue of being designed by humans, and by learning natural human languages, might these artificially intelligent machines pick up some of those same human biases?
It seems that machines can and do assimilate human prejudices, whether they are based on race, gender, age or aesthetics. Experts are now finding more evidence that supports this phenomenon of algorithmic bias. As sets of instructions that help machines learn, reason, recognize patterns and perform tasks on their own, algorithms increasingly pervade our lives. And in a world where algorithms already underlie many of the big decisions that can change lives forever, researchers are finding that many of these algorithms aren't as objective as we assume them to be.
…
If you have ever suffered from the delusion that algorithms, any algorithm at all, are "objective," this post is a must read. Or a must re-read, to remind yourself that "objectivity" is a claim used to put one's position beyond question, out of self-interest. Nothing more.
For my part, I'm not sure what's unclear here: the data collected, the algorithms chosen, and the interpretation of the results are all products of bias.
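To make that concrete, here is a minimal sketch of how a word-vector model built from a skewed corpus reproduces the skew of that corpus. The numbers are invented for illustration, not drawn from any real dataset; the point is that the bias enters through the data collected, not through anything the algorithm "decides."

```python
# Toy illustration with invented numbers, not a real corpus or model.
# Each "vector" counts how often a word co-occurs with the context
# words [he, she, work, home] in an imaginary, skewed body of text.
import math

vectors = {
    "doctor": [52, 8, 40, 5],    # the corpus pairs "doctor" with "he"
    "nurse":  [6, 49, 38, 7],    # and "nurse" with "she"
    "he":     [100, 2, 30, 10],
    "she":    [2, 100, 25, 20],
}

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Nothing in the algorithm mentions gender, yet the learned geometry
# reproduces the skew of the data it was trained on.
print(cosine(vectors["doctor"], vectors["he"]))   # ~0.93 (high)
print(cosine(vectors["doctor"], vectors["she"]))  # ~0.29 (low)
print(cosine(vectors["nurse"],  vectors["she"]))  # ~0.91 (high)
```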
There may be acceptable biases, or degrees of bias, but the goal of any measurement is a result, and that goal automatically biases the measurer toward phenomena that can be captured by a convenient technique. Phenomena that cannot be easily measured, however important, won't be included.
By the same token, "bias-correction" means either introducing an acceptable bias or limiting an existing bias to a level that whoever is judging it deems acceptable.
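As a hypothetical illustration of that point, here is a common form of "bias-correction" for word vectors, roughly in the spirit of Bolukbasi et al. (2016): remove each vector's component along a direction someone has declared to be "the bias." The vectors reuse the invented numbers above; note that the choice of direction, and of which words to neutralize, is itself a judgment call.

```python
# Hypothetical sketch of "debiasing" by projection; invented numbers.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def subtract(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(v, s):
    return [x * s for x in v]

def debias(vec, bias_dir):
    # Project vec onto bias_dir, then subtract that component,
    # leaving vec orthogonal to the chosen direction.
    coeff = dot(vec, bias_dir) / dot(bias_dir, bias_dir)
    return subtract(vec, scale(bias_dir, coeff))

# The "gender direction" is defined here as ("he" - "she"), which is
# already a decision about what counts as bias worth removing.
he  = [100, 2, 30, 10]
she = [2, 100, 25, 20]
gender_dir = subtract(he, she)

doctor = [52, 8, 40, 5]
doctor_neutral = debias(doctor, gender_dir)

# The corrected vector no longer varies along the chosen direction:
# the bias was not removed so much as replaced with an accepted one.
print(dot(doctor_neutral, gender_dir))  # ~0.0
```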
Bias is omnipresent, and while evaluating algorithms is important, always bear in mind that you are choosing acceptable bias over unacceptable bias.
Or, to misquote The Princess Bride: "Bias is everywhere. Anyone who says differently is selling something."