
August 9, 2015

Machine Learning and Human Bias: An Uneasy Pair

Filed under: Bias, Machine Learning — Patrick Durusau @ 10:45 am

Machine Learning and Human Bias: An Uneasy Pair by Jason Baldridge.

From the post:

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”

Algorithms like this obviously raise some uncomfortable questions. Who is on this list and why? Does it take race, gender, education and other personal factors into account? When the prison population of America is overwhelmingly Black and Latino males, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises the question: how do we operate in a world increasingly consumed with “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more?

Jason’s post is a refreshing step back from the usual “machine learning isn’t biased like people are” sort of stance.

Of course machine learning is biased, always biased. The algorithms are biased themselves, to say nothing of the programmers who inexactly convert those algorithms into code. It would not be much of an algorithm if it could not vary its results based on its inputs. That’s discrimination no matter how you look at it.

The difference is that discrimination is acceptable in some cases and not in others. One imagines that only women are eligible for birth control pill prescriptions; that’s a reasonable discrimination. Other bases for discrimination, not so much.

And machine learning is further biased by the data we choose to input to the already biased implementation of a biased algorithm.

That isn’t a knock on machine learning but a caveat: when confronted with a machine learning result, look behind it to the data, the implementation of the algorithm, and the algorithm itself before taking serious action based on the result.
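To make the data point concrete, here is a minimal sketch in Python. The numbers are entirely hypothetical (nothing here is taken from the Heat List or any real data): two groups with identical underlying offense rates, where one group happens to be observed, policed, twice as heavily. Any model trained on the resulting arrest records alone will “learn” that the more heavily observed group is twice as risky.

```python
import random

random.seed(0)

# Two groups with IDENTICAL underlying offense rates...
TRUE_RATE = 0.05

# ...but suppose (hypothetically) group "A" is policed twice as
# heavily as group "B", so offenses in A are twice as likely to
# show up in the arrest data we train on.
OBSERVE = {"A": 0.8, "B": 0.4}

def observed_arrest_rate(group, n=100_000):
    """Fraction of people in `group` who end up with an arrest record."""
    arrests = 0
    for _ in range(n):
        offended = random.random() < TRUE_RATE
        if offended and random.random() < OBSERVE[group]:
            arrests += 1
    return arrests / n

# A naive "risk" model built on arrest rates alone scores group A
# as roughly twice as risky, although behavior is identical.
for group in ("A", "B"):
    print(group, round(observed_arrest_rate(group), 4))
```

The bias enters before any learning happens at all: it is baked into how the training data was collected, which is exactly why the data deserves the first look.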

Of course, the first question I would ask is: “Why is this person showing me this result and what do they expect me to do based on it?”

That they are trying to help me on my path to becoming self-actualized isn’t my first reaction.

Yours?
