Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

November 15, 2018

The Unlearned Lesson Of Amazon’s automated hiring tool

Filed under: Artificial Intelligence,Diversity,Machine Learning — Patrick Durusau @ 1:57 pm

Gender, Race and Power: Outlining a New AI Research Agenda.

From the post:


AI systems — which Google and others are rapidly developing and deploying in sensitive social and political domains — can mirror, amplify, and obscure the very issues of inequality and discrimination that Google workers are protesting against. Over the past year, researchers and journalists have highlighted numerous examples where AI systems exhibited biases, including on the basis of race, class, gender, and sexuality.

We saw a dramatic example of these problems in recent news of Amazon’s automated hiring tool. In order to “learn” to differentiate between “good” and “bad” job candidates, it was trained on a massive corpus of of (sic) data documenting the company’s past hiring decisions. The result was, perhaps unsurprisingly, a hiring tool that discriminated against women, even demoting CVs that contained the word ‘women’ or ‘women’s’. Amazon engineers tried to fix the problem, adjusting the algorithm in the attempt to mitigate its biased preferences, but ultimately scrapped the project, concluding that it was unsalvageable.

From the Amazon automated hiring tool and other examples, the AI Now Institute draws this conclusion:


It’s time for research on gender and race in AI to move beyond considering whether AI systems meet narrow technical definitions of ‘fairness.’ We need to ask deeper, more complex questions: Who is in the room when these technologies are created, and which assumptions and worldviews are embedded in this process? How does our identity shape our experiences of AI systems? In what ways do these systems formalize, classify, and amplify rigid and problematic definitions of gender and race? We share some examples of important studies that tackle these questions below — and we have new research publications coming out to contribute to this literature.

AI Now misses the most obvious lesson from the Amazon automated hiring tool experience:

In the face of an AI algorithm that discriminates, we don’t know how to cure its discrimination.

Predicting or curing discrimination from an algorithm alone lies beyond our ken.

The creation of reference datasets for testing AI algorithms, however, enables testing and comparison of algorithms, with concrete results that could be used to reduce discrimination in fact.

Actual hiring and other databases are private for good reasons, but wholly artificial reference databases would raise no such concerns.
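To illustrate the idea, here is a minimal sketch of how a wholly artificial reference dataset could be used to compare screening algorithms. Everything in it is invented for illustration: the dataset generator, the `proxy` feature (standing in for something like gendered wording on a CV), and the two toy screening functions. The point is only that, because skill is generated identically across groups, any disparity a screen produces is attributable to the screen itself and can be measured.

```python
import random

random.seed(0)

# A wholly artificial reference dataset: every candidate's skill is drawn
# from the same distribution regardless of gender, so any disparity a
# screening algorithm introduces comes from the algorithm, not the data.
def make_reference_dataset(n=10_000):
    data = []
    for _ in range(n):
        gender = random.choice(["F", "M"])
        skill = random.gauss(50, 10)
        # Hypothetical proxy feature correlated with gender (e.g. word
        # choice on a CV) but carrying no information about skill.
        proxy = (1.0 if gender == "F" else 0.0) + random.gauss(0, 0.1)
        data.append({"gender": gender, "skill": skill, "proxy": proxy})
    return data

def selection_rates(dataset, screen):
    """Fraction of each gender group that the screening function accepts."""
    rates = {}
    for g in ("F", "M"):
        group = [c for c in dataset if c["gender"] == g]
        rates[g] = sum(screen(c) for c in group) / len(group)
    return rates

def disparate_impact(rates):
    """Ratio of the lower selection rate to the higher (the EEOC's
    'four-fifths rule' flags ratios below 0.8)."""
    lo, hi = sorted(rates.values())
    return lo / hi if hi else 1.0

dataset = make_reference_dataset()

# A fair screen decides on skill alone; a biased one penalizes the
# gender-correlated proxy feature.
fair = lambda c: c["skill"] > 55
biased = lambda c: c["skill"] - 8 * c["proxy"] > 55

for name, screen in [("fair", fair), ("biased", biased)]:
    rates = selection_rates(dataset, screen)
    print(name, rates, round(disparate_impact(rates), 2))
```

Run against the synthetic data, the fair screen's disparate-impact ratio sits near 1.0 while the biased screen's falls well below the 0.8 threshold — the kind of concrete, comparable result that private production data cannot safely provide.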

Since we don’t understand discrimination in humans, I caution against a quixotic search for its causes in algorithms. Keep or discard algorithms based on their discrimination in practice, which is something we have shown ourselves capable of spotting.

PS: Not all discrimination is unethical or immoral. If a position requires a law degree, it is “discrimination” to eliminate all applicants without one, but that’s allowable discrimination.
