Hard-Coding Bias in Google “Algorithmic” Search Results.
Not that I want to get into an analysis of whether search results are hard-coded or not, but it is an interesting lead-in to issues a bit closer to home.
To what extent does subject identification have built-in biases that impact user communities?
Or less abstractly, how would we go about discovering and perhaps countering such bias?
For countering the bias you can guess that I would suggest topic maps. 😉
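One concrete place to start looking for such bias is vocabulary mismatch: do the terms used in a subject identification match the terms a particular user community actually uses? Here is a minimal sketch along those lines, assuming you have a set of assigned subject headings and a sample of search terms from each community (the headings and queries below are invented for illustration):

```python
from collections import Counter

def vocabulary_overlap(subject_headings, user_queries):
    """Rough measure of how well assigned subject terms match user vocabulary.

    subject_headings: list of strings (e.g. catalog subject headings)
    user_queries: list of strings (e.g. search terms from one community)
    Returns the fraction of user query terms that appear anywhere in the
    subject headings (0.0 = total mismatch, 1.0 = full overlap).
    """
    subject_terms = {w.lower() for h in subject_headings for w in h.split()}
    query_terms = Counter(w.lower() for q in user_queries for w in q.split())
    matched = sum(n for term, n in query_terms.items() if term in subject_terms)
    total = sum(query_terms.values())
    return matched / total if total else 0.0

# Hypothetical data: the same headings, two different user communities.
headings = ["Scripture -- Hermeneutics", "Markup languages -- SGML"]
law_queries = ["statutory interpretation", "legal hermeneutics"]
web_queries = ["how to read the bible", "what is sgml"]

print(vocabulary_overlap(headings, law_queries))  # 0.25 (higher overlap)
print(vocabulary_overlap(headings, web_queries))  # 0.125 (lower overlap)
```

A markedly lower overlap for one community than another suggests the identification was written in some other community's vocabulary, which is one observable symptom of the bias in question.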
The more pressing question, and one that is relevant to topic map design, is how to discover our own biases.
What seems perfectly natural to me, with a background in law, biblical studies, networking technologies, markup technologies, and now semantic technologies, may not seem so to other users.
To make matters worse, how do you ask a user about information they did not find?
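You usually can't ask directly, but search logs offer one indirect signal: queries that returned nothing are at least candidates for "information users did not find." A minimal sketch, assuming a hypothetical CSV log with `query` and `result_count` columns (the file name and format are assumptions, not a real system's):

```python
import csv
from collections import Counter

def failed_queries(log_path, min_count=2):
    """Collect queries that returned zero results, as candidates for
    'information users did not find'.

    Assumes a hypothetical CSV log with columns: query, result_count.
    Returns the most frequent zero-result queries.
    """
    misses = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["result_count"]) == 0:
                misses[row["query"].strip().lower()] += 1
    return [(q, n) for q, n in misses.most_common() if n >= min_count]

# Example usage (assuming such a log exists):
# for query, n in failed_queries("search_log.csv"):
#     print(f"{n:4d}  {query}")
```

Note the limitation: this only surfaces searches users attempted and lost. Information users never thought to search for, because the subject identification hid it from them, won't appear in any log, which is why the survey questions below still matter.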
Questions:
- How would you survey users to discover biases in subject identification? (3-5 pages, no citations)
- How would you discover what information users did not find? (3-5 pages, no citations)
- Class project: Design and test a survey for bias in a particular subject identification (assuming permission from a library). A minimal way to analyze the responses is sketched below.
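For that class project, once the survey is tabulated, a standard chi-square test can check whether recognition of a subject label differs across communities. A sketch, assuming yes/no responses grouped by community (the counts below are invented):

```python
from scipy.stats import chi2_contingency

# Hypothetical survey tallies: did respondents recognize the subject
# label "Hermeneutics" as naming what they were looking for?
#                 recognized  did not
observed = [
    [42,  8],   # law students
    [12, 38],   # undergraduates
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.4f}")
# A small p-value suggests recognition of the label differs by
# community, i.e. the identification carries a community bias.
```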
PS: There are biases in algorithms as well, but we will cover those separately.