A new version of Get-Another-Label is available from Panos Ipeirotis.
From the post:
I am often asked what type of technique I use for evaluating the quality of the workers on Mechanical Turk (or on oDesk, or …). Do I use gold tests? Do I use redundancy?
Well, the answer is that I use both. In fact, I use the code “Get-Another-Label” that I have developed together with my PhD students and a few other developers. The code is publicly available on GitHub.
We have updated the code recently, to add some useful functionality, such as the ability to pass (for evaluation purposes) the true answers for the different tasks, and get back answers about the quality of the estimates of the different algorithms.
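The combination the post describes, redundancy plus gold tests, can be illustrated with a minimal sketch: aggregate redundant labels by majority vote, and score each worker against tasks whose true answer is known. This is not the Get-Another-Label implementation (which uses more sophisticated estimation); the data and names here are hypothetical.

```python
from collections import Counter, defaultdict

def majority_vote(labels):
    """Aggregate redundant labels for one task by simple majority vote."""
    return Counter(labels).most_common(1)[0][0]

def worker_accuracy(assignments, gold):
    """Estimate each worker's quality as accuracy on gold-test tasks.

    assignments: list of (worker, task, label) tuples.
    gold: dict mapping task -> known true answer.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for worker, task, label in assignments:
        if task in gold:
            total[worker] += 1
            correct[worker] += int(label == gold[task])
    return {w: correct[w] / total[w] for w in total}

# Hypothetical labeling data: three workers, two tasks.
assignments = [
    ("w1", "t1", "spam"), ("w2", "t1", "spam"), ("w3", "t1", "ham"),
    ("w1", "t2", "ham"),  ("w2", "t2", "ham"),  ("w3", "t2", "ham"),
]
gold = {"t1": "spam"}  # gold test: the true answer for t1 is known

by_task = defaultdict(list)
for _, task, label in assignments:
    by_task[task].append(label)

print({t: majority_vote(labels) for t, labels in by_task.items()})
# -> {'t1': 'spam', 't2': 'ham'}
print(worker_accuracy(assignments, gold))
# -> {'w1': 1.0, 'w2': 1.0, 'w3': 0.0}
```

The point of combining the two signals is that gold tests calibrate worker quality, which can then weight the redundant votes on tasks with no known answer.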
Panos continues his series on the use of crowdsourcing.
Just a thought experiment at the moment, but could semantic gaps between populations be “discovered” by use of crowdsourcing?
That is, create tasks that require “understanding” some implicit semantic in the task and then collect the answers.
There would be no “incorrect” answers, only answers that reflect the differing perceptions of the semantics of the task.
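The thought experiment could be operationalized by comparing answer distributions across populations rather than scoring against a gold standard: a divergence between the distributions suggests a semantic gap, not an error rate. A minimal sketch with hypothetical population tags and answers:

```python
from collections import Counter

def answer_distribution(responses):
    """Per-population distribution of answers to an ambiguous task.

    responses: list of (population, answer) tuples.
    Returns {population: {answer: fraction}}.
    """
    by_pop = {}
    for pop, answer in responses:
        by_pop.setdefault(pop, Counter())[answer] += 1
    return {
        pop: {a: c / sum(counts.values()) for a, c in counts.items()}
        for pop, counts in by_pop.items()
    }

# Hypothetical responses to one semantically ambiguous task,
# tagged by the respondent's population.
responses = [
    ("students", "A"), ("students", "A"), ("students", "B"),
    ("crowd",    "B"), ("crowd",    "B"), ("crowd",    "A"),
]

print(answer_distribution(responses))
```

Here neither “A” nor “B” is wrong; the interesting signal is that the two populations lean in opposite directions.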
A way to get away from using small groups of college students for such research? (Nothing against small groups of college students, but they best represent small groups of college students. Such research may need a broader semantic range.)