Soylent: A Word Processor with a Crowd Inside
I know, I know, I won’t even go there. As the librarians say: “Look it up!”
From the abstract:
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
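In code terms, Find-Fix-Verify splits an edit into three independent crowd stages: one set of workers finds passages that need work, a second proposes fixes, and a third votes on the candidates, so no single worker’s lazy or noisy answer survives unreviewed. A minimal sketch of that control flow, assuming a hypothetical ask_workers(prompt, n) helper standing in for real Mechanical Turk calls:

```python
from collections import Counter

def find_fix_verify(paragraph, ask_workers, find_n=10, fix_n=5, verify_n=5):
    """Sketch of the Find-Fix-Verify control flow. `ask_workers(prompt, n)`
    is a hypothetical stand-in for posting a task and collecting n
    independent worker answers."""
    # Find: independent workers each mark one phrase needing work; only
    # spans that at least two workers agree on move forward.
    spans = ask_workers(
        f"Identify one phrase that needs shortening:\n{paragraph}", find_n)
    agreed = [span for span, votes in Counter(spans).items() if votes >= 2]

    patches = []
    for span in agreed:
        # Fix: a separate group of workers proposes rewrites for the span.
        rewrites = ask_workers(
            f"Shorten this phrase without changing its meaning:\n{span}", fix_n)
        # Verify: yet another group votes on each candidate rewrite;
        # only majority-approved rewrites survive.
        approved = [
            r for r in set(rewrites)
            if sum(v == "yes" for v in ask_workers(
                f"Is this a good rewrite of '{span}'? (yes/no)\n{r}", verify_n))
               > verify_n / 2
        ]
        if approved:
            patches.append((span, approved))
    return patches
```

Separating generation from review is the point of the pattern: the workers who propose a fix never get to approve their own work.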
When I first started reading the article, it seemed obvious to me that the Human Macro option could be useful for topic map authoring, at least if the tasks were sufficiently constrained.
I was startled to see that a 30% error rate in the raw “corrections” was treated as the baseline, hence the necessity of correction/control mechanisms like Find-Fix-Verify.
The authors acknowledge that the bottom-line cost of outsourcing may weigh against its use in commercial contexts.
Perhaps so, but I would run the same tests against published papers and books to determine the error rate without an outsourced correction loop.
I think the idea is basically sound, although for some topic maps it might be better to place qualification requirements on the outsourced workers, as sketched below.
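Mechanical Turk supports this directly through qualification requirements attached to a HIT. A hedged sketch using the boto3 MTurk client, pointed at the requester sandbox; the qualification type ID is MTurk’s documented system qualification for approval rate, while the HIT fields and form are purely illustrative:

```python
import boto3

# Sandbox endpoint, so experimenting costs nothing.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Illustrative task form. A production page would read assignmentId from
# the URL query string instead of hard-coding the preview placeholder.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html><html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" name="assignmentId" value="ASSIGNMENT_ID_NOT_AVAILABLE"/>
        <p>Shorten the paragraph below without changing its meaning.</p>
        <textarea name="rewrite" rows="6" cols="60"></textarea>
        <input type="submit"/>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

response = mturk.create_hit(
    Title="Shorten a paragraph",
    Description="Edit the paragraph to be more concise.",
    Reward="0.10",
    MaxAssignments=3,
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
    QualificationRequirements=[
        {
            # System qualification: lifetime approval rate must be >= 95%.
            "QualificationTypeId": "000000000000000000L0",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "Accept",
        },
    ],
)
print("HIT created:", response["HIT"]["HITId"])
```

For topic map authoring one could go further and define a custom qualification type, granted only to workers who pass a small domain test.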