Early pioneers in computing, such as Herb Simon and Allen Newell, realized that human cognition could be framed in terms of information processing. Today, research like that described in the following paper is demonstrating the possibilities of seamlessly connecting human and machine information processors to accomplish creative tasks in ways previously unimaginable. This research is made possible by the rise of online crowdsourcing, in which millions of workers worldwide can be recruited for nearly any imaginable task. For some kinds of work and some fields of computer science these conditions have led to a renaissance in which large amounts of information are parallelized and labeled by human workers, generating unprecedented training sets for domains ranging from natural language processing to computer vision.
However, complex and creative tasks such as writing or design are not so straightforward to decompose and parallelize. Imagine, for example, a crowd of 100 workers let loose on your next paper with instructions to improve anything they find. You can quickly envision the challenges with coordinating crowds, including avoiding duplication of effort, dealing with conflicting viewpoints, and creating a system robust to any individual's lack of global context or expertise. Thus, the parallelized independent tasks typical of crowdsourcing today seem a poor match for the rich interactivity required for writing and editing tasks. These coordination and interactivity issues have been critical barriers to harnessing the power of crowds for complex and creative real-world tasks.
The authors introduce and realize an exciting vision of using crowd workers to power an interactive system—here, a word processor—in accomplishing complex cognitive tasks such as intelligently shortening text or acting as a flexible "human macro." This vision goes beyond previous "Wizard of Oz"-style approaches (in which humans are used to prototype functionality that is difficult to program) to permanently wiring human cognition into interactive systems. Such "crowd-powered systems" could enable entirely new forms of computational support not yet possible, and build up training data that could help develop AI.
A central challenge in realizing this vision is coordinating crowds to accomplish interdependent tasks that cannot be easily decomposed; for example, a paragraph in which one sentence needs to flow into the next. The authors introduce a crowd programming pattern called Find-Fix-Verify, which breaks down tasks such that some workers identify areas that need transformation, others transform the most commonly identified areas, and still others select the best transformations. Although no single individual need work on (or even read) the entire article, effort is focused on key areas while context is maintained within those areas. The authors show evidence that, using this pattern, crowds could collectively accomplish tasks with high-quality output—including shortening text, proofreading, and following open-ended instructions—despite relatively high individual error rates.
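The three-stage workflow described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the worker lists below are ordinary callables standing in for paid crowd workers, and the agreement threshold and voting rule are simplifying assumptions.

```python
from collections import Counter

def find_fix_verify(paragraph, find_workers, fix_workers, verify_workers):
    """Illustrative sketch of the Find-Fix-Verify pattern.

    Each *_workers argument is a list of callables simulating crowd
    workers; a real system would dispatch these as independent human tasks.
    """
    # Find: independent workers flag spans that need transformation.
    flagged = [span for worker in find_workers for span in worker(paragraph)]
    # Keep only spans identified by at least two workers, filtering out
    # idiosyncratic or low-effort responses (threshold is an assumption).
    counts = Counter(flagged)
    agreed = [span for span, n in counts.items() if n >= 2]

    patches = {}
    for span in agreed:
        # Fix: a second set of workers each proposes a rewrite of the span.
        candidates = [worker(paragraph, span) for worker in fix_workers]
        # Verify: a third set of workers votes on the best candidate.
        votes = Counter(worker(span, candidates) for worker in verify_workers)
        patches[span] = votes.most_common(1)[0][0]

    # Apply the winning rewrites.
    for span, replacement in patches.items():
        paragraph = paragraph.replace(span, replacement)
    return paragraph
```

Note how the pattern embodies the point in the text: no single worker sees the whole document, yet independent agreement in the Find stage and voting in the Verify stage guard against any individual's errors.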
One might ask how such an approach scales in terms of time or complexity. In terms of time, crowd marketplaces can suffer from a latency problem in waiting for tasks to be accepted by workers, and indeed this accounted for the bulk of time in each condition (~20 minutes). In terms of complexity, the authors acknowledge an important limitation in the degree of interdependence supported; for example, changes requiring modification of large areas or related but separate areas can lead to quality issues.
However, the field (including the authors) has since made tremendous progress in scaling up the speed, quality, and complexity of crowd work. The time needed to recruit a crowd worker has dropped from minutes to seconds following the development of methods such as paying workers to be on retainer, enabling time-sensitive applications such as helping blind users navigate their surroundings. The quality of crowd work has increased by orders of magnitude due to research ranging from improved task design (for example, using Bayesian Truth Serum where workers predict others' answers), to leveraging workers' behavioral traces (for example, looking at the way workers do their work instead of their output), to inferring worker quality across tasks and reweighting their influence accordingly.
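One simple way to infer worker quality and reweight influence, as mentioned above, is to estimate each worker's accuracy on gold-standard items and weight their votes accordingly. The log-odds weighting below is a common textbook scheme, used here purely for illustration; it is not tied to any specific system from the literature.

```python
import math
from collections import defaultdict

def worker_weights(gold_answers, worker_responses):
    """Estimate each worker's accuracy on gold-standard items and convert
    it to a log-odds vote weight (illustrative scheme)."""
    weights = {}
    for worker, responses in worker_responses.items():
        correct = sum(responses[q] == a for q, a in gold_answers.items()
                      if q in responses)
        total = sum(1 for q in gold_answers if q in responses)
        # Laplace smoothing keeps accuracies away from 0 and 1.
        acc = (correct + 1) / (total + 2)
        # Worse-than-chance workers get negative weight: their votes
        # count as evidence against the answers they pick.
        weights[worker] = math.log(acc / (1 - acc))
    return weights

def weighted_vote(task_responses, weights):
    """Pick the answer with the largest total weight across workers."""
    scores = defaultdict(float)
    for worker, answer in task_responses.items():
        scores[answer] += weights.get(worker, 0.0)
    return max(scores, key=scores.get)
```

Under this scheme a single reliable worker can outvote several unreliable ones, which is the essence of reweighting influence by inferred quality.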
Perhaps the most important question for the future of crowd work is whether it can scale up to the highly complex and creative tasks that embody the pinnacle of human cognition, such as science, art, and innovation. As the authors and others, including myself, have argued (for example, in The Future of Crowd Work), doing so may be critical to enabling crowd workers to engage in the kinds of fulfilling, impactful work we would desire for our own children. Realizing this future will require highly interdisciplinary research into fundamental challenges ranging from incentive design to reputation systems to managing interdependent workflows. Such research will be complicated by, but ultimately more impactful for, grappling with the shifting landscape and ethical issues surrounding global trends toward decentralized work. Promisingly, there have been a number of recent examples of research using crowds to accomplish complex creative work, including journalism, film animation, design critique, and even inventing new products. However, the best (or the worst) may be yet to come: we stand now at an inflection point where, with a concerted effort, computing research could tip us toward a positive future of crowd-powered systems.
To view the accompanying paper, visit doi.acm.org/10.1145/2791285
The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.