Designing incentives for inexpert human raters

Aaron D. Shaw, John J. Horton, Daniel L. Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

163 Scopus citations


The emergence of online labor markets makes it far easier to use individual human raters to evaluate materials for data collection and analysis in the social sciences. In this paper, we report the results of an experiment, conducted in an online labor market, that measured the effectiveness of a collection of social and financial incentive schemes for motivating workers to conduct a qualitative content-analysis task. Overall, workers performed better than chance, but results varied considerably depending on task difficulty. We find that treatment conditions which asked workers to prospectively think about the responses of their peers, when combined with financial incentives, produced more accurate performance. Other treatments generally had weak effects on quality. Workers in India performed significantly worse than US workers, regardless of treatment group.

Original language: English (US)
Title of host publication: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, CSCW 2011
Publisher: Association for Computing Machinery
Number of pages: 10
ISBN (Print): 9781450305563
State: Published - 2011
Event: ACM 2011 Conference on Computer Supported Cooperative Work, CSCW 2011 - Hangzhou, China
Duration: Mar 19, 2011 – Mar 23, 2011

Publication series

Name: Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW


Other: ACM 2011 Conference on Computer Supported Cooperative Work, CSCW 2011


Keywords

  • Amazon Mechanical Turk
  • Content analysis
  • Crowdsourcing
  • Experimentation
  • Human computation
  • Search
  • Sociology

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Networks and Communications


