Novice and expert sensemaking of crowdsourced feedback

Research output: Contribution to journal › Article › peer-review

16 Scopus citations


Online feedback exchange (OFE) systems are an increasingly popular way to test concepts with millions of target users before going to market. Yet, we know little about how designers make sense of this abundant feedback. This empirical study investigates how expert and novice designers make sense of feedback in OFE systems. We observed that when feedback conflicted with frames originating from the participant's design knowledge, experts were more likely than novices to question the inconsistency, seeking critical information to expand their understanding of the design goals. Our results suggest that for OFE systems to be truly effective, they must support the nuanced sensemaking activities of both novice and expert users.

Original language: English (US)
Article number: 45
Journal: Proceedings of the ACM on Human-Computer Interaction
Issue number: CSCW
State: Published - Nov 2017


Keywords

  • Assessment
  • Crowdsourced feedback
  • Crowdsourcing
  • Design
  • Expert
  • Learning
  • Novice
  • Online feedback exchange
  • Sensemaking

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Social Sciences (miscellaneous)


