Novice and Expert Sensemaking of Crowdsourced Design Feedback

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Online feedback exchange (OFE) systems are an increasingly popular way to test concepts with millions of target users before going to market. Yet, we know little about how designers make sense of this abundant feedback. This empirical study investigates how expert and novice designers make sense of feedback in OFE systems. We observed that when feedback conflicted with frames originating from the participant's design knowledge, experts were more likely than novices to question the inconsistency, seeking critical information to expand their understanding of the design goals. Our results suggest that in order for OFE systems to be truly effective, they must be able to support nuances in sensemaking activities of novice and expert users.
Original language: English (US)
Title of host publication: Proceedings of the ACM on Human-Computer Interaction
Publisher: ACM
State: Published - 2017

