Abstract
Online feedback exchange (OFE) systems are an increasingly popular way to test concepts with millions of target users before going to market. Yet, we know little about how designers make sense of this abundant feedback. This empirical study investigates how expert and novice designers make sense of feedback in OFE systems. We observed that when feedback conflicted with frames originating from the participant's design knowledge, experts were more likely than novices to question the inconsistency, seeking critical information to expand their understanding of the design goals. Our results suggest that for OFE systems to be truly effective, they must support the nuances in the sensemaking activities of novice and expert users.
| Original language | English (US) |
|---|---|
| Article number | 45 |
| Journal | Proceedings of the ACM on Human-Computer Interaction |
| Volume | 1 |
| Issue number | CSCW |
| DOIs | |
| State | Published - Nov 2017 |
Keywords
- Assessment
- Crowdsourced feedback
- Crowdsourcing
- Design
- Expert
- Learning
- Novice
- Online feedback exchange
- Sensemaking
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Networks and Communications
- Social Sciences (miscellaneous)