Critique Style Guide: Improving Crowdsourced Design Feedback with a Natural Language Model

Markus Krause, Tom Garncarz, JiaoJiao Song, Elizabeth M Gerber, Brian P. Bailey, Stephen P. Dow

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

39 Scopus citations


Designers are increasingly leveraging online crowds; yet, online contributors may lack the expertise, context, and sensitivity to provide effective critique. Rubrics help feedback providers but require domain experts to write them and may not generalize across design domains. This paper introduces and tests a novel semi-automated method to support feedback providers by analyzing feedback language. In our first study, 52 students from two design courses created design solutions and received feedback from 176 online providers. Instructors, students, and crowd contributors rated the helpfulness of each feedback response. From this data, an algorithm extracted a set of natural language features (e.g., specificity, sentiment) that correlated with the ratings. The features accurately predicted the ratings and remained stable across different raters and design solutions. Based on these features, we produced a critique style guide with feedback examples, automatically selected for each feature, to help providers revise their feedback through self-assessment. In a second study, we tested the validity of the guide through a between-subjects experiment (n=50). Providers wrote feedback on design solutions with or without the guide. Providers generated feedback with higher perceived helpfulness when using our style-based guidance.
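To make the idea of rating-correlated language features concrete, the sketch below shows a minimal, hypothetical feature extractor of the kind the abstract describes: it scores a piece of feedback on a specificity proxy and a simple lexicon-based sentiment measure, then combines them linearly. The word lists, weights, and function names are illustrative assumptions, not the authors' implementation, which learned its features from instructor, student, and crowd ratings.

```python
# Illustrative sketch (not the paper's actual model): extract two simple
# language features from design feedback and combine them into a crude
# helpfulness score. All lexicons and weights are hypothetical placeholders.
import re

POSITIVE = {"good", "great", "clear", "love", "nice", "works"}
NEGATIVE = {"bad", "confusing", "cluttered", "hate", "broken", "unclear"}
# Hypothetical cue words suggesting the critique names a concrete design element.
SPECIFIC_CUES = {"button", "font", "color", "layout", "header", "margin", "contrast"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Net count of positive minus negative lexicon words, per token."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return score / len(tokens)

def specificity(text):
    """Fraction of tokens that name a concrete design element."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    return sum(t in SPECIFIC_CUES for t in tokens) / len(tokens)

def helpfulness_score(text, w_spec=2.0, w_sent=1.0):
    # Hand-set linear combination for illustration; the paper instead fits
    # such weights to human helpfulness ratings.
    return w_spec * specificity(text) + w_sent * sentiment(text)
```

Under these assumed lexicons, a vague reaction like "I hate it." scores lower than feedback that points at concrete elements, e.g. "The header font is confusing but the button layout works.", which is the kind of contrast a style guide built from such features would surface to providers.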
Original language: English (US)
Title of host publication: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
Number of pages: 13
ISBN (Print): 978-1450346559
State: Published - 2017
