Creators of data visualizations currently receive little or no information about how well their audiences can read the visualizations they deploy. Yet visualization practitioners who seek research-backed design guidance must rely on rigid, overly generalized recommendations from studies in data visualization. Misapplied recommendations from visualization evaluation studies may not only artificially narrow the design space of chart types available to creators, but could also disadvantage sub-populations of readers whose abilities do not align with published visualization best practices.

The large number of data visualizations produced and read each year represents an opportunity to improve our understanding of individual differences in chart-reading ability. One aspect is scale: people encounter more data visualizations in their news, social media, television, and work than ever before. Another is diversity: people have a range of backgrounds and experience, which may shape their ability to effectively extract information from the visualizations they encounter.

This proposal explores using established computational visualization experiment protocols and robust statistical analysis techniques to quantify how well people interpret data visualizations. In particular, we explore a model in which experiments are transformed from evaluating different visualization types (e.g., bars versus pies) to evaluating different people. The proposed controlled experiments will vary participant expertise, test hypothesized correlates of visualization performance (e.g., numeracy and spatial ability), and use transparent statistical methodologies to establish dimensions of individual differences in visualization performance.
Effective start/end date: 10/1/20 → 8/31/23
- National Science Foundation (IIS-2120750-000)