Face to face: Evaluating visual comparison

Brian Ondov, Nicole Jardine, Niklas Elmqvist, Steven Franconeri

Research output: Contribution to journal › Article › peer-review

17 Scopus citations


Data are often viewed as a single set of values, but those values frequently must be compared with another set. Existing evaluations of designs that facilitate these comparisons tend to be based on intuitive reasoning rather than quantifiable measures. We build on this work with a series of crowdsourced experiments that use low-level perceptual comparison tasks arising frequently in comparisons within data visualizations (e.g., which value changes the most between the two sets of data?). Participants completed these tasks across a variety of layouts: overlaid, two arrangements of juxtaposed small multiples, mirror-symmetric small multiples, and animated transitions. A staircase procedure sought the difficulty level (e.g., value change delta) that led to equivalent accuracy for each layout. Confirming prior intuition, we observe high levels of performance for overlaid versus standard small multiples. However, we also find performance improvements for both mirror-symmetric small multiples and animated transitions. While some results are incongruent with common wisdom in data visualization, they align with previous work in perceptual psychology, and thus have potentially strong implications for visual comparison designs.
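The staircase procedure mentioned in the abstract is a standard psychophysical technique: task difficulty is adjusted trial by trial, moving toward harder trials after correct responses and easier trials after errors, until performance stabilizes at a target accuracy. The sketch below shows a generic 1-up/2-down staircase (which converges near 70.7% accuracy). All parameter names, step sizes, and stopping rules here are illustrative assumptions, not the authors' exact procedure.

```python
def run_staircase(respond, start_delta=0.5, step=0.05,
                  min_delta=0.01, max_reversals=8):
    """Generic 1-up/2-down staircase (illustrative, not the paper's exact method).

    `respond(delta)` returns True if the participant answered correctly
    at difficulty level `delta` (e.g., the value-change delta in a chart).
    Returns a threshold estimate: the mean delta at the reversal points.
    """
    delta = start_delta
    correct_streak = 0
    last_direction = 0           # -1 = last move made it harder, +1 = easier
    reversals = []

    while len(reversals) < max_reversals:
        if respond(delta):
            correct_streak += 1
            if correct_streak >= 2:          # two correct in a row -> harder
                correct_streak = 0
                if last_direction == 1:      # direction changed: a reversal
                    reversals.append(delta)
                last_direction = -1
                delta = max(min_delta, delta - step)
        else:                                # any error -> easier
            correct_streak = 0
            if last_direction == -1:
                reversals.append(delta)
            last_direction = 1
            delta += step

    return sum(reversals) / len(reversals)
```

For example, simulating an idealized observer who succeeds whenever the delta is at least 0.3, the staircase oscillates around that threshold and the reversal mean lands close to it.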

Original language: English (US)
Article number: 8440856
Pages (from-to): 861-871
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Issue number: 1
State: Published - Jan 2019

Keywords

  • Crowdsourced evaluation
  • Graphical perception
  • Visual comparison
  • Visual perception

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
