
Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design

Figure: Experimental stimuli in which participants were asked to estimate what percentage the smaller value was of the larger.

abstract

Understanding perception is critical to effective visualization design. With its low cost and scalability, crowdsourcing presents an attractive option for evaluating the large design space of visualizations; however, it first requires validation. In this paper, we assess the viability of Amazon’s Mechanical Turk as a platform for graphical perception experiments. We replicate previous studies of spatial encoding and luminance contrast and compare our results with those prior findings. We also conduct new experiments on rectangular area perception (as in treemaps or cartograms) and on chart size and gridline spacing. Our results demonstrate that crowdsourced perception experiments are viable and contribute new insights for visualization design. Lastly, we report cost and performance data from our experiments and distill recommendations for the design of crowdsourced studies.
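
To make the replicated task concrete, the following Python sketch illustrates the proportional-judgment protocol: participants report what percentage the smaller value is of the larger, and accuracy is summarized with Cleveland and McGill's log absolute error, the measure this replication follows. The stimulus values, simulated judgments, and function names below are illustrative assumptions, not materials from the study.

# A minimal sketch of the proportional-judgment task and its error metric,
# following Cleveland & McGill. Names and values are hypothetical.
import math
import random

def true_percent(smaller: float, larger: float) -> float:
    """Ground truth: what percentage the smaller value is of the larger."""
    return 100.0 * smaller / larger

def log_abs_error(judged: float, truth: float) -> float:
    """Log absolute error: log2(|judged - true| + 1/8).
    The 1/8 offset avoids log2(0) for perfect judgments."""
    return math.log2(abs(judged - truth) + 0.125)

# Simulate a few noisy participant judgments of one stimulus pair.
random.seed(42)
a, b = 28.0, 71.0                       # hypothetical smaller/larger values
truth = true_percent(a, b)
judgments = [truth + random.gauss(0, 4) for _ in range(8)]  # simulated noise
errors = sorted(log_abs_error(j, truth) for j in judgments)

# Summarize with the midmean: the mean of the middle half of the responses.
lo, hi = len(errors) // 4, 3 * len(errors) // 4
midmean = sum(errors[lo:hi]) / (hi - lo)
print(f"true = {truth:.1f}%, midmean log error = {midmean:.2f}")

The midmean (mean of the middle two quartiles) is the robust summary Cleveland and McGill used; by discounting outlier responses it is well suited to crowdsourced settings, where a few careless answers are common.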

materials and links

PDF (607.4 KB)

citation

ACM Human Factors in Computing Systems (CHI), 203–212, 2010
Best Paper Nominee