
Visual Embedding: A Model for Visualization

Çağatay Demiralp, Carlos Scheidegger, Gordon Kindlmann, David Laidlaw, Jeffrey Heer
Figure: Neural tracts colored by visual embedding of shape distances into CIELAB color space.

abstract

We propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (domain) within the mapped perceptual space (range). Visual embedding can serve as both a generative and an evaluative model. We demonstrate its use with three examples: coloring of neural tracts, scatter plots with icons, and evaluation of alternative diffusion tensor glyphs. We discuss several techniques for generating visual embedding functions, including probabilistic graphical models for embedding within discrete visual spaces. We also describe two complementary approaches - crowdsourcing and visual product spaces - for building visual spaces with associated perceptual distance measures. Finally, we present future research directions for further developing the visual embedding model.
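To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of one such embedding: pairwise distances among toy data points are embedded into three dimensions with metric MDS, and the resulting coordinates are treated as CIELAB colors, so that perceptually similar colors correspond to similar data points. The data, scaling ranges, and library choices (NumPy, SciPy, scikit-learn, scikit-image) are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skimage.color import lab2rgb

# Toy "data points": feature vectors standing in for, e.g., tract shape descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))

# Structure to preserve: pairwise distances in the data domain.
D = squareform(pdist(X, metric="euclidean"))

# Embed into 3-D with metric MDS; interpret the coordinates as CIELAB values.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
Y = mds.fit_transform(D)

def rescale(v, lo, hi):
    """Linearly rescale a vector into [lo, hi]."""
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    return lo + v * (hi - lo)

# Restrict each axis to a displayable CIELAB range (L* in [30, 90], a*/b* in [-60, 60]).
lab = np.column_stack([
    rescale(Y[:, 0], 30, 90),
    rescale(Y[:, 1], -60, 60),
    rescale(Y[:, 2], -60, 60),
])

# Convert to sRGB for display; nearby data points receive perceptually similar colors.
rgb = lab2rgb(lab[np.newaxis, :, :])[0]
print(rgb[:5])
```

In practice the paper's examples replace the toy vectors with domain objects (neural tract shapes, tensor glyphs) and replace Euclidean distance with a task-relevant structural distance; the embedding step and the perceptual color space play the same roles as in this sketch.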

materials and links

citation

Çağatay Demiralp, Carlos Scheidegger, Gordon Kindlmann, David Laidlaw, Jeffrey Heer
Visual Embedding: A Model for Visualization
IEEE Computer Graphics and Applications, 2014