The big data era complicates the use of scatter plots because of the vast datasets involved, requiring significant streamlining if researchers are to glean useful information. Although algorithmic methods have been developed to identify plots containing one or more patterns of interest, little attention has been devoted to validating their results against those of human observers and analysts reading large sets of plots and their patterns.
A new study conducted by data-visualization researchers at New York University's Tandon School of Engineering found that outcomes acquired via algorithmic methods do not always correlate well with human perceptual assessments when grouping scatter plots according to similarity.
The team, led by professor Enrico Bertini, cited variables influencing such perceptual judgments, but argues that further work is necessary to develop perceptually balanced measures for analyzing large sets of plots, in order to better guide researchers who must routinely deal with high-dimensional data.
The researchers' 2016 paper will receive an honorable mention at the ACM CHI 2016 conference, which takes place May 7–12 in San Jose, CA.
From NYU Polytechnic School of Engineering
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA