Talk Session, Wednesday, May 20, 1:15 – 2:30 pm
Visualization & User Interface Design
Talk 1, 1:15 pm
Applying vision science to public policy: How visual implied motion in a new accessibility icon drives attention, interpretations, and motor responses
Marina Pace1, Tessa Bury1, Benjamin van Buren1; 1The New School for Social Research
Many icons depict humans, animals, and vehicles in profile view. When designing such icons, it is important to consider how much implied motion they convey. Visual implied motion may help communicate certain messages (e.g., “move quickly”), but it may also have the potentially undesirable effects of orienting spatial attention in the direction a figure is facing, or of priming motor responses in that direction. Implied motion lies at the center of a debate over the International Symbol for Accessibility (ISA), which depicts a person in a wheelchair sitting rigidly upright. In response to arguments that this icon fails to represent “the dynamic mobility of the chair user”, New York State regulations replaced it with the new Accessible Icon Project (AIP) icon, which depicts a forward-leaning chair user with a newly required “sense of movement” [19 NYCRR § 300.5]. To empirically test three senses in which the new AIP icon may evoke more implied motion than the old ISA, we ran three preregistered experiments. In Experiment 1, both disabled and non-disabled people were much more likely to describe the new icon than the old one in terms of motion. In Experiment 2, response times in a speeded keypress task revealed that the new icon is more strongly associated with motor responses in the direction it is facing. In Experiment 3, even when subjects performed an independent speeded subway sign discrimination task, the facing directions of task-irrelevant icons automatically oriented spatial attention: compared with the old icon, the new icon more strongly sped responses to targets in the direction it was facing. Together, these findings suggest that the Accessible Icon Project icon indeed evokes a stronger sense of motion, but that this may also have several unexpected consequences for how the symbol is automatically perceived, interpreted, and acted upon, in both everyday navigation and in emergency situations.
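As a concrete illustration of the Experiment 3 logic, the sketch below scores a spatial-cueing effect per icon from trial-level response times. This is a minimal analysis sketch, not the authors' code; the file and column names (exp3_trials.csv, subject, icon, icon_facing, target_side, rt_ms) are hypothetical.

```python
# Minimal sketch: estimating each icon's spatial-cueing effect from
# Experiment-3-style trial data. A larger congruent-vs-incongruent RT
# advantage indicates stronger orienting of attention in the icon's
# facing direction. All names below are assumptions for illustration.
import pandas as pd

trials = pd.read_csv("exp3_trials.csv")  # assumed columns: subject, icon,
                                         # icon_facing, target_side, rt_ms

# A trial is "congruent" when the target appears on the side the icon faces.
trials["congruent"] = trials["icon_facing"] == trials["target_side"]

# Mean RT per subject x icon x congruency, then the cueing effect
# (incongruent minus congruent) per subject and icon.
cell_means = (trials.groupby(["subject", "icon", "congruent"])["rt_ms"]
                    .mean()
                    .unstack("congruent"))
cueing_effect = cell_means[False] - cell_means[True]

# A larger mean effect for the new icon would mirror the reported
# stronger attentional orienting.
print(cueing_effect.groupby("icon").agg(["mean", "sem"]))
```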
Talk 2, 1:30 pm
Behind the Bar: Stars, Means, and the Multiple Faces of the Bar-Tip Limit Error
Sarah Kerns1,2, Jeremy Wilmer2; 1Dartmouth College, 2Wellesley College
Bar graphs of means are widely used in popular and scientific contexts to convey quantitative information, yet they are often misinterpreted, posing a major challenge to accurate data communication. Our prior work introduced the Draw-Data task, in which participants created interpretive “readouts” by viewing, sketching, and annotating four bar graphs of means, adding 20 data points per bar that could have produced the depicted average.¹ Using this task, we identified a common misinterpretation—the Bar-Tip Limit error (BTLE)—in which values are assumed to fall within the bar rather than around its tip. The present study uses a modified Draw-Data task in which participants explicitly indicated where they believed each mean was located by adding one star per mean. This approach both replicates the BTLE and probes the cognitive mechanisms underlying conceptions of the mean. After exclusions, 186 completed four-graph readouts yielded 744 drawings for analysis. The BTLE occurred in 220 drawings (29.6%), and star placement revealed three categorically distinct response types: graph-prioritizing, data-prioritizing, and conceptual confusion. Of the BTLE drawings, 84 (38.2%) prioritized the graph by placing stars at the bar tip, reflecting superficial knowledge of graphing conventions but misunderstanding of how means relate to data. In contrast, 80 drawings (36.3%) prioritized the data by placing stars near the mean of the drawn values, indicating accurate understanding of the mean but confusion between bar graphs of means and bar graphs of counts. The remaining 56 drawings (25.5%) placed stars elsewhere, reflecting more fundamental misunderstandings of graphs, means, and data. Together, these findings show that the BTLE does not stem from a single misconception but from multiple, distinct failures to integrate graphing conventions with statistical meaning, underscoring the need for instructional and publication practices that better support this integration.
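The three response types lend themselves to a simple scoring rule. The sketch below is one plausible classifier, assuming each drawing supplies the 20 drawn values, the star's vertical position, and the depicted bar-tip value; the tolerance and the BTLE criterion are illustrative assumptions, not the authors' scoring procedure.

```python
# Minimal sketch of the star-placement classification described above.
# Thresholds and the BTLE criterion are illustrative assumptions.
import numpy as np

def classify_drawing(drawn, star_y, bar_tip, tol=0.05):
    """drawn: the 20 added data values (y units); star_y: the
    participant's star location; bar_tip: the depicted mean;
    tol: tolerance as a fraction of the bar-tip value."""
    drawn = np.asarray(drawn, dtype=float)
    eps = tol * abs(bar_tip)

    # Bar-Tip Limit error: values treated as bounded by the bar,
    # i.e. essentially all drawn values fall at or below the tip.
    if not np.all(drawn <= bar_tip + eps):
        return "no_BTLE"

    if abs(star_y - bar_tip) <= eps:
        return "graph_prioritizing"      # star at the bar tip
    if abs(star_y - drawn.mean()) <= eps:
        return "data_prioritizing"       # star at the mean of drawn values
    return "conceptual_confusion"        # star somewhere else
```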
Talk 3, 1:45 pm
Adverse Effects of Individual Bias in Interpreting Correlations
Zainab Haseeb1,2, Keisuke Fukuda1,2; 1University of Toronto, 2University of Toronto, Mississauga
Scatterplots are widely used to communicate a relationship between two variables (e.g., more frequent breakfasts are associated with higher GPA), yet users often interpret the same visualization differently because of their pre-existing beliefs about the specific relationship being visualized. Do belief-based biases influence how visual information is sampled from scatterplots, and can we infer users’ biases from their sampling behaviour during data exploration? To test this, participants evaluated a statement describing a bivariate relationship (X: number of breakfasts eaten; Y: GPA), rated how strongly they believed the statement (belief strength), and predicted the expected correlation. Participants then explored a scatterplot representing this relationship through a small aperture, such that only a limited region of the plot was visible at the cursor location. Participants explored the plot for 10 seconds before reporting their perceived correlation. The mouse trajectory was recorded to examine information sampling behaviour. We found that perceived correlation was reliably biased towards individual belief, and the magnitude of this belief-based bias was reflected in individual differences in visual sampling behaviour in this aperture-based task. Specifically, the mouse trajectories of individuals who over-predicted the correlation traced higher cumulative correlations than those of individuals who under-predicted the relationship. However, individual differences in sampling behaviour were not sufficient to account for belief-based biases in perceived correlations, and prior training with feedback on accurately perceiving correlations in scatterplots was itself uncorrelated with the outcome of this study. Together, these findings suggest that visual sampling behaviour provides informative cues about participants’ pre-existing beliefs during data exploration, and that the influence of belief extends beyond how visual information is sampled, shaping interpretation even when objective evidence is held constant.
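One way to quantify what a participant actually saw in this aperture task: given the scatterplot points, the recorded mouse trajectory, and the aperture radius, compute the correlation over only the points that were ever revealed. The sketch below is a minimal illustration under assumed data shapes, not the authors' analysis pipeline.

```python
# Minimal sketch: correlation of the scatterplot points a participant
# revealed through the aperture. Data shapes and the exposure rule
# (a point counts as "seen" if it ever fell inside the aperture)
# are assumptions for illustration.
import numpy as np

def sampled_correlation(points_xy, mouse_xy, aperture_radius):
    """points_xy: (N, 2) scatterplot points; mouse_xy: (T, 2) recorded
    cursor positions; returns Pearson r over the revealed points."""
    seen = np.zeros(len(points_xy), dtype=bool)
    for cx, cy in mouse_xy:
        dist = np.hypot(points_xy[:, 0] - cx, points_xy[:, 1] - cy)
        seen |= dist <= aperture_radius
    if seen.sum() < 3:
        return np.nan  # too few revealed points for a stable estimate
    x, y = points_xy[seen, 0], points_xy[seen, 1]
    return np.corrcoef(x, y)[0, 1]
```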
Acknowledgements: This work was supported by NSERC-RGPIN-2024-05727
Talk 4, 2:00 pm
Graph miscommunication: How averages distort understanding, and individual values clarify it
Jeremy Wilmer1, Sarah Kerns1,2, Yang Wang3, Tugral Bek Awrang Zeb4, Ken Nakayama5; 1Wellesley College, 2Dartmouth College, 3UC San Diego, 4UC Irvine, 5UC Berkeley
Graphs of averages are widely used to communicate quantitative evidence, yet they may miscommunicate the structure and meaning of continuous outcome data. We identify a family of highly replicable graph miscommunication phenomena—systematic cases in which standard visual summaries afford interpretations incompatible with the underlying data. These effects reflect multiple, distinct ways in which collapsing distributions into a single summary statistic distorts inference. To measure how viewers interpret graphs, we use methods that probe internal representations rather than surface comprehension. In drawing-based reconstruction tasks, participants sketch the individual data values they believe could plausibly underlie a displayed graph of averages, revealing inferred distributions, overlap, and range. In complementary dragging-to-mean tasks, participants estimate average location by positioning a marker on graphs that either display summary statistics or show individual data points. Together, these methods provide sensitive measures of inferred variability and central tendency. Using these approaches, we document several robust misinterpretations of graphs of averages. These include the Bar-Tip Limit (BTL) Error, in which viewers treat the tip of a mean bar as an upper bound on individual values; the Uniformity Fallacy, in which viewers assume values are evenly spread across the range; the Gap Fallacy, in which overlapping group distributions are inferred to be categorically separated; Average-Blindness, whereby viewers mislocate explicitly plotted averages even when they are visually marked; and Average-Induced Polarization, whereby salient means dichotomize evaluative judgments. All effects are large, consistent across tasks, and replicate across independent samples. These failures arise specifically when continuous outcomes are represented by summary statistics alone. In contrast, graphs that display individual values support accurate understanding, inference, and decision-making. Together, these findings show that graphs of averages can actively distort understanding, whereas graphs of individual values support more accurate interpretation and judgment. Visualization choices thus directly determine how continuous evidence is understood.
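As a worked example of scoring one of these effects, the sketch below quantifies the BTL Error from a drawing-based reconstruction. The scoring rule is a simplified assumption, not the authors' published criterion: if the bar tip is read correctly as a mean, roughly half of the drawn values should exceed it, whereas values capped at the tip signal the error.

```python
# Minimal sketch (hypothetical data layout): scoring drawn
# reconstructions for the Bar-Tip Limit (BTL) Error.
import numpy as np

def btl_score(drawn_values, bar_tip):
    """Fraction of a drawing's values that exceed the bar tip.
    ~0.5 suggests a correct 'mean' reading; ~0.0 suggests the
    bar tip was treated as an upper bound (BTL Error)."""
    drawn = np.asarray(drawn_values, dtype=float)
    return np.mean(drawn > bar_tip)

# Toy illustration with a depicted mean of 10:
correct_reading = [6, 8, 9, 10, 11, 12, 14]   # scatters around the tip
btl_reading = [2, 4, 5, 7, 8, 9, 10]          # capped at the tip
print(btl_score(correct_reading, 10))  # ~0.43
print(btl_score(btl_reading, 10))      # 0.0
```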
Acknowledgements: This work was supported in part by the National Science Foundation (Grant No. 1837731) and by the Brachman Hoffman Fund.
Talk 5, 2:15 pm
VisualEyes: A Browser-Based Tool for Annotating and Visualizing Eye-Tracking alongside Multimodal Data Streams
Alexander Nguyen1, Shreshth Saxena1, Lauren Fink1; 1McMaster University
Eye movements provide a direct window into visual behaviour and have been studied for decades using a range of eye-tracking technologies. However, despite widespread adoption of eye-tracking across vision science and related fields, there remains little consensus on standardized approaches to analyzing and visualizing eye-tracking data. Many researchers rely on paid, proprietary platforms from eye-tracker manufacturers (e.g., EyeLink Data Viewer; Pupil Cloud). Such reliance limits extensibility, interoperability, and integration with additional data streams. We present VisualEyes, a unified, open-source visualization platform designed to maximize participant privacy and end-user control. The platform supports data from heterogeneous eye-tracking systems, including mobile, desktop, and webcam-based trackers, and accommodates varying spatiotemporal resolutions. It can synchronize auxiliary data streams, including audiovisual stimuli, physiological signals, and behavioural measures, facilitating multimodal analyses of visual behaviour in context. Interactive timeline-based annotation allows researchers to align multiple participants’ eye data side by side with stimulus events of interest and to add an arbitrary number of event or interval annotations in a shared temporal reference frame. Beyond static visualization, VisualEyes enables live, interactive manipulation of visualization parameters via frame-wise or timecode-based traversal. Such capabilities are essential for assessing data quality and consistency across all participants and measures in a dataset. Encoded events and rendered visualizations can be exported to interoperable file (container) formats, such as .csv and .mp4, respectively. VisualEyes is designed to support collaborative workflows and flexible deployment (via containerization). It preserves participant privacy by enabling self-hosting and allows multiple users to interact with the same dataset concurrently. Alternatively, when ethical approvals or anonymization are in place, VisualEyes can be hosted publicly for demonstrations and sharing (e.g., alongside an academic publication). Ultimately, by replacing closed manufacturer workflows with an open, privacy-first, customizable alternative, VisualEyes aims to become a shared foundation for interoperable, multimodal visualization of eye-tracking data.
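The abstract does not specify VisualEyes' internal API, so the sketch below illustrates the kind of timeline alignment it describes using generic pandas operations; file and column names are assumptions. It tags each gaze sample with the most recent stimulus event, placing both streams in a shared temporal reference frame, and exports the merged stream to .csv.

```python
# Minimal sketch of shared-timeline alignment, using pandas rather than
# VisualEyes' own (unspecified) API. File and column names are assumed.
import pandas as pd

gaze = pd.read_csv("gaze.csv")      # assumed columns: t_ms, x, y, pupil
events = pd.read_csv("events.csv")  # assumed columns: t_ms, label

# merge_asof requires both streams sorted on the shared time key.
gaze = gaze.sort_values("t_ms")
events = events.sort_values("t_ms")

# Tag every gaze sample with the most recent stimulus event, putting
# both streams in a shared temporal reference frame.
merged = pd.merge_asof(gaze, events, on="t_ms", direction="backward")

# Export to an interoperable container format, as the abstract describes.
merged.to_csv("gaze_annotated.csv", index=False)
```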



