
Talk Session, Wednesday, May 20, 11:00 am – 12:00 pm

XR II: Visual Assessment

Talk 1, 11:00 am

Efficient and Reliable Assessment of Contrast Sensitivity in Virtual Reality

Muhammad Abraiz Bin Azhar Khan1, Namrata Bangera1, Bas Rokers1; 1NYU Abu Dhabi

The clinical gold standard for measuring contrast sensitivity (CS) is the Pelli-Robson chart. While flexible and well-validated, it suffers from coarse step sizes (0.15 log units), vulnerability to practice effects, and moderate test-retest reliability. These limitations often mask functional deficits in early optic neuropathies. We developed a novel, gamified Virtual Reality (VR) assessment and evaluated efficiency, test-retest reliability, and structure-function associations with Optical Coherence Tomography (OCT)-based retinal measurements. We tested 94 participants (188 eyes) using both Pelli-Robson charts (containing 48 Sloan letters) and our VR-based system (presenting 40 Sloan letters). In VR, we implemented temporal dithering, enabling continuous contrast resolution finer than the fixed steps of the display. VR testing employed staircased CS sampling and psychometric fitting. Test-retest reliability was assessed in a subset of 30 participants who completed all sessions twice. We assessed ganglion cell complex (GCC) thickness using OCT (Optovue Solix) and explored relationships between CS and retinal structure using linear mixed-effects models. VR-based CS estimates showed superior test-retest reliability compared with chart-based assessment, despite requiring fewer participant responses. Bland-Altman 95% limits of agreement were narrower for VR (±0.17 logCS) than for chart (±0.24 logCS), indicating reduced measurement variability. Test-retest correlations were also stronger for VR (r = 0.80, p < 0.0001) than for chart (r = 0.49, p < 0.0001). Critically, VR-based CS was predicted by GCC thickness in the nasal macular region (1–3 mm annulus, N = 94, marginal R² = 0.1, pFDR = 0.001). The relationship between GCC and chart-based CS did not reach significance in any region. Adaptive VR-based assessment provides efficient and reliable estimates of contrast sensitivity. This increased precision reveals spatially specific structure-function relationships not detectable using standard charts. VR-based measures offer improved sensitivity for linking retinal integrity to visual function, enabling the early detection and accurate tracking of functional changes.
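
A minimal sketch of the temporal-dithering idea described in the abstract: alternate each frame between the two nearest contrast levels the display can actually render, so that the time-averaged contrast lands on a target finer than the hardware quantization. The frame count, contrast values, and function names below are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def dither_frames(target, displayable, n_frames=120):
        """Per-frame contrast sequence whose time average approximates `target`.

        target      -- desired contrast (hypothetical value below)
        displayable -- sorted 1-D array of contrasts the display can render
        n_frames    -- sequence length (e.g., one second at 120 Hz)
        """
        displayable = np.asarray(displayable)
        lo = displayable[displayable <= target].max()  # nearest level below
        hi = displayable[displayable >= target].min()  # nearest level above
        if hi == lo:                                   # target exactly renderable
            return np.full(n_frames, lo)
        p_hi = (target - lo) / (hi - lo)               # duty cycle of high level
        seq, acc = [], 0.0
        for _ in range(n_frames):                      # Bresenham-style accumulator
            acc += p_hi                                # spreads hi-frames evenly in
            if acc >= 1.0:                             # time rather than in blocks
                seq.append(hi)
                acc -= 1.0
            else:
                seq.append(lo)
        return np.array(seq)

    # Example: an 8-bit display quantized to steps of 1/255 (~0.0039).
    levels = np.arange(256) / 255.0
    seq = dither_frames(0.0137, levels)
    print(seq.mean())  # ~0.0137, i.e., resolution finer than one display step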

Acknowledgements: NYUAD Center for Brain and Health, funded by Tamkeen under NYU Abu Dhabi Research Institute grant CG012

Talk 2, 11:15 am

Using virtual reality to simulate central and peripheral vision loss during spatial navigation

Sam Beech1, Maggie McCracken1, Bobby Bodenheimer2, Jeanine Stefanucci1, Sarah Creem-Regehr1; 1University of Utah, 2Vanderbilt University

Vision provides essential spatial information for navigation; however, experimentally investigating how central and peripheral vision contribute to navigation remains challenging because simulated visual field loss must follow the participant's gaze. We therefore developed a virtual reality navigation task with eye tracking to simulate gaze-contingent visual field loss in healthy participants. This approach allowed us to examine how central and peripheral vision loss affect navigation accuracy and head movement behavior. Experiment 1 simulated central vision loss (20 participants; within-subjects: normal vision vs. 28° central occlusion). Experiment 2 simulated peripheral vision loss (preliminary data for 16 participants; between-subjects: normal vision, 30°, or 20° tunnel vision). Participants walked to three ground-level markers with landmarks displayed in the background. The markers were then removed, and participants returned to the estimated location of the first marker under three cue conditions: Full Cues, Self-Motion Only (landmarks removed), and Vision Only (directional self-motion cues disrupted by rotation). Spatial error and head movements were recorded. Spatial error was greater with central occlusion (78.5 cm) than with normal vision (66.9 cm; p = .037, d = .240). In contrast, peripheral field reduction did not affect spatial error (normal = 76 cm, 30° = 80 cm, 20° = 73 cm; p = .701). Across both experiments, spatial error increased in single-cue conditions relative to Full Cues (p ≤ .032, d ≥ .369). Central occlusion also produced greater yaw-plane head movements (p = .010, d = .120). Similarly, participants with 30° tunnel vision showed increased yaw-plane movements compared with normal and 20° vision (p ≤ .028, d ≥ .368). Central, but not peripheral, vision loss impaired navigation accuracy. Navigation accuracy also declined as sensory cues were removed. Simulated vision loss increased head movements, except with 20° vision, suggesting a compensatory visual search response that is potentially inhibited by severe peripheral vision loss.
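
The core of the gaze-contingent manipulation can be sketched as an eccentricity test against the current gaze direction. In a real engine this would run per fragment in a shader; the NumPy formulation, function name, and parameters below are illustrative, not the authors' implementation.

    import numpy as np

    def field_loss_mask(gaze_dir, view_dirs, mode="central", diameter_deg=28.0):
        """Boolean mask of view directions to occlude for simulated field loss.

        gaze_dir  -- unit 3-vector of current gaze direction (from eye tracker)
        view_dirs -- (N, 3) array of unit view vectors, one per pixel
        mode      -- "central" occludes inside the cone (simulated scotoma);
                     "peripheral" occludes outside it (simulated tunnel vision)
        """
        cosines = view_dirs @ gaze_dir
        ecc = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))  # eccentricity
        inside = ecc <= diameter_deg / 2.0
        return inside if mode == "central" else ~inside

    # Recomputed every frame from the latest gaze sample so the occlusion
    # follows eye movements, e.g.:
    #   mask = field_loss_mask(gaze, dirs, "central", 28.0)     # Experiment 1
    #   mask = field_loss_mask(gaze, dirs, "peripheral", 20.0)  # Experiment 2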

Talk 3, 11:30 am

Virtual reality trajectory analysis of street-crossing navigation in people with ultra-low vision

Dinesh Venugopal1, Batuhan Erkat1, Roksana Sadeghi2,3, Chau Tran4, Will Gee4, Brittnee Livingston5, Gislin Dagnelie2, Arathy Kartha1; 1SUNY College of Optometry, Biological and Vision Sciences, NY, United States, 2Ophthalmology, Johns Hopkins University, Baltimore, MD, United States, 3Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States, 4BaltiVirtual, MD, United States, 5Central Association for the Blind and Visually Impaired, Utica, NY, United States

Virtual reality (VR) headsets provide framewise head-position data that are valuable for characterizing spatial navigation trajectories. However, the extent to which these data can effectively capture navigation behavior remains unclear. In this study, an immersive VR task was used to quantify street-crossing navigation behavior in participants with ultra-low vision (ULV) (n=14) and normal vision (NV) (n=9). Participants completed street-crossing tasks in a virtual street environment across four sequentially presented complexity levels: empty street (A), parked cars with pedestrians (B), yielding moving cars (C), and yielding moving cars with pedestrians (D). At each level, participants stepped off the curb onto the street and walked across to reach the opposite curb (~4.6 m away) at a comfortable speed. The NV participants also completed tasks with ULV-simulating filters (sULV; using Bangerter foils). Head-position data were continuously tracked; turns were segmented by simplifying each trajectory with the Ramer–Douglas–Peucker algorithm (ε = 10 cm), and trajectories were compared to an optimal path to compute trajectory-based metrics, including path efficiency (%), turn frequency, and mean path curvature (1/m; an index of veering behavior). Motion onset latency (s), walking speed (m/s), and trajectory-based metrics were compared across participant groups and complexity levels using Generalized Estimating Equations. Results indicate that increasing complexity significantly altered street-crossing behavior in the ULV group. Specifically, the highest complexity level (D) showed increased onset latency, reduced walking speed, greater turn frequency, decreased path efficiency, and increased veering compared with levels A and/or B (all p < 0.05); however, no significant differences were observed between levels C and D. A similar pattern was observed in the sULV group. In NV, no significant differences were observed across complexity levels for any metric except onset latency. Overall, this trajectory-based approach provides a rigorous VR framework for evaluating navigation behavior in visually complex environments for individuals with ultra-low vision and for assessing VR-based navigation performance.
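
A sketch of the trajectory analysis under the stated ε = 10 cm: simplify the 2-D head-position path with Ramer–Douglas–Peucker, then derive path efficiency and a turn count from the simplified polyline. The metric definitions are plausible reconstructions for illustration, not the authors' exact code.

    import numpy as np

    def rdp(points, eps=0.10):
        """Ramer–Douglas–Peucker simplification of a 2-D polyline (eps in m)."""
        points = np.asarray(points, dtype=float)
        if len(points) < 3:
            return points
        start, end = points[0], points[-1]
        chord = end - start
        norm = np.linalg.norm(chord)
        offsets = points - start
        if norm == 0.0:
            dists = np.linalg.norm(offsets, axis=1)
        else:
            # Perpendicular distance of each point from the start-end chord.
            dists = np.abs(chord[0] * offsets[:, 1]
                           - chord[1] * offsets[:, 0]) / norm
        i = int(np.argmax(dists))
        if dists[i] > eps:  # keep the farthest point, recurse on both halves
            return np.vstack([rdp(points[: i + 1], eps)[:-1],
                              rdp(points[i:], eps)])
        return np.vstack([start, end])

    def trajectory_metrics(head_xy, eps=0.10):
        """Path efficiency (%) and turn count from framewise head positions."""
        head_xy = np.asarray(head_xy, dtype=float)
        steps = np.diff(head_xy, axis=0)
        walked = np.linalg.norm(steps, axis=1).sum()         # actual path length
        straight = np.linalg.norm(head_xy[-1] - head_xy[0])  # optimal (straight)
        simplified = rdp(head_xy, eps)
        n_turns = max(len(simplified) - 2, 0)  # interior vertices = turns
        return 100.0 * straight / walked, n_turns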

Talk 4, 11:45 am

A wearable module to support high temporal and spectral measurement of visual experience

Zachary J Kelly1, Samantha A Montoya1, Vincent Lau1, Alan A Stocker1, Geoffrey K Aguirre1; 1University of Pennsylvania

Characterizing naturalistic, gaze-centered visual input is important not only for the efficient design of AR/VR devices, but also for studying the human visual system broadly. While the PupilLabs Neon spectacles provide a turn-key solution for simultaneous gaze tracking and recording of egocentric video, they are limited in temporal resolution and do not provide a measure of environmental illuminance. We have designed a camera module to complement the Neon system. Our module captures wide-field, high temporal resolution video and a continuous measure of absolute environmental illuminance. Our custom world camera and minispectrometer module clips onto the nose bridge of the PupilLabs Neon glasses and is controlled by a battery-powered Raspberry Pi 5 enclosed in a small, wearable pack. The world camera is an IMX219 sensor operating at high speed (120 FPS) with a wide FOV (137° horizontal, 102° vertical). The light sensor is an AMS AS7341, an 11-channel sensor operating at ~1 Hz and measuring over a 5 log unit illuminance range. The Raspberry Pi runs custom Python/C++ software supporting Bluetooth control of the sensors. Both the Neon glasses and our module are controlled by Android phone apps, one of which we developed for our Raspberry Pi firmware. The system supports 10+ hours of continuous data acquisition. Together, the Neon glasses and our camera module provide a system that can be worn during naturalistic tasks and records high temporal/spatial resolution egocentric video, along with simultaneous eye tracking and spectral sampling. Existing software allows us to align the video from our world camera module with the built-in world camera on the Neon device. Currently, we are combining the high temporal and spatial resolution video recordings with the measurements of illuminance and eye position, with the goal of estimating the foveated spatio-temporal-spectral input to the eye during different daily activities.
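
As an illustration of the ~1 Hz spectral sampling loop described above, the sketch below polls an AS7341 over I²C on a Raspberry Pi and logs timestamped channel counts to CSV. It assumes the off-the-shelf Adafruit CircuitPython driver (adafruit_as7341) and the Blinka stack; the authors describe custom Python/C++ firmware with Bluetooth control, so this is a stand-in, not their code.

    import csv
    import time

    import board
    from adafruit_as7341 import AS7341  # assumed: Adafruit CircuitPython driver

    sensor = AS7341(board.I2C())  # AS7341 on the Pi's default I2C bus

    with open("illuminance_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_unix", "415nm", "445nm", "480nm", "515nm",
                         "555nm", "590nm", "630nm", "680nm"])
        while True:
            # Read the eight spectral channels exposed by the driver.
            writer.writerow([
                time.time(),
                sensor.channel_415nm, sensor.channel_445nm,
                sensor.channel_480nm, sensor.channel_515nm,
                sensor.channel_555nm, sensor.channel_590nm,
                sensor.channel_630nm, sensor.channel_680nm,
            ])
            f.flush()        # keep the log current during long recordings
            time.sleep(1.0)  # ~1 Hz, matching the module's sampling rate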

Acknowledgements: This work was supported by NIH grant R01EY036255

Thank You to Our AVS 2026 Sponsors

Apple
Vision: Science to Applications
Centre for Vision Research