Talk Session, Wednesday, May 20, 5:00 – 6:00 pm

Color & Brightness Perception

Talk 1, 5:00 pm

Photographing Moonlight

Jeff Mulligan1; 1Freelance Vision Scientist

Modern digital cameras and their associated software are capable of producing sharp, detailed, colorful images under low illumination, such as moonlight. However, a photograph taken under the full moon can sometimes appear to have been shot under the noonday sun – not necessarily what the photographer intended. This presentation explores how basic vision science principles can be applied to transform such images so that they reproduce the perceptual experience of viewing a scene under scotopic or mesopic conditions. Scotopic vision is perhaps the simplest case, as the saturation of all colors can be set to zero. But to reflect the scotopic spectral sensitivity (V′(λ)), the lightnesses must also be altered in a color-dependent way. This is an ill-posed problem because the full spectral content of the scene is lost in the capture process, but multiple approximate solutions exist, ranging from a simple reweighting of the R, G, and B pixel components to a full spectral model. We also propose an extension to the standard opponent color coding model that incorporates rod signals and makes predictions concerning the saturations of different colors under mesopic conditions. The principles can be illustrated with simple examples, but empirical work may be required to determine the parameters needed to obtain an exact match to human perception.
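For concreteness, a minimal sketch of the simplest approach mentioned above (reweighting the R, G, and B pixel components, then removing all saturation) might look like the following. The weights are illustrative placeholders, not values calibrated to V′(λ) or to any particular camera, and this is not the author's implementation.

# Minimal sketch, assuming a linear-light RGB image in [0, 1].
# The weights below are illustrative; a real implementation would fit them
# to approximate scotopic luminance given the camera's spectral sensitivities.
import numpy as np

def scotopic_render(rgb_linear, weights=(0.1, 0.5, 0.4)):
    """rgb_linear: float array of shape (H, W, 3) in linear light."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # keep overall brightness roughly unchanged
    lum = rgb_linear @ w                         # blue/green-weighted luminance proxy, shape (H, W)
    return np.repeat(lum[..., None], 3, axis=2)  # zero saturation: a gray rendering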

Talk 2, 5:15 pm

AR autobrightness as a vision science problem

Thomas Yerxa1, James Hillis1, Lili Zhang1, Takahiro Doi1, Xueyan Niu1,2, Charlie Burlingham1, Jasmine Jan1, Romain Bachy1, Kavitha Ratnam1; 1Meta Reality Labs, 2New York University, Center for Neural Science

In traditional mobile displays, such as smartphones, autobrightness maintains image visibility by adapting the display according to the illuminance measured by an ambient light sensor (ALS). While performance varies, this simple global-illuminance-based strategy is evidently sufficient for autobrightness to have become ubiquitous across opaque mobile displays. Unlike traditional displays, see-through displays are additive, meaning light from a displayed image is added to a highly variable and evolving light field. Displaying an image that is consistently visible, let alone perceptually stable, under these conditions is a challenge. A priori, it seems that a gross measure of illuminance would not provide sufficient information to adapt the display optimally. Autobrightness in see-through augmented reality (AR) display glasses therefore offers a naturalistic challenge for testing fundamental theory in spatial vision and object perception. For example, optimizing such systems tests how structured, three-dimensional backgrounds behind AR content give rise to masking of visual features spanning multiple levels of abstraction. It also tests how relative motion, binocular fusion, and distance-dependent ocular defocus can help break that masking to enable clear perception of the AR object. Optimizing display control for perceptual quality requires knowledge of the joint distribution of environments and content experienced by users, as well as their preferences and behaviors. We now have large-scale egocentric video datasets (Niu et al. 2025), but still lack data characterizing preferences. However, a large and growing set of scientifically grounded models of visual encoding (notably, foundation models/neural networks) is increasingly capable of simulating human perception, potentially allowing us to overcome this difficulty. There is therefore a need for experiments that rank these models according to their ability to predict human preferences. We will review early findings, including the performance of simple global-illuminance-based models in smart glasses and estimates of the value of the information encoded by multiple layers of foundation models for display control.
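As a point of reference, the global-illuminance baseline discussed above can be sketched as a clamped log-linear mapping from the ALS reading to display luminance. The constants and the function name autobrightness_nits are illustrative assumptions for this sketch, not any actual product tuning.

# Minimal sketch of a global-illuminance autobrightness rule:
# interpolate display luminance in log-lux so equal ratios of ambient light
# give equal brightness steps, clamped to the display's operating range.
import math

def autobrightness_nits(als_lux, min_nits=5.0, max_nits=1000.0,
                        lux_lo=1.0, lux_hi=10_000.0):
    """Map an ALS illuminance reading (lux) to a target display luminance (nits)."""
    lux = min(max(als_lux, lux_lo), lux_hi)
    t = (math.log10(lux) - math.log10(lux_lo)) / (math.log10(lux_hi) - math.log10(lux_lo))
    return min_nits + t * (max_nits - min_nits)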

Acknowledgements: This work was funded by Meta Reality Labs

Talk 3, 5:30 pm

Comprehensive Color-Discrimination Threshold Measurements Provide a Benchmark for CIE Color-Difference Formulas

Fangfang Hong1, Jason Chow2, Phillip Guan2, Alex H. Williams3, David H. Brainard1; 1Department of Psychology, University of Pennsylvania, 2Reality Labs Research, Meta, 3Center for Neural Science, New York University

The geometry of human color space underlies practical applications, including color-difference tolerances in manufacturing, device-independent color specification, and image compression. The CIELAB color-difference formulas (ΔE76, ΔE94, ΔE00) were developed as perceptual color-distance measures to support these applications. Although these formulas were designed for supra-threshold color differences, threshold measurements provide a useful limiting-case benchmark. Obtaining such measurements has been difficult because a dense characterization of discrimination thresholds throughout color space would require millions of trials per observer using classical psychophysical methods. To overcome this challenge, we combined adaptive trial placement with a semi-parametric Wishart Process Psychophysical Model (WPPM), enabling us to measure the full three-dimensional psychophysical color discrimination field for a single observer using only ~30,000 trials. On each trial, the observer viewed three blobby stimuli. The color of one stimulus differed from the other two, and the observer indicated the odd one out. When fit to these data, the WPPM characterizes discrimination thresholds in all directions around any reference stimulus. The measured thresholds can be transformed and expressed in any color space. Using this approach, we computed thresholds in ΔE00 units along the L*, a*, and b* directions for 75 reference stimuli distributed across our display’s RGB gamut. If ΔE00 accurately described perceptual distances, thresholds expressed in ΔE00 would be constant. They were not. Thresholds along L* expressed in ΔE00 varied by 91% of their mean (range 1.18 – 2.82). Along a*, the corresponding variation was 86% (1.02 – 2.38), and along b*, 71% (1.22 – 2.42). Similar performance was observed for ΔE94, and worse performance for ΔE76. The data provide guidance on how color-difference formulas can be improved. For example, luminance thresholds are not independent of reference chromaticity, as assumed in the structure of the CIE formulas.
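For orientation, the simplest of the formulas compared above, ΔE76, is just Euclidean distance in CIELAB; the ΔE00 values reported here come from the much more elaborate CIEDE2000 formula, which adds lightness, chroma, and hue weighting terms (implementations are available in color libraries such as colour-science). A minimal sketch of the simple case:

# Minimal sketch: CIE76 color difference is Euclidean distance in CIELAB.
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Example: a small step along L* only.
print(delta_e_76((50.0, 10.0, 10.0), (51.5, 10.0, 10.0)))   # 1.5

If ΔE00 were a faithful perceptual metric, a threshold-sized step like the one above would map to the same ΔE00 value at every reference color; the measurements described in the abstract show that it does not.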

Acknowledgements: Meta

Talk 4, 5:45 pm

Physiologically Relevant Color Matching Functions for Display Color Consistency

Yanjun Li1, Yuteng Zhu1, Osborn de Lima1, Francisco Imai1, Teun Baar1, Adria Fores Herranz1, Ray Ptucha1, Will Wu1, Shahram Peyvandi1; 1Apple Inc.

The CIE1931 2° standard observer Color Matching Functions (CMFs) have been widely used in color reproduction. However, spectrally different stimuli with identical CIE1931 tristimulus values produce appearance mismatches for most observers due to well-known flaws in the CIE1931 CMFs, particularly at shorter wavelengths. To achieve accurate color reproduction, we developed physiologically driven CMFs optimized for the average observer at a given age. Over 200 observers, aged 23 to 66 years (mean = 37), performed a color matching task, adjusting the color of a 3° square test stimulus to match a spectrally different reference stimulus. Observers matched seven different colors across multiple displays, forming over 1500 metameric pairs. Matched pairs showed significant differences in their CIE1931 tristimulus values. To derive cone fundamentals that best predict the matching data, we minimized distances between matched pairs in cone excitation space by optimizing macular, lens, and cone photopigment optical densities, as well as the peak wavelengths of the cone photopigment absorbance spectra, based on the CIE 170-1:2006 and Stockman & Rider (2023) models. The CIE2006 model, although it outperformed the CIE1931 CMFs, showed systematic differences between predictions and data for narrow-spectrum blue stimuli with different peak wavelengths. The data revealed that the S-cone response, relative to the L- and M-cone responses, is underestimated in the ~460-490nm range. For a given set of cone absorbance spectra, pre-retinal filters with higher relative density in the ~460-490nm range, e.g., the templates of Vos (1972) and van Norren & Vos (1974), better predicted our data across all stimuli. Additionally, our model suggests a slightly lower macular density for S-cones than for L- and M-cones, likely due to the coarser S-cone distribution in the fovea. The optimized parameters showed a higher peak optical density for M-cones than for L-cones, consistent with direct measurements, and the lowest for S-cones, consistent with their shorter outer segments. The new CMFs derived from this upgraded model significantly reduced appearance mismatches, achieving higher consistency across displays.
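A minimal sketch of the fitting strategy described above (not the authors' actual pipeline): cone excitations are inner products of stimulus spectra with candidate cone fundamentals, and an optimizer searches the physiological parameters so that matched (metameric) pairs land as close together as possible in cone-excitation space. The helper build_fundamentals is hypothetical; it stands in for a model that constructs L, M, S sensitivities from lens and macular densities, photopigment optical densities, and peak wavelengths (e.g., per CIE 170-1:2006 or Stockman & Rider, 2023).

# Minimal sketch of fitting cone fundamentals to color-matching data.
import numpy as np
from scipy.optimize import minimize

def cone_excitations(spectrum, fundamentals):
    """spectrum: (n_wavelengths,); fundamentals: (n_wavelengths, 3) -> LMS triplet."""
    return spectrum @ fundamentals

def matching_loss(params, matched_pairs, build_fundamentals):
    """Sum of relative LMS mismatches over all matched (test, reference) spectra."""
    fund = build_fundamentals(params)            # hypothetical physiological model
    err = 0.0
    for spd_test, spd_ref in matched_pairs:
        lms_t = cone_excitations(spd_test, fund)
        lms_r = cone_excitations(spd_ref, fund)
        err += np.sum(((lms_t - lms_r) / (lms_r + 1e-9)) ** 2)
    return err

# result = minimize(matching_loss, x0=params_init,
#                   args=(matched_pairs, build_fundamentals), method='Nelder-Mead')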

Thank You to Our AVS 2026 Sponsors

Apple
Vision: Science to Applications
Centre for Vision Research