Poster Session C, Wednesday, May 20, 4:15 – 5:00 pm
Board 24
Reconstruction-based color-enhancement algorithm for dichromats
Callista Dyer1, Eero Simoncelli2,3, David Brainard1; 1Department of Psychology, University of Pennsylvania, 2Center for Neural Science, New York University, 3Center for Computational Neuroscience, Flatiron Institute, Simons Foundation
Visual encoding limits spatial, temporal, and spectral resolution, raising the possibility that images can be enhanced by transferring information from outside an observer's 'window of visibility' into the visible range. Here we consider the special case of improving images for viewing by dichromats. Our approach balances increasing the information available to the dichromat against the degree to which the original image is distorted; limiting distortion serves as a proxy for ensuring that the transformed image remains interpretable. We use numerical methods to choose a transformation that optimizes a weighted tradeoff between information and distortion, and specific algorithms emerge through the choice of metrics that separately quantify the two. We quantify distortion as the CIELAB ∆E00 difference between the original and transformed images. For information, we ask how well the original image can be reconstructed from the projection of the transformed image onto the dichromat's color space. In this initial work, we reconstruct using linear regression applied to cone contrasts and quantify information as the negative CIELAB ∆E00 difference between the original and reconstructed images; for comparison, we also quantify information as the negative squared cone-contrast error. In both cases, we obtain tradeoff curves showing how distortion increases with increasing information. We compare the two information metrics by visually examining transformed images with equal distortion, rendered as they would appear to dichromats (Brettel et al., 1997). The metrics produce qualitatively different results, with the ∆E00-based metric generally appearing more effective to us. Our approach can be generalized to more sophisticated image-reconstruction methods, other types of visual information loss, and other image-distortion metrics.
An important challenge, however, is to develop evaluation methods beyond visual inspection for comparing transformation algorithms.
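The weighted information/distortion tradeoff described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it uses toy cone-contrast data, a placeholder dichromat projection matrix, and the abstract's squared cone-contrast error in place of CIELAB ∆E00 (a perceptual color-difference metric would be substituted for the ∆E00 variant). All variable names and matrices here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "image": N pixels x 3 cone contrasts (L, M, S). Placeholder data only.
N = 500
original = rng.uniform(-1.0, 1.0, size=(N, 3))

# Placeholder dichromat projection: drop one cone class (e.g., the L cone).
P = np.diag([0.0, 1.0, 1.0])

def reconstruct(projected, target):
    """Linear-regression reconstruction of the target from the projection."""
    W, *_ = np.linalg.lstsq(projected, target, rcond=None)
    return projected @ W

def objective(transform_flat, lam):
    """Distortion minus lam * information (information is negative error,
    so higher information lowers the objective). Both terms use squared
    cone-contrast error here as a stand-in for Delta-E00."""
    T = transform_flat.reshape(3, 3)
    transformed = original @ T
    distortion = np.mean((transformed - original) ** 2)
    recon = reconstruct(transformed @ P, original)
    information = -np.mean((recon - original) ** 2)
    return distortion - lam * information

# Numerically optimize a linear transform, starting from the identity.
res = minimize(objective, np.eye(3).ravel(), args=(1.0,), method="Nelder-Mead")
```

Sweeping the weight `lam` and recording the resulting (distortion, information) pairs would trace out a tradeoff curve of the kind the abstract describes.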
Acknowledgements: Supported by NIH T32EY007035.



