Optical Image Processing Using Light Modulation Displays
We propose to enhance the capabilities of the human visual system by performing optical image processing directly on an observed scene. Unlike previous work, which additively superimposes imagery on a scene or completely replaces scene imagery with a manipulated version, we perform all manipulation through the use of a light modulation display to spatially filter incoming light. We demonstrate a number of perceptually-motivated algorithms including contrast enhancement and reduction, object highlighting for preattentive emphasis, color saturation, de-saturation, and de-metamerization, as well as visual enhancement for the color blind. A camera observing the scene guides the algorithms for on-the-fly processing, enabling dynamic application scenarios such as monocular scopes, eyeglasses, and windshields.

The human visual system (HVS) is a remarkable optical device possessing tremendous resolving ability, dynamic range, and adaptivity. The HVS also performs an impressive amount of processing in early (preattentive) stages to identify salient features and objects. However, the HVS also has some properties that limit its performance under certain conditions. For example, veiling glare due to extremely high contrast can dangerously limit object detection in situations such as driving at night or driving into direct sunlight. On the other hand, conditions such as fog or haze can reduce contrast to a point that significantly limits visibility. The tristimulus nature of human color perception also limits our ability to resolve spectral distributions, so that quite different spectra may be perceived as the same color (metamers). Any form of color blindness exacerbates the problem. We propose to enhance the power of the human visual system by applying on-the-fly optical image processing using a spatial light modulation display.
To this end, we introduce the concept of see-through optical processing for image enhancement (SOPhIE) by means of a transparent display that modulates the color and intensity of a real-world observation. The modulation patterns are determined dynamically by processing a video stream from a camera observing the same scene. Our approach resembles and builds on work in computational photography and computer vision (see Section 2), but we target a human observer rather than a camera.

Figure 1: A conceptual illustration of our approach. A light modulation display locally filters a real-world scene to enhance the visual performance of a human observer, in this case by reducing the contrast of the sun and boosting the saturation of the traffic sign for a driver. Our approach would apply to see-through scenarios such as car windshields and eyeglasses as depicted, as well as to binoculars, visors, and similar devices.
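As a minimal illustration of how a camera-guided modulation pattern for glare reduction might be computed, the sketch below derives a per-pixel transmission mask from a normalized luminance frame: pixels brighter than a threshold are attenuated toward a minimum transmission, while the rest of the scene passes through unchanged. The function name, threshold, and linear ramp are our own assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def glare_attenuation_mask(luminance, threshold=0.8, min_transmission=0.2):
    """Compute a per-pixel transmission mask for a light modulation display.

    `luminance` is a camera frame with values normalized to [0, 1].
    Pixels brighter than `threshold` are attenuated, ramping linearly
    down to `min_transmission` at full brightness; dimmer pixels keep
    full transmission (1.0). All parameters here are illustrative.
    """
    lum = np.clip(luminance, 0.0, 1.0)
    # Fraction by which each pixel exceeds the glare threshold, in [0, 1].
    excess = np.clip((lum - threshold) / (1.0 - threshold), 0.0, 1.0)
    # Map excess brightness to reduced transmission.
    return 1.0 - excess * (1.0 - min_transmission)

# Example: a single bright spot (e.g. the sun) in an otherwise dim frame.
frame = np.full((4, 4), 0.3)
frame[1, 1] = 1.0
mask = glare_attenuation_mask(frame)
```

In a real system this mask would be warped from the camera's viewpoint to the observer's before being shown on the modulation display; that registration step is omitted here.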