==Introduction==
Before the advent of digital image sensors, photography, for most of its history, used film to record light information. At the heart of every photographic film are a large number of light-sensitive grains of [[silver-halide]] crystals.<ref name="filmphotography">T. H. James, The Theory of The Photographic Process, 4th ed., New York: Macmillan Publishing Co., Inc., 1977.</ref> During exposure, each micron-sized grain has a binary fate: either it is struck by some incident photons and becomes "exposed", or it is missed by the photon bombardment and remains "unexposed". In the subsequent film development process, exposed grains, due to their altered chemical properties, are converted to silver metal, contributing to opaque spots on the film; unexposed grains are washed away in a chemical bath, leaving behind transparent regions on the film.
The oversampled binary image sensor is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure is at least equal to a given threshold ''q''. One way to build such binary sensors is to modify standard memory chip technology, where each memory bit cell is designed to be sensitive to visible light.<ref name="RAMsensor">S. A. Ciarcia, A 64K-bit dynamic RAM chip is the visual sensor in this digital image camera, ''Byte Magazine'', pp. 21–31, Sep. 1983.</ref> With current CMOS technology, the level of integration of such systems can exceed 10<sup>9</sup>–10<sup>10</sup> (i.e., 1 giga to 10 giga) pixels per chip. In this case, the corresponding pixel sizes (around 50 nm<ref name="DRAM">Y. K. Park, S. H. Lee, J. W. Lee et al., Fully integrated 56nm DRAM technology for 1Gb DRAM, in ''IEEE Symposium on VLSI Technology'', Kyoto, Japan, Jun. 2007.</ref>) are far below the diffraction limit of light, and thus the image sensor is ''[[oversampling]]'' the optical resolution of the light field. Intuitively, one can exploit this spatial redundancy to compensate for the information loss due to one-bit quantization, as is classic in oversampling [[delta-sigma]] conversions.<ref name="ODSC">J. C. Candy and G. C. Temes, Oversampling Delta-Sigma Data Converters: Theory, Design and Simulation. New York, NY: IEEE Press, 1992.</ref>
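The thresholding and oversampling idea above can be illustrated with a minimal simulation. The sketch below is not from the article: it assumes photon arrivals are Poisson-distributed, picks an illustrative intensity <code>lam</code>, a pixel count <code>n</code>, and a threshold <code>q = 1</code>, and shows how the fraction of "exposed" (one-valued) pixels recovers the underlying intensity despite each pixel giving only one bit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the article):
# lam - true mean photon count per binary pixel during the exposure
# n   - number of binary pixels oversampling one region of constant intensity
# q   - photon threshold: a pixel reads 1 if it receives >= q photons
lam, n, q = 0.8, 4096, 1

# Each pixel's photon count is modeled as Poisson(lam); its one-bit
# reading is 1 exactly when the count reaches the threshold q.
photons = rng.poisson(lam, size=n)
bits = (photons >= q).astype(int)

# For q = 1, P(bit = 1) = 1 - exp(-lam), so inverting the observed
# fraction of exposed pixels gives a maximum-likelihood intensity estimate.
frac = bits.mean()
lam_hat = -np.log(1.0 - frac)
print(lam_hat)  # close to the true lam when n is large
```

With many one-bit pixels per resolvable spot of the light field, the estimate concentrates around the true intensity, which is the spatial analogue of trading amplitude resolution for sampling rate in delta-sigma conversion.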