Before the advent of digital image sensors, photography, for most of its history, used film to record light information. At the heart of every photographic film is a large number of light-sensitive grains of [[silver-halide]] crystals.<ref name="filmphotography">T. H. James, The Theory of The Photographic Process, 4th ed., New York: Macmillan Publishing Co., Inc., 1977.</ref> During exposure, each micron-sized grain has a binary fate: either it is struck by some incident photons and becomes "exposed", or it is missed by the photon bombardment and remains "unexposed". In the subsequent film development process, exposed grains, owing to their altered chemical properties, are converted to silver metal, contributing to opaque spots on the film; unexposed grains are washed away in a chemical bath, leaving behind the transparent regions of the film. Thus, in essence, photographic film is a binary imaging medium, using local densities of opaque silver grains to encode the original light intensity information. Thanks to the small size and large number of these grains, one hardly notices this quantized nature of film when viewing it at a distance, observing only a continuous gray tone.
The oversampled binary image sensor is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure is at least equal to a given threshold ''q''. One way to build such binary sensors is to modify standard memory chip technology, where each memory bit cell is designed to be sensitive to visible light.<ref name="RAMsensor">S. A. Ciarcia, A 64K-bit dynamic RAM chip is the visual sensor in this digital image camera, ''Byte Magazine'', pp.21-31, Sep. 1983.</ref> With current CMOS technology, the level of integration of such systems can exceed 10<sup>9</sup>–10<sup>10</sup> (i.e., 1 giga to 10 giga) pixels per chip. In this case, the corresponding pixel sizes (around 50 nm<ref name="DRAM">Y. K. Park, S. H. Lee, J. W. Lee et al., Fully integrated 56nm DRAM technology for 1Gb DRAM, in ''IEEE Symposium on VLSI Technology'', Kyoto, Japan, Jun. 2007.</ref>) are far below the diffraction limit of light, and thus the image sensor is ''[[oversampling]]'' the optical resolution of the light field. Intuitively, one can exploit this spatial redundancy to compensate for the information loss due to one-bit quantization, as is classic in oversampling [[delta-sigma conversion]].
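The binary response and its inversion can be illustrated with a small simulation. The sketch below (an illustrative model, not from the article; all function and parameter names are ours) draws Poisson-distributed photon counts for a group of oversampling pixels, thresholds each count at ''q'' = 1 to produce one-bit readings, and then recovers the light intensity from the fraction of pixels set to 1, using the fact that for ''q'' = 1 the probability of a 1 is 1 − e<sup>−λ</sup>:

```python
import math
import random


def binary_pixel_readout(intensity, num_pixels, q=1, seed=0):
    """Simulate an array of oversampling binary pixels.

    Each pixel receives a Poisson-distributed photon count with mean
    `intensity`; its one-bit output is 1 if the count reaches the
    threshold q, else 0.  (Hypothetical sketch; not the article's code.)
    """
    rng = random.Random(seed)
    bits = []
    for _ in range(num_pixels):
        # Poisson sampling via Knuth's multiplication method
        # (adequate for the small mean counts considered here).
        count, p, limit = 0, 1.0, math.exp(-intensity)
        while True:
            p *= rng.random()
            if p <= limit:
                break
            count += 1
        bits.append(1 if count >= q else 0)
    return bits


def estimate_intensity(bits):
    """Invert the binary response for q = 1.

    P(bit = 1) = 1 - exp(-lambda), so lambda ~ -ln(1 - mean(bits)).
    """
    frac_ones = sum(bits) / len(bits)
    frac_ones = min(frac_ones, 1.0 - 1e-12)  # guard against log(0)
    return -math.log(1.0 - frac_ones)
```

With many binary pixels covering one optical resolution cell, the estimate converges to the true intensity, which is the sense in which spatial oversampling compensates for the one-bit quantization.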
Building a binary sensor that emulates the photographic film process was first envisioned by [[Eric Fossum|Fossum]],<ref name="DFS">[[Eric Fossum|E. R. Fossum]], What to do with sub-diffraction-limit (SDL) pixels? - A proposal for a gigapixel digital film sensor (DFS), in ''IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors'', Nagano, Japan, Jun. 2005, pp.214-217.</ref> who coined the name ''digital film sensor'' (now referred to as a ''quanta image sensor''<ref>E.R. Fossum, J. Ma, S. Masoodian, L. Anzagira, and R. Zizza, ''The quanta image sensor: every photon counts'', MDPI Sensors, vol. 16, no. 8, 1260; August 2016. {{doi|10.3390/s16081260}} (Special Issue on Photon-Counting Image Sensors)</ref>). The original motivation was mainly technical necessity. The [[miniaturization]] of camera systems calls for the continuous shrinking of pixel sizes. At a certain point, however, the limited full-well capacity (i.e., the maximum number of photoelectrons a pixel can hold) of small pixels becomes a bottleneck, yielding very low [[signal-to-noise ratio]]s (SNRs) and poor [[dynamic range]]s. In contrast, a binary sensor whose pixels need to detect only a few photoelectrons around a small threshold ''q'' places far lower demands on full-well capacity, allowing pixel sizes to shrink further.