The oversampled binary image sensor is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure is at least equal to a given threshold ''q''. One way to build such binary sensors is to modify standard memory chip technology, where each memory bit cell is designed to be sensitive to visible light.<ref name="RAMsensor">S. A. Ciarcia, A 64K-bit dynamic RAM chip is the visual sensor in this digital image camera, ''Byte Magazine'', pp.21-31, Sep. 1983.</ref> With current CMOS technology, the level of integration of such systems can reach 10<sup>9</sup> to 10<sup>10</sup> (i.e., 1 giga to 10 giga) pixels per chip. In this case, the corresponding pixel sizes (around 50&nbsp;nm<ref name="DRAM">Y. K. Park, S. H. Lee, J. W. Lee et al., Fully integrated 56nm DRAM technology for 1Gb DRAM, in ''IEEE Symposium on VLSI Technology'', Kyoto, Japan, Jun. 2007.</ref>) are far below the diffraction limit of light, and thus the image sensor is ''[[oversampling]]'' the optical resolution of the light field. Intuitively, one can exploit this spatial redundancy to compensate for the information loss due to one-bit quantization, as is classic in oversampling [[delta-sigma converter]]s.<ref name="ODSC">J. C. Candy and G. C. Temes, Oversampling Delta-Sigma Data Converters: Theory, Design, and Simulation. New York, NY: IEEE Press, 1992.</ref>
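The oversampling idea can be made concrete with a minimal sketch (Python, with hypothetical numbers not taken from the cited sources): many sub-diffraction binary pixels cover a single spot of light, and counting the pixels that output 1 yields a graded measurement of the local intensity, analogous to decimation in an oversampling delta-sigma converter.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, q = 4096, 1                    # oversampling factor and threshold (illustrative)
for photons_in_spot in (100, 1_000, 10_000):
    # Photons spread over N tiny pixels, each receiving a Poisson-distributed share.
    y = rng.poisson(photons_in_spot / N, size=N)
    b = (y >= q)                  # one-bit pixel outputs
    # The count of ones increases monotonically with the light level,
    # with a soft, film-like saturation at high intensities.
    print(photons_in_spot, int(b.sum()))
</syntaxhighlight>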
Building a binary sensor that emulates the photographic film process was first envisioned by [[Eric Fossum|Fossum]],<ref name="DFS">[[Eric Fossum|E. R. Fossum]], What to do with sub-diffraction-limit (SDL) pixels? - A proposal for a gigapixel digital film sensor (DFS), in ''IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors'', Nagano, Japan, Jun. 2005, pp.214-217.</ref> who coined the name ''digital film sensor'' (now referred to as a ''quanta image sensor''<ref>E. R. Fossum, J. Ma, S. Masoodian, L. Anzagira, and R. Zizza, The quanta image sensor: every photon counts, ''MDPI Sensors'', vol. 16, no. 8, 1260, Aug. 2016. {{doi|10.3390/s16081260|doi-access=free}} (Special Issue on Photon-Counting Image Sensors)</ref>). The original motivation was mainly technical necessity. The [[miniaturization]] of camera systems calls for the continuous shrinking of pixel sizes. At a certain point, however, the limited [[full-well capacity]] (i.e., the maximum number of photoelectrons a pixel can hold) of small pixels becomes a bottleneck, yielding very low [[signal-to-noise ratio]]s (SNRs) and poor [[dynamic range]]. In contrast, a binary sensor whose pixels need to detect only a few photoelectrons around a small threshold ''q'' places far lower demands on full-well capacity, allowing pixel sizes to shrink further.
==Imaging model==
===Lens===
[[File:Oversampled binary sensor imaging model.jpg|thumb|right|250px|Fig.1 The imaging model. The simplified architecture of a diffraction-limited imaging system. Incident light field <math>\lambda_0(x)</math> passes through an optical lens, which acts like a linear system with a diffraction-limited point spread function (PSF). The result is a smoothed light field <math>\lambda(x)</math>, which is subsequently captured by the image sensor.]]
Consider the simplified camera model shown in Fig.1. Here, <math>\lambda_0(x)</math> is the incoming light intensity field. By assuming that light intensities remain constant within a short exposure period, the field can be modeled as a function of the spatial variable <math>x</math> only. After passing through the optical system, the original light field <math>\lambda_0(x)</math> is filtered by the lens, which acts like a linear system with a given [[impulse response]]. Due to imperfections (e.g., aberrations) in the lens, the impulse response, also known as the [[point spread function]] (PSF) of the optical system, cannot be a Dirac delta, thus imposing a limit on the resolution of the observable light field. However, a more fundamental physical limit is due to light [[diffraction]].<ref name="Optics">M. Born and E. Wolf, ''[[Principles of Optics]]'', 7th ed. Cambridge: Cambridge University Press, 1999.</ref> As a result, even with an ideal lens, the PSF is still unavoidably a small blurry spot. In optics, such a diffraction-limited spot is often called the [[Airy disk]],<ref name="Optics"/> whose radius <math>R_a</math> can be computed as
:<math>R_a = 1.22 \, w f,</math>
where <math>w</math> is the wavelength of the light and <math>f</math> is the [[f-number]] of the optical system.
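For a sense of scale (the numbers here are illustrative, not taken from the cited sources), green light with <math>w = 550</math>&nbsp;nm imaged through an <math>f = 2.8</math> lens gives
:<math>R_a = 1.22 \times 550\,\text{nm} \times 2.8 \approx 1.9\,\mu\text{m},</math>
a radius dozens of times larger than the roughly 50&nbsp;nm pixels mentioned above, so a single diffraction-limited spot covers many binary pixels.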
[[File:binary sensor model.svg|thumb|right|480px|Fig.2 The model of the binary image sensor. The pixels (shown as "buckets") collect photons, the numbers of which are compared against a quantization threshold ''q''. The figure illustrates the case ''q'' = 2. The pixel outputs are binary: <math>b_m = 1</math> (i.e., white pixels) if there are at least two photons received by the pixel; otherwise, <math>b_m = 0</math> (i.e., gray pixels).]]
Fig.2 illustrates the binary sensor model. Here, <math>s_m</math> denotes the exposure value accumulated by the <math>m</math>th sensor pixel. Depending on the local value of <math>s_m</math>, each pixel (depicted as a "bucket" in the figure) collects a different number of photons hitting its surface. Let <math>y_m</math> be the number of photons impinging on the surface of the <math>m</math>th pixel during an [[exposure (photography)|exposure]] period. The relation between <math>s_m</math> and the photon count <math>y_m</math> is stochastic. More specifically, <math>y_m</math> can be modeled as a realization of a Poisson [[random variable]] whose intensity parameter is equal to <math>s_m</math>:
:<math>\operatorname{P}(y_m = k) = \frac{s_m^k \, e^{-s_m}}{k!}, \quad k = 0, 1, 2, \ldots</math>
As a [[photosensitive]] device, each pixel in the image sensor converts photons to an electrical signal whose amplitude is proportional to the number of photons impinging on that pixel. In a conventional sensor design, these analog signals are then quantized by an [[Analog-to-digital converter|A/D converter]] into 8 to 14 bits (with more bits generally giving finer quantization). In the binary sensor, by contrast, the quantizer uses a single bit. In Fig.2, <math>b_m</math> is the quantized output of the <math>m</math>th pixel. Since the photon counts <math>y_m</math> are realizations of random variables, so are the binary sensor outputs <math>b_m</math>.
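The per-pixel statistics implied by this model can be illustrated with a short sketch (a minimal Python example with hypothetical values of <math>s_m</math> and ''q'', not taken from the cited sources): the probability that a pixel outputs a 1 follows directly from the Poisson distribution of <math>y_m</math>, and a Monte Carlo simulation of the one-bit quantizer reproduces it.
<syntaxhighlight lang="python">
import math
import numpy as np

# Hypothetical per-pixel exposure and threshold (illustrative values only).
s_m, q = 1.5, 2

# Analytic probability of a "1": P(y_m >= q) = 1 - sum_{k < q} e^{-s_m} s_m^k / k!
p_one = 1.0 - sum(math.exp(-s_m) * s_m**k / math.factorial(k) for k in range(q))

# Monte Carlo check: draw Poisson photon counts and apply the one-bit quantizer.
rng = np.random.default_rng(1)
y = rng.poisson(s_m, size=1_000_000)   # simulated photon counts y_m
b = (y >= q)                           # simulated binary outputs b_m
print(p_one, b.mean())                 # the two values should agree closely
</syntaxhighlight>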
==Reconstruction==
[[File:SPAD EPFL BINARY IMAGES.png|thumb|right|268px|Fig.4 Reconstructing an image from the binary measurements taken by a SPAD<ref name="SPADS">L. Carrara, C. Niclass, N. Scheidegger, H. Shea, and E. Charbon, A gamma, X-ray and high energy proton radiation-tolerant CMOS image sensor for space applications, in ''IEEE International Solid-State Circuits Conference'', Feb. 2009, pp.40-41.</ref> sensor, with a spatial resolution of 32×32 pixels. The final image (lower-right corner) is obtained by incorporating 4096 consecutive frames, 11 of which are shown in the figure.]]
One of the most important challenges with the use of an oversampled binary image sensor is the reconstruction of the light intensity <math>\lambda(x)</math> from the binary measurements <math>b_m</math>. [[Maximum likelihood|Maximum likelihood estimation]] can be used to solve this problem.<ref name="bitsfromphotons" /> Fig. 4 shows the results of reconstructing the light intensity from 4096 binary images taken by a [[single photon avalanche diode]] (SPAD) camera.<ref name="SPADS" /> Better reconstruction quality with fewer temporal measurements, together with faster, hardware-friendly implementations, can be achieved by more sophisticated algorithms.<ref>{{Cite journal|title = Image reconstruction from dense binary pixels|arxiv = 1512.01774|journal = Signal Processing with Adaptive Sparse Structured Representations (SPARS 2015)|date = 2015-12-06|first = Or|last = Litany|first2 = Tal|last2 = Remez|first3 = Alex|last3 = Bronstein|bibcode = 2015arXiv151201774L}}</ref>
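As a minimal sketch of the idea (assuming a threshold of ''q'' = 1, as for a SPAD that fires on a single photon, and hypothetical exposure values; this is not necessarily the exact algorithm of the cited references), the maximum-likelihood estimate of a pixel's per-frame exposure from <math>K</math> independent binary frames has a closed form, because <math>\operatorname{P}(b_m = 1) = 1 - e^{-s_m}</math> when ''q'' = 1.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

K = 4096                                             # number of binary frames (as in Fig. 4)
s_true = rng.uniform(0.05, 2.0, size=(32, 32))       # hypothetical per-pixel exposures
frames = rng.poisson(s_true, size=(K, 32, 32)) >= 1  # K binary frames with threshold q = 1

# For q = 1, the ML estimate is s_hat = -ln(1 - fraction of frames that read 1).
ones_fraction = frames.mean(axis=0)
# Clip to avoid log(0) for pixels that fired in every frame (saturation).
s_hat = -np.log(np.clip(1.0 - ones_fraction, 1e-12, None))

print(float(np.mean(np.abs(s_hat - s_true))))        # small average estimation error
</syntaxhighlight>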