{{short description|none}}
An '''oversampled binary image sensor''' is an [[image sensor]] with non-linear response capabilities reminiscent of traditional [[photographic film]]: each pixel has a binary response, giving only a one-bit quantized measurement of the local light intensity.
==Working principle==
Before the advent of digital image sensors, photography, for the most part of its history, used film to record light information. At the heart of every photographic film are a large number of light-sensitive grains of [[silver-halide]] crystals.<ref name="filmphotography">T. H. James, The Theory of The Photographic Process, 4th ed., New York: Macmillan Publishing Co., Inc., 1977.</ref>
The oversampled binary image sensor is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure is at least equal to a given threshold ''q''. One way to build such binary sensors is to modify standard memory chip technology, where each memory bit cell is designed to be sensitive to visible light.<ref name="RAMsensor">S. A. Ciarcia, A 64K-bit dynamic RAM chip is the visual sensor in this digital image camera, ''Byte Magazine'', pp.21-31, Sep. 1983.</ref>
Building a binary sensor that emulates the photographic film process was first envisioned by [[Eric Fossum|Fossum]].<ref name="DFS">[[Eric Fossum|E. R. Fossum]], What to do with sub-diffraction-limit (SDL) pixels? &ndash; A proposal for a gigapixel digital film sensor (DFS), in ''IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors'', Nagano, Japan, Jun. 2005.</ref>
==Imaging model==
===Lens===
[[File:Oversampled binary sensor imaging model.jpg|thumb|right|250px|Fig.1 The imaging model. The simplified architecture of a diffraction-limited imaging system. Incident light field <math>\lambda_0(x)</math> passes through an optical lens, which acts like a linear system with a diffraction-limited point spread function (PSF). The result is a smoothed light field <math>\lambda(x)</math>, which is subsequently captured by the image sensor.]]
Consider the simplified camera model shown in Fig.1. Here <math>\lambda_0(x)</math> is the incoming light intensity field. Assuming that light intensities remain constant within a short exposure period, the field can be modeled as a function of the spatial variable <math>x</math> only. After passing through the optical system, the original light field <math>\lambda_0(x)</math> is filtered by the lens, which acts like a linear system with a given [[impulse response]]. Due to imperfections (e.g., aberrations) in the lens, the impulse response, a.k.a. the [[point spread function]] (PSF) of the optical system, cannot be a Dirac delta, thus imposing a limit on the resolution of the observable light field. A more fundamental physical limit, however, is due to light [[diffraction]].<ref name="Optics">M. Born and E. Wolf, ''[[Principles of Optics]]'', 7th ed., Cambridge: Cambridge University Press, 1999.</ref> As a result, even if the lens is ideal, the PSF is unavoidably a small blurry spot. In optics, such a diffraction-limited spot is often called the [[Airy disk]], whose radius <math>R_a</math> can be computed as
:<math>R_a = 1.22 \, w f,</math>
where <math>w</math> is the [[wavelength]] of the light and <math>f</math> is the [[F-number]] of the optical system. Due to the [[lowpass]] (smoothing) nature of the PSF, the resulting <math>\lambda(x)</math> has a finite spatial resolution, i.e., it has a finite number of [[Degrees of freedom (physics and chemistry)|degrees of freedom]] per unit space.
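As a numerical illustration of this diffraction limit (the wavelength and F-number below are chosen arbitrarily):

```python
# Radius of the Airy disk, R_a = 1.22 * w * f (illustrative values).
wavelength = 550e-9   # w: wavelength of green light, in meters
f_number = 2.8        # f: F-number of the optical system

airy_radius = 1.22 * wavelength * f_number
print(f"{airy_radius * 1e6:.2f} um")  # ≈ 1.88 um
```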
===Binary sensor===
[[File:binary sensor model.jpg|thumb|right|250px|Fig.2 The binary sensor model. Pixels (depicted as "buckets") collect photons, the numbers of which are compared against a quantization threshold ''q''.]]
Fig.2 illustrates the binary sensor model. Here <math>s_m</math> denotes the exposure values accumulated by the sensor pixels. Depending on the local values of <math>s_m</math>, each pixel (depicted as a "bucket" in the figure) collects a different number of photons hitting its surface. <math>y_m</math> is the number of photons impinging on the surface of the <math>m</math>th pixel during an [[Exposure (photography)|exposure]] period. The relation between <math>s_m</math> and <math>y_m</math> is stochastic: <math>y_m</math> can be modeled as a [[Poisson distribution|Poisson]] random variable whose mean is <math>s_m</math>.
As a [[photosensitive]] device, each pixel in the image sensor converts photons to electrical signals, whose amplitude is proportional to the number of photons impinging on that pixel. In a conventional sensor design, the analog electrical signals are then quantized by an [[Analog-to-digital converter|A/D converter]] into 8 to 14 bits (usually, the more bits the better). In the binary sensor, however, the quantizer is 1-bit. In Fig.2, <math>b_m</math> is the quantized output of the <math>m</math>th pixel. Since the photon counts <math>y_m</math> are random variables, so are the binary sensor outputs <math>b_m</math>.
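Assuming the photon count of each pixel is Poisson-distributed with mean equal to its exposure value (a standard model for photon arrival), the one-bit quantization can be sketched as follows (function and variable names are illustrative, not from the cited references):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def binary_sensor(exposures, q=1, rng=rng):
    """Simulate one readout of an oversampled binary sensor.

    exposures : array of exposure values s_m (mean photon counts per pixel).
    q         : quantization threshold (pixel flips to 1 at >= q photons).
    """
    # Photon counts y_m are Poisson-distributed with mean s_m.
    photons = rng.poisson(exposures)
    # One-bit quantization: pixel reads 1 iff at least q photons arrived.
    return (photons >= q).astype(np.uint8)

bits = binary_sensor(np.full(10000, 2.0), q=1)
print(bits.mean())  # fraction of "on" pixels, ≈ 1 - exp(-2) ≈ 0.865
```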
===Spatial and temporal oversampling===
If temporal oversampling is allowed, i.e., taking multiple consecutive and independent frames without changing the total exposure time <math>\tau</math>, the performance of the binary sensor is, under certain conditions, equivalent to that of a sensor with the same total number of measurements obtained by spatial oversampling.<ref name="bitsfromphotons">F. Yang, Y. M. Lu, L. Sbaiz, and M. Vetterli, Bits from photons: Oversampled image acquisition using binary Poisson statistics, ''IEEE Transactions on Image Processing'', vol. 21, no. 4, pp. 1421&ndash;1436, Apr. 2012.</ref>
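This equivalence can be illustrated with a small Monte Carlo sketch: with threshold ''q'' = 1, splitting a total exposure across ''K'' pixels or across ''K'' consecutive frames yields statistically identical binary measurements (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
s, K, trials = 5.0, 64, 20000   # total exposure s, oversampling factor K

# Spatial oversampling: K pixels share the light, each with exposure s/K.
spatial = rng.poisson(s / K, size=(trials, K)) >= 1
# Temporal oversampling: one pixel, K consecutive frames of exposure s/K.
temporal = rng.poisson(s / K, size=(trials, K)) >= 1

# With threshold q = 1, both schemes produce i.i.d. Bernoulli bits with
# the same "on" probability 1 - exp(-s/K), so their statistics agree.
print(spatial.mean(), temporal.mean())  # both ≈ 1 - exp(-5/64) ≈ 0.075
```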
==Advantages over traditional sensors==
Due to the limited full-well capacity of a conventional image pixel, the pixel saturates when the light intensity is too strong, which limits the pixel's dynamic range. In the oversampled binary image sensor, the dynamic range is defined not for a single pixel but for a group of pixels, which allows a much higher dynamic range.<ref name="bitsfromphotons" />
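A toy comparison may make this concrete. Assuming threshold ''q'' = 1 and a hypothetical full-well capacity, the light exposure on a group of ''K'' binary pixels can be recovered by inverting the probability <math>1 - e^{-s/K}</math> of a pixel turning on, long after a conventional pixel has clipped (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4096                      # binary pixels grouped into one output value
full_well = 500               # hypothetical full-well capacity (electrons)

for s in (10.0, 100.0, 1000.0, 3000.0):   # total exposure on the group
    # Conventional pixel: photon count clipped at the full-well capacity.
    conventional = min(rng.poisson(s), full_well)
    # Binary group (threshold q = 1): invert p_on = 1 - exp(-s/K).
    p_on = (rng.poisson(s / K, size=K) >= 1).mean()
    estimate = -K * np.log(1.0 - p_on)
    print(s, conventional, round(estimate))
```

The conventional pixel reports the same saturated value for every exposure above its full-well capacity, while the binary group keeps tracking the true exposure.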
==Reconstruction==
[[File:SPAD EPFL BINARY IMAGES.png|thumb|right|250px|Fig.3 Reconstruction of an image from binary measurements taken by a [[Single-photon avalanche diode|SPAD]] sensor.]]
One of the most important problems with the oversampled binary image sensor is the reconstruction of the light intensity <math>\lambda(x)</math> from the binary sensor measurements <math>b_m</math>. It can be solved by [[Maximum likelihood|maximum likelihood estimation]].<ref name="bitsfromphotons" />
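As a minimal sketch of such a reconstruction, the exposure of a uniform patch can be estimated from its binary measurements by maximizing the likelihood under the Poisson photon-count model (the threshold, values, and names below are illustrative; practical reconstructions estimate a spatially varying field):

```python
import numpy as np

rng = np.random.default_rng(3)
K, q, s_true = 10000, 2, 1.5     # pixels, threshold, true exposure per pixel

bits = rng.poisson(s_true, size=K) >= q        # binary measurements b_m

def log_likelihood(s, bits):
    # For threshold q = 2, P(b = 1) = 1 - e^{-s}(1 + s)  (Poisson tail).
    p1 = 1.0 - np.exp(-s) * (1.0 + s)
    ones = bits.sum()
    return ones * np.log(p1) + (len(bits) - ones) * np.log(1.0 - p1)

# Maximum likelihood estimate by a simple grid search over s.
grid = np.linspace(0.01, 10.0, 2000)
s_hat = grid[np.argmax([log_likelihood(s, bits) for s in grid])]
print(s_hat)  # close to s_true = 1.5
```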
== References ==
{{Reflist}}
[[Category:Digital photography]]
[[Category:Image sensors]]
[[Category:Image processing]]
[[Category:Digital signal processing]]
[[Category:Digital electronics]]