===Lens===
[[File:Oversampled binary sensor imaging model.jpg|thumb|right|250px|Fig.1 The imaging model. The simplified architecture of a diffraction-limited imaging system. Incident light field <math>\lambda_0(x)</math> passes through an optical lens, which acts like a linear system with a diffraction-limited point spread function (PSF). The result is a smoothed light field <math>\lambda(x)</math>, which is subsequently captured by the image sensor.]]
Consider the simplified camera model shown in Fig.1. Here, <math>\lambda_0(x)</math> is the incoming light intensity field. Assuming that light intensities remain constant within a short exposure period, the field can be modeled as a function of the spatial variable <math>x</math> alone. After passing through the optical system, the original light field <math>\lambda_0(x)</math> is filtered by the lens, which acts like a linear system with a given [[impulse response]]. Due to imperfections (e.g., aberrations) in the lens, the impulse response, also known as the [[point spread function]] (PSF) of the optical system, cannot be a Dirac delta, thus imposing a limit on the resolution of the observable light field. However, a more fundamental physical limit is due to light [[diffraction]].<ref name="Optics">M. Born and E. Wolf, ''[[Principles of Optics]]'', 7th ed. Cambridge: Cambridge University Press, 1999.</ref> As a result, even if the lens is ideal, the PSF is still unavoidably a small blurry spot. In optics, such a diffraction-limited spot is often called the [[Airy disk]],<ref name="Optics"/> whose radius <math>R_a</math> can be computed as
:<math>R_a = 1.22 \, w f,</math>
where <math>w</math> is the wavelength of the light and <math>f</math> is the [[F-number]] of the optical system.
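As an illustrative order-of-magnitude example (the wavelength and F-number here are assumed for the sake of the calculation, not taken from the text): for green light with <math>w \approx 0.55\,\mu\text{m}</math> and an f/8 lens, the radius of the diffraction-limited spot is roughly
:<math>R_a \approx 1.22 \times 0.55\,\mu\text{m} \times 8 \approx 5.4\,\mu\text{m},</math>
a spot that can span several pixels on a sensor with micrometre-scale pixel pitch.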
[[File:binary sensor model.svg|thumb|right|480px|Fig.2 The model of the binary image sensor. The pixels (shown as "buckets") collect photons, the numbers of which are compared against a quantization threshold ''q''. The figure illustrates the case ''q'' = 2. The pixel outputs are binary: <math>b_m = 1</math> (i.e., white pixels) if there are at least two photons received by the pixel; otherwise, <math>b_m = 0</math> (i.e., gray pixels).]]
Fig.2 illustrates the binary sensor model. Here, <math>s_m</math> denotes the exposure value accumulated by the <math>m</math>th sensor pixel. Depending on the local value of <math>s_m</math>, each pixel (depicted as a "bucket" in the figure) collects a different number of photons hitting its surface. Let <math>y_m</math> be the number of photons impinging on the surface of the <math>m</math>th pixel during an [[exposure (photography)|exposure]] period. The relation between <math>s_m</math> and the photon count <math>y_m</math> is stochastic; more specifically, <math>y_m</math> can be modeled as the realization of a Poisson [[random variable]] whose intensity parameter is equal to <math>s_m</math>.
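Written out, the Poisson model gives the probability of counting <math>k</math> photons at the <math>m</math>th pixel as
:<math>\Pr(y_m = k) = \frac{s_m^k \, e^{-s_m}}{k!}, \qquad k = 0, 1, 2, \ldots</math>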
As a [[photosensitive]] device, each pixel in the image sensor converts photons to electrical signals, whose amplitude is proportional to the number of photons impinging on that pixel. In a conventional sensor design, the analog electrical signals are then quantized by an [[Analog-to-digital converter|A/D converter]] into 8 to 14 bits, with more bits generally giving a finer representation of the signal. In the binary sensor, by contrast, the quantizer uses a single bit. In Fig.2, <math>b_m</math> is the quantized output of the <math>m</math>th pixel. Since the photon counts <math>y_m</math> are drawn from random variables, so are the binary sensor outputs <math>b_m</math>.
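The thresholding step can be made concrete with a short simulation. The following Python sketch is only an illustration of the model above (the exposure values, the threshold <math>q = 2</math>, and the use of NumPy are assumptions, not part of the original text): it draws Poisson photon counts <math>y_m</math> for a row of pixels and quantizes them into binary outputs <math>b_m</math>, mirroring Fig.2.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical exposure values s_m for a row of pixels
# (larger values correspond to brighter regions of the light field).
s = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])

# Photon counts y_m: Poisson random variables with intensity s_m.
y = rng.poisson(lam=s)

# One-bit quantization: b_m = 1 if at least q photons were collected.
q = 2
b = (y >= q).astype(int)

print("photon counts y_m:", y)
print("binary outputs b_m:", b)
</syntaxhighlight>

Running the sketch produces one random binary frame; different seeds give different realizations, reflecting the stochastic nature of <math>b_m</math> noted above.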