Quantization (signal processing)
{{Use American English|date=April 2019}}
 
[[File:Quantization error.png|thumb|upright=2|The simplest way to quantize a signal is to choose the digital amplitude value closest to the original analog amplitude. This example shows the original analog signal (green), the quantized signal (black dots), the [[Signal reconstruction|signal reconstructed]] from the quantized signal (yellow) and the difference between the original signal and the reconstructed signal (red). The difference between the original signal and the reconstructed signal is the quantization error and, in this simple quantization scheme, is a deterministic function of the input signal.]]
 
'''Quantization''', in mathematics and [[digital signal processing]], is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite [[Cardinality|number of elements]]. [[Rounding]] and [[truncation]] are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all [[lossy compression]] algorithms.
 
The difference between an input value and its quantized value (such as [[round-off error]]) is referred to as '''quantization error''', '''noise''' or '''distortion'''. A device or [[algorithm function|algorithmic function]] that performs quantization is called a '''quantizer'''. An [[analog-to-digital converter]] is an example of a quantizer.
 
==Example==
where the notation <math> \lfloor \ \rfloor </math> denotes the [[floor function]].
 
Alternatively, the same quantizer may be expressed in terms of the [[ceiling function]], as
:<math>Q(x) = \Delta \cdot \left\lceil \frac{x}{\Delta} - \frac{1}{2} \right\rceil</math>.
 
(The notation <math> \lceil \ \rceil </math> denotes the ceiling function).
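As an illustrative aside (a Python sketch, not part of the article), the floor and ceiling forms can be checked numerically; they agree everywhere except at exact half-step tie points, where the two forms resolve ties in opposite directions:

```python
import math

def q_floor(x, delta):
    # Q(x) = delta * floor(x/delta + 1/2): rounding to the nearest
    # multiple of delta, expressed with the floor function.
    return delta * math.floor(x / delta + 0.5)

def q_ceil(x, delta):
    # Q(x) = delta * ceil(x/delta - 1/2): the same quantizer
    # expressed with the ceiling function.
    return delta * math.ceil(x / delta - 0.5)

delta = 0.25
for x in [-1.3, -0.126, 0.0, 0.37, 2.874]:  # values away from tie points
    assert math.isclose(q_floor(x, delta), q_ceil(x, delta))
```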
 
The essential property of a quantizer is that it has a countable set of possible output values smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size <math>\Delta</math> is equal to 1. With <math>\Delta = 1</math> or with <math>\Delta</math> equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs.
 
When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the [[mean squared error]] produced by such a rounding operation will be approximately <math>\Delta^2/ 12</math>.<ref name=Sheppard>{{cite journal | last=Sheppard | first=W. F. |author-link=William Fleetwood Sheppard| title=On the Calculation of the most Probable Values of Frequency-Constants, for Data arranged according to Equidistant Division of a Scale | journal=Proceedings of the London Mathematical Society | publisher=Wiley | volume=s1-29 | issue=1 | year=1897 | issn=0024-6115 | doi=10.1112/plms/s1-29.1.353 | pages=353–380| url=https://zenodo.org/record/1447738 }}</ref><ref name=Bennett>W. R. Bennett, "[http://www.alcatel-lucent.com/bstj/vol27-1948/articles/bstj27-3-446.pdf Spectra of Quantized Signals]", ''[[Bell System Technical Journal]]'', Vol. 27, pp. 446–472, July 1948.</ref><ref name=OliverPierceShannon>{{cite journal | last1=Oliver | first1=B.M. | last2=Pierce | first2=J.R. | last3=Shannon | first3=C.E. |author-link3=Claude Shannon| title=The Philosophy of PCM | journal=Proceedings of the IRE | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=36 | issue=11 | year=1948 | issn=0096-8390 | doi=10.1109/jrproc.1948.231941 | pages=1324–1331| s2cid=51663786 }}</ref><ref name=Stein>Seymour Stein and J. Jay Jones, ''[https://books.google.com/books/about/Modern_communication_principles.html?id=jBc3AQAAIAAJ Modern Communication Principles]'', [[McGraw–Hill]], {{ISBN|978-0-07-061003-3}}, 1967 (p. 196).</ref><ref name=GishPierce>{{cite journal | last1=Gish | first1=H. | last2=Pierce | first2=J. 
| title=Asymptotically efficient quantizing | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=14 | issue=5 | year=1968 | issn=0018-9448 | doi=10.1109/tit.1968.1054193 | pages=676–683}}</ref><ref name=GrayNeuhoff>{{cite journal | last1=Gray | first1=R.M. |author-link=Robert M. Gray| last2=Neuhoff | first2=D.L. | title=Quantization | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=44 | issue=6 | year=1998 | issn=0018-9448 | doi=10.1109/18.720541 | pages=2325–2383| s2cid=212653679 }}</ref> Mean squared error is also called the quantization ''noise power''. Adding one bit to the quantizer halves the value of Δ, which reduces the noise power by the factor {{sfrac|1|4}}. In terms of [[decibel]]s, the noise power change is <math>\scriptstyle 10\cdot \log_{10}(1/4)\ \approx\ -6\ \mathrm{dB}.</math>
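As an illustration (not from the article), the <math>\Delta^2/12</math> approximation is easy to check with a quick Monte Carlo experiment, rounding uniformly distributed samples to the nearest multiple of the step size:

```python
import random

random.seed(0)
delta = 0.1      # quantization step size
n = 200_000      # number of random samples

# Accumulate the squared rounding error over many samples whose
# variation is large relative to delta.
mse = 0.0
for _ in range(n):
    x = random.uniform(-10.0, 10.0)
    q = delta * round(x / delta)   # round to nearest multiple of delta
    mse += (x - q) ** 2
mse /= n

# mse should be close to delta**2 / 12, i.e. about 8.33e-4 here
assert abs(mse - delta**2 / 12) < 0.05 * (delta**2 / 12)
```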
 
Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, which can be referred to as the ''classification'' stage (or ''forward quantization'' stage) and the ''reconstruction'' stage (or ''inverse quantization'' stage), where the classification stage maps the input value to an integer ''quantization index'' <math>k</math> and the reconstruction stage maps the index <math>k</math> to the ''reconstruction value'' <math>y_k</math> that is the output approximation of the input value. For the example uniform quantizer described above, the forward quantization stage can be expressed as
:<math>k = \left\lfloor \frac{x}{\Delta} + \frac{1}{2} \right\rfloor</math>,
and the reconstruction stage for this example quantizer is simply
:<math>y_k = k \cdot \Delta</math>.
 
==Mathematical properties==
Because quantization is a many-to-few mapping, it is an inherently [[Nonlinear system|non-linear]] and irreversible process (i.e., because the same output value is shared by multiple input values, it is impossible, in general, to recover the exact input value when given only the output value).
 
The set of possible input values may be infinitely large, and may possibly be continuous and therefore [[uncountable]] (such as the set of all real numbers, or all real numbers within some limited range). The set of possible output values may be [[finite set|finite]] or [[Countable set|countably infinite]].<ref name=GrayNeuhoff/> The input and output sets involved in quantization can be defined in a rather general way. For example, vector quantization is the application of quantization to multi-dimensional (vector-valued) input data.<ref>{{cite book |author1=Allen Gersho |author-link=Allen Gersho |author2=Robert M. Gray |author-link2=Robert M. Gray |url=https://books.google.com/books?id=DwcDm6xgItUC |title=Vector Quantization and Signal Compression |publisher=[[Springer Science+Business Media|Springer]] |isbn=978-0-7923-9181-4 |date=1991}}</ref>
 
==Types==
[[File:2-bit resolution analog comparison.png|thumbnail|2-bit resolution with four levels of quantization compared to analog.<ref>Hodgson, Jay (2010). ''Understanding Records'', p. 56. {{ISBN|978-1-4411-5607-5}}. Adapted from Franz, David (2004). ''Recording and Producing in the Home Studio'', pp. 38–39. Berklee Press.</ref>]]
[[File:3-bit resolution analog comparison.png|thumbnail|3-bit resolution with eight levels.]]
 
===Analog-to-digital converter===
An [[analog-to-digital converter]] (ADC) can be modeled as two processes: [[Sampling (signal processing)|sampling]] and quantization. Sampling converts a time-varying voltage signal into a [[discrete-time signal]], a sequence of real numbers. Quantization replaces each real number with an approximation from a finite set of discrete values. Most commonly, these discrete values are represented as fixed-point words. Though any number of quantization levels is possible, common word lengths are [[audio bit depth|8-bit]] (256 levels), 16-bit (65,536 levels) and 24-bit (16.8&nbsp;million levels). Quantizing a sequence of numbers produces a sequence of quantization errors, which is sometimes modeled as an additive random signal called '''quantization noise''' because of its [[stochastic]] behavior. The more levels a quantizer uses, the lower its quantization noise power.
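As a rough sketch of the quantization half of this model (illustrative Python; the helper name is hypothetical, not an established API), samples in [−1, 1] can be mapped to signed fixed-point codes for a given word length:

```python
import math

def quantize_to_bits(samples, bits):
    """Map samples in [-1.0, 1.0] to signed fixed-point codes with
    2**bits levels (a simple mid-tread uniform quantizer with clamping)."""
    levels = 2 ** bits
    lo, hi = -levels // 2, levels // 2 - 1  # e.g. -128..127 for 8 bits
    scale = levels / 2
    codes = []
    for x in samples:
        code = round(x * scale)
        codes.append(max(lo, min(hi, code)))  # clamp to the word range
    return codes

# A full-scale sine sampled 1000 times, quantized to 8 bits (256 levels).
sine = [math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
codes = quantize_to_bits(sine, 8)
assert min(codes) >= -128 and max(codes) <= 127
```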
 
===Rate–distortion optimization===
''[[Rate–distortion theory|Rate–distortion optimized]]'' quantization is encountered in [[source coding]] for lossy data compression algorithms, where the purpose is to manage distortion within the limits of the [[bit rate]] supported by a communication channel or storage medium. The analysis of quantization in this context involves studying the amount of data (typically measured in digits or bits or bit ''rate'') that is used to represent the output of the quantizer, and studying the loss of precision that is introduced by the quantization process (which is referred to as the ''distortion'').
 
===Mid-riser and mid-tread uniform quantizers===
Most uniform quantizers for signed input data can be classified as being of one of two types: ''mid-riser'' and ''mid-tread''. The terminology is based on what happens in the region around the value 0, and uses the analogy of viewing the input-output function of the quantizer as a [[stairway]]. Mid-tread quantizers have a zero-valued reconstruction level (corresponding to a ''tread'' of a stairway), while mid-riser quantizers have a zero-valued classification threshold (corresponding to a ''[[Stair riser|riser]]'' of a stairway).<ref name=Gersho77>{{cite journal | last=Gersho | first=A. |author-link=Allen Gersho | title=Quantization | journal=IEEE Communications Society Magazine | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=15 | issue=5 | year=1977 | issn=0148-9615 | doi=10.1109/mcom.1977.1089500 | pages=16–28| s2cid=260498692 }}</ref>
 
Mid-tread quantization involves rounding. The formulas for mid-tread uniform quantization are provided in the previous section.
 
 
Mid-riser quantization involves truncation. The input-output formula for a mid-riser uniform quantizer is given by:
:<math>Q(x) = \Delta \cdot \left( \left\lfloor \frac{x}{\Delta} \right\rfloor + \frac{1}{2} \right)</math>,
where the function {{no break|<math>\sgn</math>(&nbsp;)}} is the [[sign function]] (also known as the ''signum'' function). The general reconstruction rule for such a dead-zone quantizer is given by
:<math>y_k = \sgn(k) \cdot\left(\frac{w}{2}+\Delta\cdot (|k|-1+r_k)\right)</math>,
where <math>r_k</math> is a reconstruction offset value in the range of 0 to 1 as a fraction of the step size. Ordinarily, <math>0 \le r_k \le \tfrac1{2}</math> when quantizing input data with a typical [[probability density function]] (PDF) that is symmetric around zero and reaches its peak value at zero (such as a [[Gaussian distribution|Gaussian]], [[Laplacian distribution|Laplacian]], or [[generalized Gaussian distribution|generalized Gaussian]] PDF). Although <math>r_k</math> may depend on <math>k</math> in general, and can be chosen to fulfill the optimality condition described below, it is often simply set to a constant, such as <math>\tfrac1{2}</math>. (Note that in this definition, <math>y_0 = 0</math> due to the definition of the {{no break|<math>\sgn</math>(&nbsp;)}} function, so <math>r_0</math> has no effect.)
 
A very commonly used special case (e.g., the scheme typically used in financial accounting and elementary mathematics) is to set <math>w=\Delta</math> and <math>r_k=\tfrac1{2}</math> for all <math>k</math>. In this case, the dead-zone quantizer is also a uniform quantizer, since the central dead-zone of this quantizer has the same width as all of its other steps, and all of its reconstruction values are equally spaced as well.
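A small Python sketch of these rules (the reconstruction follows the formula above; the classification rule used here is a common choice assumed for illustration, not quoted from this section):

```python
import math

def sgn(v):
    # sign function: -1, 0, or +1
    return (v > 0) - (v < 0)

def classify(x, delta, w):
    # Assumed dead-zone classification rule, chosen to be consistent
    # with the reconstruction formula: |x| < w/2 maps to index k = 0.
    return sgn(x) * max(0, math.floor((abs(x) - w / 2) / delta) + 1)

def reconstruct(k, delta, w, r=0.5):
    # y_k = sgn(k) * (w/2 + delta * (|k| - 1 + r_k)), with y_0 = 0
    if k == 0:
        return 0.0
    return sgn(k) * (w / 2 + delta * (abs(k) - 1 + r))

# With w = delta and r = 1/2, the dead-zone quantizer reduces to
# plain rounding to the nearest multiple of delta.
delta = 1.0
for x in [-2.3, -0.4, 0.0, 0.6, 1.7]:   # values away from tie points
    y = reconstruct(classify(x, delta, w=delta), delta, w=delta)
    assert y == round(x)
```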
 
===Additive noise model===
A common assumption for the analysis of [[quantization error]] is that it affects a signal processing system in a similar manner to that of additive [[white noise]] – having negligible correlation with the signal and an approximately flat [[power spectral density]].<ref name=Bennett/><ref name=GrayNeuhoff/><ref name=Widrow1>{{cite journal | last=Widrow | first=B. |author-link=Bernard Widrow| title=A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory | journal=IRE Transactions on Circuit Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=3 | issue=4 | year=1956 | issn=0096-2007 | doi=10.1109/tct.1956.1086334 | pages=266–276| hdl=1721.1/12139 | s2cid=16777461 | hdl-access=free }}</ref><ref name=Widrow2>{{cite journal |last1=Widrow |first1=B. |author-link=Bernard Widrow |title=Statistical analysis of amplitude-quantized sampled-data systems |journal=Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry |date=1961 |volume=79 |issue=6 |pages=555–568 |url=http://www-isl.stanford.edu/~widrow/papers/j1961statisticalanalysis.pdf |doi=10.1109/TAI.1961.6371702 |access-date=2012-08-17 |archive-url=https://web.archive.org/web/20110401025420/http://www-isl.stanford.edu/~widrow/papers/j1961statisticalanalysis.pdf |archive-date=2011-04-01 |url-status=dead }}</ref> The additive noise model is commonly used for the analysis of quantization error effects in digital filtering systems, and it can be very useful in such analysis. It has been shown to be a valid model in cases of high-resolution quantization (small <math>\Delta</math> relative to the signal strength) with smooth PDFs.<ref name=Bennett/><ref name=MarcoNeuhoff>{{cite journal | last1=Marco | first1=D. | last2=Neuhoff | first2=D.L. 
| title=The Validity of the Additive Noise Model for Uniform Scalar Quantizers | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=51 | issue=5 | year=2005 | issn=0018-9448 | doi=10.1109/tit.2005.846397 | pages=1739–1755| s2cid=14819261 }}</ref>
 
Additive noise behavior is not always a valid assumption. Quantization error (for quantizers defined as described here) is deterministically related to the signal and not entirely independent of it. Thus, periodic signals can create periodic quantization noise. And in some cases, it can even cause [[limit cycle]]s to appear in digital signal processing systems. One way to ensure effective independence of the quantization error from the source signal is to perform ''[[dither]]ed quantization'' (sometimes with ''[[noise shaping]]''), which involves adding random (or [[pseudo-random]]) noise to the signal prior to quantization.<ref name=GrayNeuhoff/><ref name=Widrow2/>
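The effect can be illustrated with a small Python experiment (not from the article): for an input lying between two levels, plain quantization always makes the same deterministic error, while averaging over a rectangular dither one step wide drives the mean error to zero:

```python
import random

random.seed(1)
delta = 1.0

def quantize(x):
    # round to the nearest multiple of delta
    return delta * round(x / delta)

x = 0.3
plain_error = quantize(x) - x   # deterministic: always -0.3 for this x

# Rectangular dither spanning one full step, added before quantization;
# the mean of many dithered outputs tracks the input value itself.
dithered = [quantize(x + random.uniform(-delta / 2, delta / 2))
            for _ in range(100_000)]
avg = sum(dithered) / len(dithered)
assert abs(avg - x) < 0.02
```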
 
===Quantization error models===
In the typical case, the original signal is much larger than one [[least significant bit]] (LSB). When this is the case, the quantization error is not significantly correlated with the signal, and has an approximately [[uniform distribution (continuous)|uniform distribution]]. When rounding is used to quantize, the quantization error has a [[mean]] of zero and the [[root mean square]] (RMS) value is the [[standard deviation]] of this distribution, given by <math>\scriptstyle {\frac{1}{\sqrt{12}}}\mathrm{LSB}\ \approx\ 0.289\,\mathrm{LSB}</math>. When truncation is used, the error has a non-zero mean of <math>\scriptstyle {\frac{1}{2}}\mathrm{LSB}</math> and the RMS value is <math>\scriptstyle {\frac{1}{\sqrt{3}}}\mathrm{LSB}</math>. Although rounding yields less RMS error than truncation, the difference is only due to the static (DC) term of <math>\scriptstyle {\frac{1}{2}}\mathrm{LSB}</math>. The RMS values of the AC error are exactly the same in both cases, so there is no special advantage of rounding over truncation in situations where the DC term of the error can be ignored (such as in AC-coupled systems). In either case, the standard deviation, as a percentage of the full signal range, changes by a factor of 2 for each 1-bit change in the number of quantization bits. The potential signal-to-quantization-noise power ratio therefore changes by 4, or <math>\scriptstyle 10\cdot \log_{10}(4)</math>, approximately 6&nbsp;dB per bit.
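A numerical check of these figures (an illustrative Python sketch, with the LSB taken as 1):

```python
import math
import random

random.seed(0)
n = 200_000
errs_round, errs_trunc = [], []
for _ in range(n):
    x = random.uniform(0.0, 1000.0)       # signal much larger than 1 LSB
    errs_round.append(x - round(x))       # rounding error
    errs_trunc.append(x - math.floor(x))  # truncation error

def mean(e):
    return sum(e) / len(e)

def rms(e):
    return math.sqrt(sum(v * v for v in e) / len(e))

# Rounding: zero mean, RMS ~ 1/sqrt(12) ~ 0.289 LSB.
# Truncation: mean ~ 1/2 LSB, RMS ~ 1/sqrt(3) ~ 0.577 LSB.
assert abs(rms(errs_round) - 1 / math.sqrt(12)) < 0.01
assert abs(rms(errs_trunc) - 1 / math.sqrt(3)) < 0.01

# Removing the DC term leaves the same AC RMS in both cases.
def ac(e):
    return math.sqrt(rms(e) ** 2 - mean(e) ** 2)

assert abs(ac(errs_round) - ac(errs_trunc)) < 0.01
```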
 
At lower amplitudes the quantization error becomes dependent on the input signal, resulting in distortion. This distortion is created after the anti-aliasing filter, and if these distortion products lie above half the sample rate they will alias back into the band of interest. To make the quantization error independent of the input signal, the signal is dithered by adding noise to it. This slightly reduces the signal-to-noise ratio, but can completely eliminate the distortion.
 
===Quantization noise model===
[[File:Frequency spectrum of a sinusoid and its quantization noise floor.gif|thumb|300px|Comparison of quantizing a sinusoid to 64 levels (6 bits) and 256 levels (8 bits). The additive noise created by 6-bit quantization is 12 dB greater than the noise created by 8-bit quantization. When the spectral distribution is flat, as in this example, the 12 dB difference manifests as a measurable difference in the noise floors.]]
 
Quantization noise is a [[Model (abstract)|model]] of quantization error introduced by quantization in the ADC. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The noise is non-linear and signal-dependent. It can be modeled in several different ways.
 
In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB, and the signal has a uniform distribution covering all quantization levels, the [[signal-to-quantization-noise ratio]] (SQNR) can be calculated from
:<math>\mathrm{SQNR} = 20 \log_{10}\left(2^N\right) \approx 6.02 \cdot N\ \mathrm{dB},</math>
where <math>N</math> is the number of quantization bits.
# Given a maximum bit rate constraint <math>R \le R_\max</math>, minimize the distortion <math>D</math>
 
Often the solution to these problems can be equivalently (or approximately) expressed and solved by converting the formulation to the unconstrained problem <math>\min\left\{ D + \lambda \cdot R \right\}</math> where the [[Lagrange multiplier]] <math>\lambda</math> is a non-negative constant that establishes the appropriate balance between rate and distortion. Solving the unconstrained problem is equivalent to finding a point on the [[convex hull]] of the family of solutions to an equivalent constrained formulation of the problem. However, finding a solution – especially a [[Closed-form expression|closed-form]] solution – to any of these three problem formulations can be difficult. Solutions that do not require multi-dimensional iterative optimization techniques have been published for only three PDFs: the uniform,<ref>{{cite journal | last1=Farvardin | first1=N. |author-link=Nariman Farvardin| last2=Modestino | first2=J. | title=Optimum quantizer performance for a class of non-Gaussian memoryless sources | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=30 | issue=3 | year=1984 | issn=0018-9448 | doi=10.1109/tit.1984.1056920 | pages=485–497}}(Section VI.C and Appendix B)</ref> [[Exponential distribution|exponential]],<ref name=SullivanIT>{{cite journal | last=Sullivan | first=G.J. |author-link=Gary Sullivan (engineer)| title=Efficient scalar quantization of exponential and Laplacian random variables | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=42 | issue=5 | year=1996 | issn=0018-9448 | doi=10.1109/18.532878 | pages=1365–1374}}</ref> and [[Laplace distribution|Laplacian]]<ref name=SullivanIT/> distributions. Iterative optimization approaches can be used to find solutions in other cases.<ref name=GrayNeuhoff/><ref name=Berger72>{{cite journal | last=Berger | first=T. 
|author-link=Toby Berger| title=Optimum quantizers and permutation codes | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=18 | issue=6 | year=1972 | issn=0018-9448 | doi=10.1109/tit.1972.1054906 | pages=759–765}}</ref><ref name=Berger82>{{cite journal | last=Berger | first=T. |author-link=Toby Berger| title=Minimum entropy quantizers and permutation codes | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=28 | issue=2 | year=1982 | issn=0018-9448 | doi=10.1109/tit.1982.1056456 | pages=149–157}}</ref>
 
Note that the reconstruction values <math>\{y_k\}_{k=1}^{M}</math> affect only the distortion – they do not affect the bit rate – and that each individual <math>y_k</math> makes a separate contribution <math> d_k </math> to the total distortion as shown below:
:<math> D=E[(x-Q(x))^2] = \int_{-\infty}^{\infty} (x-Q(x))^2f(x)dx = \sum_{k=1}^{M} \int_{b_{k-1}}^{b_k} (x-y_k)^2 f(x)dx =\sum_{k=1}^{M} d_k </math>.
 
Finding an optimal solution to the above problem results in a quantizer sometimes called a MMSQE (minimum mean-square quantization error) solution, and the resulting PDF-optimized (non-uniform) quantizer is referred to as a ''Lloyd–Max'' quantizer, named after two people who independently developed iterative methods<ref name=GrayNeuhoff/><ref>{{cite journal | last=Lloyd | first=S. | title=Least squares quantization in PCM | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=28 | issue=2 | year=1982 | issn=0018-9448 | doi=10.1109/tit.1982.1056489 | pages=129–137| s2cid=10833328 | citeseerx=10.1.1.131.1338 }} (work documented in a manuscript circulated for comments at [[Bell Labs|Bell Laboratories]] with a department log date of 31 July 1957 and also presented at the 1957 meeting of the [[Institute of Mathematical Statistics]], although not formally published until 1982).</ref><ref>{{cite journal | last=Max | first=J. | title=Quantizing for minimum distortion | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=6 | issue=1 | year=1960 | issn=0018-9448 | doi=10.1109/tit.1960.1057548 | pages=7–12| bibcode=1960ITIT....6....7M }}</ref> to solve the two sets of simultaneous equations resulting from <math> {\partial D / \partial b_k} = 0 </math> and <math>{\partial D/ \partial y_k} = 0 </math>, as follows:
:<math> {\partial D \over\partial b_k} = 0 \Rightarrow b_k = {y_k + y_{k+1} \over 2} </math>,
which places each threshold at the midpoint between each pair of reconstruction values, and
:<math> {\partial D \over\partial y_k} = 0 \Rightarrow y_k = \frac{\int_{b_{k-1}}^{b_k} x f(x)dx}{\int_{b_{k-1}}^{b_k} f(x)dx} </math>,
which places each reconstruction value at the centroid (conditional expected value) of its associated classification interval.
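These two conditions suggest the alternating iteration that Lloyd's method uses. A sketch follows (illustrative Python, using an empirical sample in place of an analytic PDF; the target values in the final comment are the classic 4-level results for a standard Gaussian source):

```python
import random

random.seed(0)
# Empirical stand-in for the source PDF: standard Gaussian samples.
data = [random.gauss(0.0, 1.0) for _ in range(20_000)]

y = [-1.5, -0.5, 0.5, 1.5]   # initial reconstruction values (M = 4)
for _ in range(50):
    # Thresholds at the midpoints of neighboring reconstruction values.
    b = [(y[k] + y[k + 1]) / 2 for k in range(len(y) - 1)]
    # Reconstruction values at the centroid (sample mean) of each cell.
    cells = [[] for _ in y]
    for x in data:
        cells[sum(x > t for t in b)].append(x)
    y = [sum(c) / len(c) for c in cells]

# Max (1960) gives optimal levels near +/-0.4528 and +/-1.510 for a
# standard Gaussian; the iteration should land close to these.
assert abs(y[2] - 0.4528) < 0.05 and abs(y[3] - 1.510) < 0.1
```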
 
[[Lloyd's algorithm|Lloyd's Method I algorithm]], originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the [[Linde–Buzo–Gray algorithm|Linde–Buzo–Gray (LBG)]] or [[k-means clustering|k-means]] classifier optimization methods. Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.<ref name=ChouLookabaughGray>{{cite journal | last1=Chou | first1=P.A. | last2=Lookabaugh | first2=T. | last3=Gray | first3=R.M. |author-link3=Robert M. Gray| title=Entropy-constrained vector quantization | journal=IEEE Transactions on Acoustics, Speech, and Signal Processing | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=37 | issue=1 | year=1989 | issn=0096-3518 | doi=10.1109/29.17498 | pages=31–42}}</ref>
 
===Uniform quantization and the 6&nbsp;dB/bit approximation===
With an input signal uniformly distributed over all <math>2^N</math> levels of an <math>N</math>-bit uniform quantizer, the resulting signal-to-quantization-noise ratio is
:<math>{\rm SQNR}= 20\log_{10}{2^N} = N\cdot(20\log_{10}2) = N\cdot 6.0206\,\rm{dB}</math>,
 
or approximately 6&nbsp;dB per bit. For example, for <math>N</math>=8 bits, <math>M</math>=256 levels and SQNR = 8×6 = 48&nbsp;dB; and for <math>N</math>=16 bits, <math>M</math>=65,536 and SQNR = 16×6 = 96&nbsp;dB. The property of 6&nbsp;dB improvement in SQNR for each extra bit used in quantization is a well-known figure of merit. However, it must be used with care: this derivation is only for a uniform quantizer applied to a uniform source. For other source PDFs and other quantizer designs, the SQNR may be somewhat different from that predicted by 6&nbsp;dB/bit, depending on the type of PDF, the type of source, the type of quantizer, and the bit rate range of operation.
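The figure of merit can be reproduced directly in Python (an illustration of the arithmetic above, under the same uniform-source assumption):

```python
import math

def sqnr_db(bits):
    # SQNR = 20*log10(2**N) = N * 6.0206... dB, valid for a uniform
    # quantizer applied to a uniformly distributed full-scale source.
    return 20 * math.log10(2 ** bits)

assert abs(sqnr_db(8) - 48.16) < 0.01   # about 8 x 6 dB
assert abs(sqnr_db(16) - 96.33) < 0.01  # about 16 x 6 dB
```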
 
However, it is common to assume that for many sources, the slope of a quantizer SQNR function can be approximated as 6&nbsp;dB/bit when operating at a sufficiently high bit rate. At asymptotically high bit rates, cutting the step size in half increases the bit rate by approximately 1 bit per sample (because 1 bit is needed to indicate whether the value is in the left or right half of the prior double-sized interval) and reduces the mean squared error by a factor of 4 (i.e., 6&nbsp;dB) based on the <math>\Delta^2/12</math> approximation.
==In other fields==
{{See also|Quantum noise|Quantum limit}}
Many physical quantities are actually quantized by physical entities. Examples of fields where this limitation applies include [[electronics]] (due to [[electron]]s), [[optics]] (due to [[photon]]s), [[biology]] (due to [[DNA]]), [[physics]] (due to [[Planck limits]]) and [[chemistry]] (due to [[molecule]]s).
 
==See also==
* [[Discretization]]
* [[Discretization error]]
* [[Least count]]
* [[Posterization]]
* [[Pulse-code modulation]]
 
==Further reading==
* {{cite book |url=http://www.mit.bme.hu/books/quantization/ |title=Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications |author1=Bernard Widrow |author2=István Kollár |date=2007 |publisher=Cambridge University Press |isbn=9780521886710}}
 
{{DSP}}
[[Category:Signal processing]]
[[Category:Telecommunication theory]]
[[Category:Data compression]]