Quantization (signal processing): Difference between revisions

The essential property of a quantizer is having a countable set of possible output values smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size <math>\Delta</math> is equal to 1. With <math>\Delta = 1</math> or with <math>\Delta</math> equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs.
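As a concrete illustration, rounding to the nearest multiple of the step size can be sketched as follows (a minimal example assuming NumPy; the function name is illustrative):

```python
import numpy as np

def uniform_quantize(x, step=1.0):
    # Round each input to the nearest integer multiple of the step size.
    return step * np.round(np.asarray(x) / step)

# With step = 1 this is plain rounding: real-valued inputs, integer-valued outputs.
print(uniform_quantize([0.2, 1.7, -2.4]))   # [ 0.  2. -2.]
# Any other step size scales the output grid accordingly.
print(uniform_quantize(1.3, step=0.5))      # 1.5
```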
 
When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the [[mean squared error]] produced by such a rounding operation will be approximately <math>\Delta^2/ 12</math>.<ref name=Sheppard>{{cite journal | last=Sheppard | first=W. F. |author-link=William Fleetwood Sheppard| title=On the Calculation of the most Probable Values of Frequency-Constants, for Data arranged according to Equidistant Division of a Scale | journal=Proceedings of the London Mathematical Society | publisher=Wiley | volume=s1-29 | issue=1 | year=1897 | issn=0024-6115 | doi=10.1112/plms/s1-29.1.353 | pages=353–380| url=https://zenodo.org/record/1447738 }}</ref><ref name=Bennett>W. R. Bennett, "[http://www.alcatel-lucent.com/bstj/vol27-1948/articles/bstj27-3-446.pdf Spectra of Quantized Signals]", ''[[Bell System Technical Journal]]'', Vol. 27, pp. 446–472, July 1948.</ref><ref name=OliverPierceShannon>{{cite journal | last1=Oliver | first1=B.M. | last2=Pierce | first2=J.R. | last3=Shannon | first3=C.E. |author-link3=Claude Shannon| title=The Philosophy of PCM | journal=Proceedings of the IRE | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=36 | issue=11 | year=1948 | issn=0096-8390 | doi=10.1109/jrproc.1948.231941 | pages=1324–1331| s2cid=51663786 }}</ref><ref name=Stein>Seymour Stein and J. Jay Jones, ''[https://books.google.com/books/about/Modern_communication_principles.html?id=jBc3AQAAIAAJ Modern Communication Principles]'', [[McGraw–Hill]], {{ISBN|978-0-07-061003-3}}, 1967 (p. 196).</ref><ref name=GishPierce>{{cite journal | last1=Gish | first1=H. | last2=Pierce | first2=J. 
| title=Asymptotically efficient quantizing | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=14 | issue=5 | year=1968 | issn=0018-9448 | doi=10.1109/tit.1968.1054193 | pages=676–683}}</ref><ref name=GrayNeuhoff>{{cite journal | last1=Gray | first1=R.M. |author-link=Robert M. Gray| last2=Neuhoff | first2=D.L. | title=Quantization | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=44 | issue=6 | year=1998 | issn=0018-9448 | doi=10.1109/18.720541 | pages=2325–2383| s2cid=212653679 }}</ref> Mean squared error is also called the quantization ''noise power''. Adding one bit to the quantizer halves the value of Δ, which reduces the noise power by the factor {{sfrac|1|4}}. In terms of [[decibel]]s, the noise power change is <math>\scriptstyle 10\cdot \log_{10}(1/4)\ \approx\ -6\ \mathrm{dB}.</math>
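The <math>\Delta^2/12</math> approximation and the roughly 6&nbsp;dB-per-bit behavior can be checked numerically. The sketch below (assuming NumPy, with an arbitrary uniform test signal chosen for illustration) compares the measured noise power against <math>\Delta^2/12</math> for two step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-4.0, 4.0, 1_000_000)   # test signal, wide relative to the step

noise_power = {}
for step in (0.5, 0.25):                # halving the step is like adding one bit
    err = step * np.round(x / step) - x
    noise_power[step] = np.mean(err ** 2)
    print(step, noise_power[step], step ** 2 / 12)   # measured vs. Delta^2/12

# Halving the step reduces the noise power by about a factor of 4 (~6 dB).
print(10 * np.log10(noise_power[0.25] / noise_power[0.5]))
```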
 
Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, which can be referred to as the ''classification'' stage (or ''forward quantization'' stage) and the ''reconstruction'' stage (or ''inverse quantization'' stage). The classification stage maps the input value to an integer ''quantization index'' <math>k</math>, and the reconstruction stage maps the index <math>k</math> to the ''reconstruction value'' <math>y_k</math> that serves as the output approximation of the input value. For the example uniform quantizer described above, the forward quantization stage can be expressed as
:<math> k = \left\lfloor {x \over \Delta} + {1 \over 2} \right\rfloor </math>
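The two-stage view can be sketched in code (assuming NumPy; names are illustrative) for a mid-tread uniform quantizer of step size <math>\Delta</math>:

```python
import numpy as np

DELTA = 0.5  # example step size

def classify(x):
    # Forward quantization: map the input value to an integer quantization index k.
    return int(np.floor(x / DELTA + 0.5))

def reconstruct(k):
    # Inverse quantization: map the index k to the reconstruction value y_k = k * DELTA.
    return k * DELTA

k = classify(1.3)         # floor(1.3/0.5 + 0.5) = floor(3.1) = 3
print(k, reconstruct(k))  # 3 1.5
```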
 
===Additive noise model===
A common assumption for the analysis of quantization error is that it affects a signal processing system in a similar manner to that of additive [[white noise]] – having negligible correlation with the signal and an approximately flat [[power spectral density]].<ref name=Bennett/><ref name=GrayNeuhoff/><ref name=Widrow1>{{cite journal | last=Widrow | first=B. |author-link=Bernard Widrow| title=A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory | journal=IRE Transactions on Circuit Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=3 | issue=4 | year=1956 | issn=0096-2007 | doi=10.1109/tct.1956.1086334 | pages=266–276| hdl=1721.1/12139 | s2cid=16777461 | hdl-access=free }}</ref><ref name=Widrow2>{{cite journal |last1=Widrow |first1=B. |author-link=Bernard Widrow |title=Statistical analysis of amplitude-quantized sampled-data systems |journal=Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry |date=1961 |volume=79 |issue=6 |pages=555–568 |url=http://www-isl.stanford.edu/~widrow/papers/j1961statisticalanalysis.pdf |doi=10.1109/TAI.1961.6371702 |archive-date=2011-04-01 |access-date=2012-08-17 |archive-url=https://web.archive.org/web/20110401025420/http://www-isl.stanford.edu/~widrow/papers/j1961statisticalanalysis.pdf |url-status=dead }}</ref> The additive noise model is commonly used for the analysis of quantization error effects in digital filtering systems, where it can be very useful. It has been shown to be a valid model in cases of high-resolution quantization (small <math>\Delta</math> relative to the signal strength) with smooth PDFs.<ref name=Bennett/><ref name=MarcoNeuhoff>{{cite journal | last1=Marco | first1=D. | last2=Neuhoff | first2=D.L. 
| title=The Validity of the Additive Noise Model for Uniform Scalar Quantizers | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=51 | issue=5 | year=2005 | issn=0018-9448 | doi=10.1109/tit.2005.846397 | pages=1739–1755| s2cid=14819261 }}</ref>
 
Additive noise behavior is not always a valid assumption. Quantization error (for quantizers defined as described here) is deterministically related to the signal and not entirely independent of it. Thus, periodic signals can create periodic quantization noise, and in some cases the quantization error can even cause [[limit cycle]]s to appear in digital signal processing systems. One way to ensure effective independence of the quantization error from the source signal is to perform ''[[dither]]ed quantization'' (sometimes with ''[[noise shaping]]''), which involves adding random (or [[pseudo-random]]) noise to the signal prior to quantization.<ref name=GrayNeuhoff/><ref name=Widrow2/>
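The effect of dither can be illustrated numerically. In the sketch below (assuming NumPy; the subtractive-dither setup shown is one common variant), uniform noise one step wide is added before quantization and subtracted afterwards, leaving an error that is bounded and essentially uncorrelated with the periodic input:

```python
import numpy as np

rng = np.random.default_rng(1)
step = 0.25
t = np.arange(50_000)
x = np.sin(2 * np.pi * t / 50)                  # periodic input signal

def quantize(v):
    return step * np.round(v / step)

err_plain = quantize(x) - x                     # deterministic, signal-dependent error
d = rng.uniform(-step / 2, step / 2, x.size)    # uniform dither, one step wide
err_dith = quantize(x + d) - d - x              # error after subtractive dither

# Correlation of the error with the signal, without and with dither.
print(np.corrcoef(x, err_plain)[0, 1], np.corrcoef(x, err_dith)[0, 1])
```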
# Given a maximum bit rate constraint <math>R \le R_\max</math>, minimize the distortion <math>D</math>
 
Often the solution to these problems can be equivalently (or approximately) expressed and solved by converting the formulation to the unconstrained problem <math>\min\left\{ D + \lambda \cdot R \right\}</math> where the [[Lagrange multiplier]] <math>\lambda</math> is a non-negative constant that establishes the appropriate balance between rate and distortion. Solving the unconstrained problem is equivalent to finding a point on the [[convex hull]] of the family of solutions to an equivalent constrained formulation of the problem. However, finding a solution – especially a [[Closed-form expression|closed-form]] solution – to any of these three problem formulations can be difficult. Solutions that do not require multi-dimensional iterative optimization techniques have been published for only three PDFs: the uniform,<ref>{{cite journal | last1=Farvardin | first1=N. |author-link=Nariman Farvardin| last2=Modestino | first2=J. | title=Optimum quantizer performance for a class of non-Gaussian memoryless sources | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=30 | issue=3 | year=1984 | issn=0018-9448 | doi=10.1109/tit.1984.1056920 | pages=485–497}}(Section VI.C and Appendix B)</ref> [[Exponential distribution|exponential]],<ref name=SullivanIT>{{cite journal | last=Sullivan | first=G.J. |author-link=Gary Sullivan (engineer)| title=Efficient scalar quantization of exponential and Laplacian random variables | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=42 | issue=5 | year=1996 | issn=0018-9448 | doi=10.1109/18.532878 | pages=1365–1374}}</ref> and [[Laplace distribution|Laplacian]]<ref name=SullivanIT/> distributions. Iterative optimization approaches can be used to find solutions in other cases.<ref name=GrayNeuhoff/><ref name=Berger72>{{cite journal | last=Berger | first=T. 
|author-link=Toby Berger| title=Optimum quantizers and permutation codes | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=18 | issue=6 | year=1972 | issn=0018-9448 | doi=10.1109/tit.1972.1054906 | pages=759–765}}</ref><ref name=Berger82>{{cite journal | last=Berger | first=T. |author-link=Toby Berger| title=Minimum entropy quantizers and permutation codes | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=28 | issue=2 | year=1982 | issn=0018-9448 | doi=10.1109/tit.1982.1056456 | pages=149–157}}</ref>
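As a toy illustration of the Lagrangian formulation, suppose a family of quantizers offers a few hypothetical (rate, distortion) operating points (the numbers below are made up for illustration). Minimizing <math>D + \lambda \cdot R</math> then selects a point on the lower convex hull of the operating points, with larger <math>\lambda</math> favoring lower rate:

```python
# Hypothetical (rate in bits, distortion) operating points for a quantizer family.
points = [(1, 0.36), (2, 0.09), (3, 0.023), (4, 0.006)]

def best_point(lam):
    # Minimize the Lagrangian cost D + lam * R over the candidate points.
    return min(points, key=lambda p: p[1] + lam * p[0])

for lam in (0.005, 0.05, 0.3):
    print(lam, best_point(lam))   # larger lambda favors lower-rate points
```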
 
Note that the reconstruction values <math>\{y_k\}_{k=1}^{M}</math> affect only the distortion – they do not affect the bit rate – and that each individual <math>y_k</math> makes a separate contribution <math> d_k </math> to the total distortion as shown below:
:<math> D=E[(x-Q(x))^2] = \int_{-\infty}^{\infty} (x-Q(x))^2f(x)dx = \sum_{k=1}^{M} \int_{b_{k-1}}^{b_k} (x-y_k)^2 f(x)dx =\sum_{k=1}^{M} d_k </math>.
 
Finding an optimal solution to the above problem results in a quantizer sometimes called a MMSQE (minimum mean-square quantization error) solution, and the resulting PDF-optimized (non-uniform) quantizer is referred to as a ''Lloyd–Max'' quantizer, named after two people who independently developed iterative methods<ref name=GrayNeuhoff/><ref>{{cite journal | last=Lloyd | first=S. | title=Least squares quantization in PCM | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=28 | issue=2 | year=1982 | issn=0018-9448 | doi=10.1109/tit.1982.1056489 | pages=129–137| s2cid=10833328 | citeseerx=10.1.1.131.1338 }} (work documented in a manuscript circulated for comments at [[Bell Laboratories]] with a department log date of 31 July 1957 and also presented at the 1957 meeting of the [[Institute of Mathematical Statistics]], although not formally published until 1982).</ref><ref>{{cite journal | last=Max | first=J. | title=Quantizing for minimum distortion | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=6 | issue=1 | year=1960 | issn=0018-9448 | doi=10.1109/tit.1960.1057548 | pages=7–12| bibcode=1960ITIT....6....7M }}</ref> to solve the two sets of simultaneous equations resulting from <math> {\partial D / \partial b_k} = 0 </math> and <math>{\partial D/ \partial y_k} = 0 </math>, as follows:
:<math> {\partial D \over\partial b_k} = 0 \Rightarrow b_k = {y_k + y_{k+1} \over 2} </math>,
which places each threshold at the midpoint between each pair of reconstruction values, and
:<math> {\partial D \over\partial y_k} = 0 \Rightarrow y_k = {\int_{b_{k-1}}^{b_k} x f(x)dx \over \int_{b_{k-1}}^{b_k} f(x)dx} </math>,
which places each reconstruction value at the centroid (conditional expected value) of its associated classification interval.
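The two conditions above suggest the alternating iteration used in such procedures. A minimal data-based sketch (assuming NumPy, with empirical cell means standing in for the centroid integrals) is:

```python
import numpy as np

def lloyd_max(samples, M, iters=100):
    # Initialize reconstruction values from quantiles of the data.
    y = np.quantile(samples, (np.arange(M) + 0.5) / M)
    for _ in range(iters):
        b = (y[:-1] + y[1:]) / 2           # thresholds at midpoints of the y_k
        idx = np.searchsorted(b, samples)  # classify each sample into an interval
        for k in range(M):
            cell = samples[idx == k]
            if cell.size:
                y[k] = cell.mean()         # reconstruction value at the cell centroid
    return y

rng = np.random.default_rng(2)
y = lloyd_max(rng.normal(size=200_000), M=4)
print(y)  # approaches the known 4-level Gaussian optimum, about ±0.4528 and ±1.510
```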
 
[[Lloyd's algorithm|Lloyd's Method I algorithm]], originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the [[Linde–Buzo–Gray algorithm|Linde–Buzo–Gray (LBG)]] or [[k-means]] classifier optimization methods. Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.<ref name=ChouLookabaughGray>{{cite journal | last1=Chou | first1=P.A. | last2=Lookabaugh | first2=T. | last3=Gray | first3=R.M. |author-link3=Robert M. Gray| title=Entropy-constrained vector quantization | journal=IEEE Transactions on Acoustics, Speech, and Signal Processing | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=37 | issue=1 | year=1989 | issn=0096-3518 | doi=10.1109/29.17498 | pages=31–42}}</ref>
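A sketch of the vector generalization follows (assuming NumPy; this is the plain k-means/LBG iteration without an entropy constraint, and the two-cluster data set is made up for illustration):

```python
import numpy as np

def lbg_kmeans(X, M, iters=50):
    # Initialize the codebook from samples spread across the data set.
    codebook = X[np.linspace(0, len(X) - 1, M).astype(int)].copy()
    for _ in range(iters):
        # Classification: assign each vector to its nearest codeword.
        dist = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        idx = dist.argmin(axis=1)
        # Reconstruction update: move each codeword to the centroid of its cell.
        for k in range(M):
            if np.any(idx == k):
                codebook[k] = X[idx == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(3)
X = np.concatenate([rng.normal(c, 0.1, (500, 2)) for c in ([0.0, 0.0], [2.0, 2.0])])
cb = lbg_kmeans(X, M=2)
print(cb)   # codewords near the two cluster centers
```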
 
===Uniform quantization and the 6&nbsp;dB/bit approximation===