'''Quantization''', in mathematics and [[digital signal processing]], is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite [[number of elements]]. [[Rounding]] and [[truncation]] are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all [[lossy compression]] algorithms.
The difference between an input value and its quantized value (such as [[round-off error]]) is referred to as '''quantization error''', '''noise''' or '''distortion'''.
==Example==
The essential property of a quantizer is having a countable set of possible output values smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size <math>\Delta</math> is equal to 1. With <math>\Delta = 1</math> or with <math>\Delta</math> equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs.
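As an illustrative sketch (Python with NumPy; not part of any formal definition), such a rounding-based uniform quantizer can be written as:

```python
import numpy as np

def uniform_quantize(x, step=1.0):
    """Map x to the nearest multiple of the step size (Delta)."""
    return step * np.round(np.asarray(x) / step)

# With step = 1 this is ordinary rounding to the nearest integer.
print(uniform_quantize([0.4, 1.7, -2.3]))             # multiples of 1
print(uniform_quantize([0.4, 1.7, -2.3], step=0.5))   # multiples of 0.5
```

Note that NumPy's `round` uses round-half-to-even, one of several common tie-breaking conventions for inputs exactly halfway between two output values.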
When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the [[mean squared error]] produced by such a rounding operation will be approximately <math>\Delta^2/12</math>.<ref name=Sheppard>{{cite journal | last=Sheppard | first=W. F. |author-link=William Fleetwood Sheppard| title=On the Calculation of the most Probable Values of Frequency-Constants, for Data arranged according to Equidistant Division of a Scale | journal=Proceedings of the London Mathematical Society | publisher=Wiley | volume=s1-29 | issue=1 | year=1897 | issn=0024-6115 | doi=10.1112/plms/s1-29.1.353 | pages=353–380| url=https://zenodo.org/record/1447738 }}</ref><ref name=Bennett>W. R. Bennett, "[http://www.alcatel-lucent.com/bstj/vol27-1948/articles/bstj27-3-446.pdf Spectra of Quantized Signals]", ''[[Bell System Technical Journal]]'', Vol. 27, pp. 446–472, July 1948.</ref><ref name=OliverPierceShannon>{{cite journal | last1=Oliver | first1=B.M. | last2=Pierce | first2=J.R. | last3=Shannon | first3=C.E. |author-link3=Claude Shannon| title=The Philosophy of PCM | journal=Proceedings of the IRE | volume=36 | year=1948 | pages=1324–1331}}</ref>
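The <math>\Delta^2/12</math> approximation can be checked numerically (an illustrative Python sketch; the uniform test signal and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1
# A signal whose variation is large relative to the step size.
x = rng.uniform(-1.0, 1.0, size=100_000)
q = delta * np.round(x / delta)         # uniform (rounding) quantizer
mse = np.mean((x - q) ** 2)
print(mse, delta**2 / 12)               # the two values agree closely
```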
Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, which can be referred to as the ''classification'' stage (or ''forward quantization'' stage) and the ''reconstruction'' stage (or ''inverse quantization'' stage), where the classification stage maps the input value to an integer ''quantization index'' <math>k</math> and the reconstruction stage maps the index <math>k</math> to the ''reconstruction value'' <math>y_k</math> that is the output approximation of the input value. For the example uniform quantizer described above, the forward quantization stage can be expressed as
:<math>k = \left\lfloor \frac{x}{\Delta} + \frac{1}{2} \right\rfloor ,</math>
and the reconstruction stage for this example quantizer is simply
:<math>y_k = k \cdot \Delta .</math>
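The two-stage decomposition of this uniform quantizer can be sketched as follows (illustrative Python; the helper names are arbitrary):

```python
import math

def classify(x, step):
    """Forward quantization: map the input to an integer index k."""
    return math.floor(x / step + 0.5)

def reconstruct(k, step):
    """Inverse quantization: map index k to the reconstruction value y_k."""
    return step * k

x, step = 3.7, 1.0
k = classify(x, step)          # k = 4
y = reconstruct(k, step)       # y = 4.0, the approximation of x
```

In a communication system, only the index <math>k</math> needs to be transmitted; the receiver applies the reconstruction stage.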
===Mid-riser and mid-tread uniform quantizers===
Most uniform quantizers for signed input data can be classified as being of one of two types: ''mid-riser'' and ''mid-tread''. The terminology is based on what happens in the region around the value 0, and uses the analogy of viewing the input-output function of the quantizer as a [[stairway]]. Mid-tread quantizers have a zero-valued reconstruction level (corresponding to a ''tread'' of a stairway), while mid-riser quantizers have a zero-valued classification threshold (corresponding to a ''[[Stair riser|riser]]'' of a stairway).<ref name=Gersho77>{{cite journal | last=Gersho | first=A. |author-link=Allen Gersho | title=Quantization | journal=IEEE Communications Society Magazine | year=1977}}</ref>
Mid-tread quantization involves rounding. The formulas for mid-tread uniform quantization are provided in the previous section.
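The distinction can be sketched as follows (illustrative Python; a step size of 1 is assumed). The mid-tread quantizer maps small inputs to exactly zero, while the mid-riser quantizer never outputs zero:

```python
import math

def mid_tread(x, step):
    # Zero is a reconstruction level: inputs near 0 quantize to exactly 0.
    return step * math.floor(x / step + 0.5)

def mid_riser(x, step):
    # Zero is a classification threshold: outputs sit at (k + 1/2) * step.
    return step * (math.floor(x / step) + 0.5)

print(mid_tread(0.2, 1.0))   # -> 0.0
print(mid_riser(0.2, 1.0))   # -> 0.5
```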
===Additive noise model===
A common assumption for the analysis of quantization error is that it affects a signal processing system in a similar manner to that of additive [[white noise]] – having negligible correlation with the signal and an approximately flat [[power spectral density]].<ref name=Bennett/><ref name=GrayNeuhoff/><ref name=Widrow1>{{cite journal | last=Widrow | first=B. |author-link=Bernard Widrow| title=A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory | journal=IRE Transactions on Circuit Theory | year=1956}}</ref>
Additive noise behavior is not always a valid assumption. Quantization error (for quantizers defined as described here) is deterministically related to the signal and not entirely independent of it. Thus, periodic signals can create periodic quantization noise. And in some cases, it can even cause [[limit cycle]]s to appear in digital signal processing systems. One way to ensure effective independence of the quantization error from the source signal is to perform ''[[dither]]ed quantization'' (sometimes with ''[[noise shaping]]''), which involves adding random (or [[pseudo-random]]) noise to the signal prior to quantization.<ref name=GrayNeuhoff/><ref name=Widrow2/>
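A subtractive-dither scheme can be sketched as follows (illustrative Python; noise shaping is not shown). Adding uniform noise spanning one quantization step before quantizing, and subtracting it afterward, leaves an error that stays bounded by half a step and is statistically independent of the signal:

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.25

def quantize(x):
    return delta * np.round(x / delta)

# A periodic input produces periodic, signal-correlated quantization error.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = 0.6 * np.sin(2 * np.pi * 5 * t)
plain_error = quantize(x) - x

# Subtractive dither: add uniform noise over one step, quantize, subtract.
d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
dithered_error = quantize(x + d) - d - x
```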
# Given a maximum bit rate constraint <math>R \le R_\max</math>, minimize the distortion <math>D</math>
Often the solution to these problems can be equivalently (or approximately) expressed and solved by converting the formulation to the unconstrained problem <math>\min\left\{ D + \lambda \cdot R \right\}</math> where the [[Lagrange multiplier]] <math>\lambda</math> is a non-negative constant that establishes the appropriate balance between rate and distortion. Solving the unconstrained problem is equivalent to finding a point on the [[convex hull]] of the family of solutions to an equivalent constrained formulation of the problem. However, finding a solution – especially a [[Closed-form expression|closed-form]] solution – to any of these three problem formulations can be difficult. Solutions that do not require multi-dimensional iterative optimization techniques have been published for only three PDFs: the uniform,<ref>{{cite journal | last1=Farvardin | first1=N. |author-link=Nariman Farvardin| last2=Modestino | first2=J. | title=Optimum quantizer performance for a class of non-Gaussian memoryless sources | journal=IEEE Transactions on Information Theory | year=1984}}</ref> exponential, and Laplacian distributions.
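The Lagrangian selection can be sketched as follows (illustrative Python; the listed rate–distortion operating points are hypothetical numbers standing in for, e.g., uniform quantizers with different step sizes):

```python
# Hypothetical (rate, distortion) operating points for a family of quantizers.
operating_points = [(1.0, 0.50), (2.0, 0.20), (3.0, 0.09), (4.0, 0.05)]

def best_point(lam):
    """Pick the operating point minimizing the Lagrangian cost D + lam * R."""
    return min(operating_points, key=lambda p: p[1] + lam * p[0])

# A small lambda favors low distortion; a large lambda favors low rate.
print(best_point(0.01))   # -> (4.0, 0.05)
print(best_point(1.0))    # -> (1.0, 0.50)
```

Sweeping <math>\lambda</math> from 0 to infinity traces out the convex hull of achievable operating points.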
Note that the reconstruction values <math>\{y_k\}_{k=1}^{M}</math> affect only the distortion – they do not affect the bit rate – and that each individual <math>y_k</math> makes a separate contribution <math> d_k </math> to the total distortion as shown below:
:<math> D=E[(x-Q(x))^2] = \int_{-\infty}^{\infty} (x-Q(x))^2f(x)dx = \sum_{k=1}^{M} \int_{b_{k-1}}^{b_k} (x-y_k)^2 f(x)dx =\sum_{k=1}^{M} d_k </math>.
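The distortion integral can be evaluated numerically; for instance (an illustrative Python sketch, assuming a standard Gaussian PDF, a uniform mid-tread quantizer, and an arbitrary integration grid):

```python
import numpy as np

# Evaluate D = E[(x - Q(x))^2] for a standard Gaussian input and a
# uniform rounding quantizer with step size Delta = 0.5.
delta = 0.5
grid = np.linspace(-8.0, 8.0, 2_000_001)
dx = grid[1] - grid[0]
pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)   # Gaussian f(x)
q = delta * np.round(grid / delta)                # Q(x) on the grid
D = np.sum((grid - q) ** 2 * pdf) * dx            # Riemann-sum integral
print(D, delta**2 / 12)   # D is close to the high-resolution approximation
```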
Finding an optimal solution to the above problem results in a quantizer sometimes called a MMSQE (minimum mean-square quantization error) solution, and the resulting PDF-optimized (non-uniform) quantizer is referred to as a ''Lloyd–Max'' quantizer, named after two people who independently developed iterative methods<ref name=GrayNeuhoff/><ref>{{cite journal | last=Lloyd | first=S. | title=Least squares quantization in PCM | journal=IEEE Transactions on Information Theory | volume=28 | year=1982 | pages=129–137}}</ref> to solve the two sets of simultaneous equations resulting from <math>{\partial D \over\partial b_k} = 0</math> and <math>{\partial D \over\partial y_k} = 0</math>, as follows:
:<math> {\partial D \over\partial b_k} = 0 \Rightarrow b_k = {y_k + y_{k+1} \over 2} </math>,
which places each threshold at the midpoint between each pair of consecutive reconstruction values, and
:<math> {\partial D \over\partial y_k} = 0 \Rightarrow y_k = \frac{\int_{b_{k-1}}^{b_k} x f(x)\,dx}{\int_{b_{k-1}}^{b_k} f(x)\,dx} </math>,
which places each reconstruction value at the centroid (conditional expected value) of its associated classification interval.
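The alternation between the two conditions can be sketched on an empirical sample (illustrative Python; the Gaussian training data, initialization, and iteration count are arbitrary choices):

```python
import numpy as np

# Lloyd's Method I on sample data: alternate the midpoint-threshold and
# centroid conditions until the quantizer settles.
rng = np.random.default_rng(0)
x = rng.normal(size=50_000)         # training data from a unit Gaussian
y = np.linspace(-2.0, 2.0, 4)       # initial reconstruction values (M = 4)

for _ in range(50):
    b = (y[:-1] + y[1:]) / 2        # thresholds at midpoints of the y_k
    idx = np.digitize(x, b)         # classification stage
    y = np.array([x[idx == k].mean() for k in range(len(y))])  # centroids

# For a unit Gaussian and M = 4, the optimal levels are about ±0.453, ±1.510.
print(y)
```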
[[Lloyd's algorithm|Lloyd's Method I algorithm]], originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the [[Linde–Buzo–Gray algorithm|Linde–Buzo–Gray (LBG)]] or [[k-means]] classifier optimization methods. Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.<ref name=ChouLookabaughGray>{{cite journal | last1=Chou | first1=P.A. | last2=Lookabaugh | first2=T. | last3=Gray | first3=R.M. |author-link3=Robert M. Gray| title=Entropy-constrained vector quantization | journal=IEEE Transactions on Acoustics, Speech, and Signal Processing | volume=37 | year=1989 | pages=31–42}}</ref>
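The vector generalization can be sketched in two dimensions (illustrative Python; the Gaussian training set, codebook size, and iteration count are arbitrary choices). It is the same two-step alternation, with nearest-neighbor classification replacing threshold comparison:

```python
import numpy as np

# LBG / k-means sketch: classify vectors to nearest codewords, then move
# each codeword to the centroid of its classification cell.
rng = np.random.default_rng(0)
x = rng.normal(size=(20_000, 2))                    # training vectors
codebook = x[rng.choice(len(x), 8, replace=False)]  # M = 8 initial codewords

for _ in range(30):
    # Classification: squared distance from every vector to every codeword.
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    # Update: centroid of each cell (codeword kept if its cell is empty).
    codebook = np.array([x[idx == k].mean(axis=0) if np.any(idx == k)
                         else codebook[k] for k in range(len(codebook))])
```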
===Uniform quantization and the 6 dB/bit approximation===
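In the high-resolution limit, each additional bit of a uniform quantizer improves the signal-to-quantization-noise ratio by approximately 6.02 dB; for a full-scale sine-wave input, the resulting SNR is approximately 6.02·''b'' + 1.76 dB for ''b'' bits. This can be checked numerically (an illustrative Python sketch using a mid-riser quantizer scaled to the input range):

```python
import numpy as np

b = 8                                     # bits per sample
delta = 2.0 / 2**b                        # step size covering [-1, 1]
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
x = np.sin(2 * np.pi * t)                 # full-scale sine wave

# Mid-riser quantizer: outputs at (k + 1/2) * delta, no zero level.
q = delta * (np.floor(x / delta) + 0.5)
snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - q) ** 2))
print(snr_db)                             # close to 6.02*b + 1.76 ≈ 49.9 dB
```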
==See also==
* [[Discretization]]
* [[Discretization error]]
* [[Least count]]
* [[Posterization]]
* [[Pulse-code modulation]]
==Further reading==
* {{cite book |url=http://www.mit.bme.hu/books/quantization/ |title=Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications |author1=Bernard Widrow |author2=István Kollár |date=2007 |publisher=Cambridge University Press |isbn=9780521886710}}
{{DSP}}