Quantization (signal processing)
The difference between an input value and its quantized value (such as [[round-off error]]) is referred to as '''quantization error'''. A device or [[algorithm function|algorithmic function]] that performs quantization is called a '''quantizer'''. An [[analog-to-digital converter]] is an example of a quantizer.
 
== Example ==
For example, [[Rounding#Round half up|rounding]] a [[real number]] <math>x</math> to the nearest integer value forms a very basic type of quantizer – a ''uniform'' one. A typical (''mid-tread'') uniform quantizer with a quantization ''step size'' equal to some value <math>\Delta</math> can be expressed as
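As an illustrative sketch (the function name is ours, not from the article), the standard mid-tread rule <math>Q(x) = \Delta \cdot \left\lfloor \frac{x}{\Delta} + \frac{1}{2} \right\rfloor</math> can be written as:

```python
import math

def mid_tread_quantize(x, step):
    """Uniform mid-tread quantizer: round x to the nearest multiple of step."""
    # Equivalent to step * round(x / step), with half-way values rounded up.
    return step * math.floor(x / step + 0.5)
```

With <math>\Delta = 1</math> this reduces to ordinary rounding of a real number to the nearest integer, as in the example above.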
 
 
===Analog-to-digital converter===
An [[analog-to-digital converter]] (ADC) can be modeled as two processes: [[Sampling (signal processing)|sampling]] and quantization. Sampling converts a time-varying voltage signal into a [[discrete-time signal]], a sequence of real numbers. Quantization replaces each real number with an approximation from a finite set of discrete values. Most commonly, these discrete values are represented as fixed-point words. Though any number of quantization levels is possible, common word-lengths are [[audio bit depth|8-bit]] (256 levels), [[16-bit]] (65,536 levels) and [[24-bit computing|24-bit]] (16.8&nbsp;million levels). Quantizing a sequence of numbers produces a sequence of quantization errors which is sometimes modeled as an additive random signal called '''quantization noise''' because of its [[stochastic]] behavior. The more levels a quantizer uses, the lower its quantization noise power.
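The drop in noise power with word-length can be checked numerically. The following toy sketch (illustrative function names, a uniformly distributed test signal, and a simple mid-tread grid are all our assumptions, not an ADC model) quantizes the same signal at 8 and 16 bits; each extra bit halves the step size and so cuts noise power by roughly a factor of four:

```python
import math
import random

def quantize_to_bits(samples, bits, full_scale=1.0):
    """Mid-tread uniform quantization of samples in [-full_scale, full_scale)."""
    step = 2 * full_scale / 2 ** bits   # 2**bits levels across the full range
    return [step * math.floor(s / step + 0.5) for s in samples]

def noise_power(samples, quantized):
    """Mean squared quantization error (the 'quantization noise' power)."""
    return sum((s - q) ** 2 for s, q in zip(samples, quantized)) / len(samples)

random.seed(0)
signal = [random.uniform(-1, 1) for _ in range(10000)]
p8 = noise_power(signal, quantize_to_bits(signal, 8))
p16 = noise_power(signal, quantize_to_bits(signal, 16))
# p8 / p16 comes out near 4**8, i.e. the 8 extra bits buy ~48 dB less noise.
```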
 
===Rate–distortion optimization===
''[[Rate–distortion theory|Rate–distortion optimized]]'' quantization is encountered in [[source coding]] for lossy data compression algorithms, where the purpose is to manage distortion within the limits of the [[bit rate]] supported by a communication channel or storage medium. The analysis of quantization in this context involves studying the amount of data (typically measured in digits or bits or bit ''rate'') that is used to represent the output of the quantizer, and studying the loss of precision that is introduced by the quantization process (which is referred to as the ''distortion'').
 
=== Mid-riser and mid-tread uniform quantizers ===
Most uniform quantizers for signed input data can be classified as being of one of two types: ''mid-riser'' and ''mid-tread''. The terminology is based on what happens in the region around the value 0, and uses the analogy of viewing the input-output function of the quantizer as a [[stairway]]. Mid-tread quantizers have a zero-valued reconstruction level (corresponding to a ''tread'' of a stairway), while mid-riser quantizers have a zero-valued classification threshold (corresponding to a ''[[Stair riser|riser]]'' of a stairway).<ref name=Gersho77>{{cite journal | last=Gersho | first=A. |author-link=Allen Gersho| title=Quantization | journal=IEEE Communications Society Magazine | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=15 | issue=5 | year=1977 | issn=0148-9615 | doi=10.1109/mcom.1977.1089500 | pages=16–28}}</ref>
 
In general, a mid-riser or mid-tread quantizer may not actually be a ''uniform'' quantizer – i.e., the sizes of the quantizer's classification [[interval (mathematics)|intervals]] may not all be the same, or the spacing between its possible output values may not all be the same. The distinguishing characteristic of a mid-riser quantizer is that it has a classification threshold value that is exactly zero, and the distinguishing characteristic of a mid-tread quantizer is that it has a reconstruction value that is exactly zero.<ref name=Gersho77/>
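The different behavior around zero can be seen in a small sketch (illustrative code, assuming the common ''uniform'' form of each type):

```python
import math

def mid_tread(x, step):
    # Zero is a reconstruction level: a "tread" of the staircase straddles x = 0.
    return step * math.floor(x / step + 0.5)

def mid_riser(x, step):
    # Zero is a classification threshold: outputs are odd multiples of step/2.
    return step * (math.floor(x / step) + 0.5)

# mid_tread maps inputs near 0 to exactly 0; mid_riser never outputs 0.
```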
 
=== Dead-zone quantizers ===
A '''dead-zone quantizer''' is a type of mid-tread quantizer with symmetric behavior around 0. The region around the zero output value of such a quantizer is referred to as the ''dead zone'' or ''[[deadband]]''. The dead zone can sometimes serve the same purpose as a [[noise gate]] or [[squelch]] function. Especially for compression applications, the dead-zone may be given a different width than that for the other steps. For an otherwise-uniform quantizer, the dead-zone width can be set to any value <math>w</math> by using the forward quantization rule<ref>{{cite book| first1=Majid |last1=Rabbani |first2=Rajan L. |last2=Joshi |first3=Paul W. |last3=Jones |editor1-first=Peter |editor1-last=Schelkens |editor2-first=Athanassios |editor2-last=Skodras |editor3-first=Touradj |editor3-last=Ebrahimi |title=The JPEG 2000 Suite | url=https://archive.org/details/jpegsuitethewile00sche | url-access=limited |publisher=[[John Wiley & Sons]] |date=2009 |isbn=978-0-470-72147-6 |chapter=Section 1.2.3: Quantization, in Chapter 1: JPEG 2000 Core Coding System (Part 1) |pages=[https://archive.org/details/jpegsuitethewile00sche/page/n73 22]–24}}</ref><ref>{{cite book| first1=David S. |last1=Taubman |first2=Michael W. |last2=Marcellin |title=JPEG2000: Image Compression Fundamentals, Standards and Practice | url=https://archive.org/details/jpegimagecompres00taub | url-access=limited |publisher=[[Kluwer Academic Publishers]] |date=2002 |isbn=0-7923-7519-X |chapter=Chapter 3: Quantization |page=[https://archive.org/details/jpegimagecompres00taub/page/n126 107]}}</ref><ref name=SullivanIT/>
:<math>k = \sgn(x) \cdot \max\left(0, \left\lfloor \frac{\left| x \right|-w/2}{\Delta}+1\right\rfloor\right)</math>,
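A direct transcription of this forward rule (illustrative function name; the reconstruction stage is omitted here) might read:

```python
import math

def dead_zone_index(x, step, w):
    """Forward (classification) index k of a dead-zone quantizer, dead-zone width w."""
    sgn = (x > 0) - (x < 0)   # sign of x, with sgn(0) = 0
    return sgn * max(0, math.floor((abs(x) - w / 2) / step + 1))

# Inputs with |x| < w/2 fall in the dead zone and get index 0;
# setting w equal to step recovers a uniform mid-tread classification rule.
```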
Often the design of a quantizer involves supporting only a limited range of possible output values and performing clipping to limit the output to this range whenever the input exceeds the supported range. The error introduced by this clipping is referred to as ''overload'' distortion. Within the extreme limits of the supported range, the amount of spacing between the selectable output values of a quantizer is referred to as its ''granularity'', and the error introduced by this spacing is referred to as ''granular'' distortion. It is common for the design of a quantizer to involve determining the proper balance between granular distortion and overload distortion. For a given supported number of possible output values, reducing the average granular distortion may involve increasing the average overload distortion, and vice versa. A technique for controlling the amplitude of the signal (or, equivalently, the quantization step size <math>\Delta</math>) to achieve the appropriate balance is the use of ''[[automatic gain control]]'' (AGC). However, in some quantizer designs, the concepts of granular error and overload error may not apply (e.g., for a quantizer with a limited range of input data or with a countably infinite set of selectable output values).<ref name=GrayNeuhoff/>
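The interplay between the two error types can be illustrated with a clipped mid-tread quantizer (a sketch with illustrative names, assuming an odd number of output levels):

```python
import math

def clipped_quantizer(x, step, levels):
    """Mid-tread quantizer with an odd number of output levels and clipping."""
    k = math.floor(x / step + 0.5)      # classification index
    k_max = (levels - 1) // 2
    k = max(-k_max, min(k_max, k))      # clipping: the source of overload distortion
    return step * k

# Inside the supported range the error magnitude stays below step/2 (granular
# distortion); beyond it the error grows without bound (overload distortion).
```

Shrinking `step` for a fixed `levels` reduces granular error but narrows the supported range, increasing overload error: the balance the text describes.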
 
=== Rate–distortion quantizer design ===
A scalar quantizer, which performs a quantization operation, can ordinarily be decomposed into two stages:
;Classification
In some designs, rather than optimizing for a particular number of classification regions <math>M</math>, the quantizer design problem may include optimization of the value of <math>M</math> as well. For some probabilistic source models, the best performance may be achieved when <math>M</math> approaches infinity.
 
=== Neglecting the entropy constraint: Lloyd–Max quantization ===
 
In the above formulation, if the bit rate constraint is neglected by setting <math>\lambda</math> equal to 0, or equivalently if it is assumed that a fixed-length code (FLC) will be used to represent the quantized data instead of a [[variable-length code]] (or some other entropy coding technology such as arithmetic coding that is better than an FLC in the rate–distortion sense), the optimization problem reduces to minimization of distortion <math>D</math> alone.
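With <math>\lambda = 0</math> the criterion is distortion alone, and Lloyd's two alternating optimality conditions apply: thresholds midway between adjacent reconstruction levels, and each level at the centroid (mean) of its cell. A sketch of the iteration on empirical scalar data (illustrative code; the initialization and fixed iteration count are our choices) is:

```python
def lloyd_max(samples, levels, iters=50):
    """Lloyd's Method I on empirical data: alternate centroid and threshold steps."""
    lo, hi = min(samples), max(samples)
    # initial reconstruction levels spread uniformly over the data range
    y = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        # classification thresholds sit midway between adjacent reconstruction levels
        t = [(a + b) / 2 for a, b in zip(y, y[1:])]
        cells = [[] for _ in range(levels)]
        for s in samples:
            i = sum(s >= th for th in t)   # index of the cell containing s
            cells[i].append(s)
        # centroid condition: each level moves to the mean of its cell
        y = [sum(c) / len(c) if c else yi for c, yi in zip(cells, y)]
    return y
```

For uniformly distributed data this converges to a uniform quantizer, consistent with the result discussed below for uniform sources.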
 
[[Lloyd's algorithm|Lloyd's Method I algorithm]], originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the [[Linde–Buzo–Gray algorithm|Linde–Buzo–Gray (LBG)]] or [[k-means clustering|k-means]] classifier optimization methods. Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.<ref name=ChouLookabaughGray>{{cite journal | last=Chou | first=P.A. | last2=Lookabaugh | first2=T. | last3=Gray | first3=R.M. |author-link3=Robert M. Gray| title=Entropy-constrained vector quantization | journal=IEEE Transactions on Acoustics, Speech, and Signal Processing | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=37 | issue=1 | year=1989 | issn=0096-3518 | doi=10.1109/29.17498 | pages=31–42}}</ref>
 
=== Uniform quantization and the 6&nbsp;dB/bit approximation ===
 
The Lloyd–Max quantizer is actually a uniform quantizer when the input PDF is uniformly distributed over the range <math>[y_1-\Delta/2,~y_M+\Delta/2)</math>. However, for a source that does not have a uniform distribution, the minimum-distortion quantizer may not be a uniform quantizer. The analysis of a uniform quantizer applied to a uniformly distributed source can be summarized in what follows:
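The "6&nbsp;dB per bit" figure follows directly in this matched-uniform case: the signal power of a source uniform over a range of width <math>2A</math> is <math>(2A)^2/12</math>, the noise power is <math>\Delta^2/12</math> with <math>\Delta = 2A/2^b</math>, so the signal-to-quantization-noise ratio is <math>10\log_{10}(4^b) \approx 6.02\,b</math> dB. In code (an illustrative check, not from the article):

```python
import math

def sqnr_db(bits):
    """SQNR of a b-bit uniform quantizer with a matched uniformly distributed input."""
    # signal power (2A)**2 / 12 divided by noise power step**2 / 12,
    # with step = 2A / 2**bits, gives a ratio of 4**bits independent of A
    return 10 * math.log10(4 ** bits)

# sqnr_db(16) is about 96.3 dB, the figure commonly quoted for 16-bit audio.
```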
 
 
== In other fields ==
{{See also|Quantum noise|Quantum limit}}
Many physical quantities are actually quantized by physical entities. Examples of fields where this limitation applies include [[electronics]] (due to [[electrons]]), [[optics]] (due to [[photons]]), [[biology]] (due to [[DNA]]), [[physics]] (due to [[Planck limits]]) and [[chemistry]] (due to [[molecules]]).
 
{{Notelist}}
 
== References ==
{{Reflist}}
{{refbegin}}