When <math>x(t)</math> is a function with a Fourier transform <math>X(f)</math>:
<math display="block">X(f)\ \triangleq\ \int_{-\infty}^{\infty} x(t)\ e^{-i 2\pi f t}\ {\rm d}t,</math>
the [[Poisson summation formula]] indicates that the samples, <math>x(nT),</math> of <math>x(t)</math> are sufficient to create a [[periodic summation]] of <math>X(f).</math> The result is:
{{Equation box 1
|indent =
|title=
|equation = {{NumBlk|:|<math>X_s(f)\ \triangleq \sum_{k=-\infty}^{\infty} X\left(f - k f_s\right) = \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi n T f},</math>|{{EquationRef|Eq.1}}}}
|cellpadding= 6
|border=0
|border colour = white
|background colour=white}}
[[File:AliasedSpectrum.png|thumb|upright=1.8|right|<math>X(f)</math> (top blue) and <math>X_A(f)</math> (bottom blue) are continuous Fourier transforms of two {{em|different}} functions, <math>x(t)</math> and <math>x_A(t)</math> (not shown). When the functions are sampled at rate <math>f_s</math>, the images (green) are added to the original transforms (blue) when one examines the discrete-time Fourier transforms (DTFT) of the sequences. In this hypothetical example, the DTFTs are identical, which means {{em|the sampled sequences are identical}}, even though the original continuous pre-sampled functions are not. If these were audio signals, <math>x(t)</math> and <math>x_A(t)</math> might not sound the same. But their samples (taken at rate <math>f_s</math>) are identical and would lead to identical reproduced sounds; thus <math>x_A(t)</math> is an alias of <math>x(t)</math> at this sample rate.]]
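The periodicity in {{EquationNote|Eq.1}} can be checked numerically. The following is a minimal sketch (Python with NumPy; the sample rate and frequencies are arbitrary choices, not from any source): shifting a sinusoid's frequency by a multiple of <math>f_s</math> leaves its sample sequence unchanged, i.e. the two signals are aliases of each other.

<syntaxhighlight lang="python">
import numpy as np

fs = 10.0           # sample rate in Hz (arbitrary choice for this sketch)
T = 1.0 / fs
n = np.arange(50)   # sample indices

f0 = 3.0            # a baseband frequency, |f0| < fs/2
f1 = f0 + fs        # the same frequency shifted by one multiple of fs

x0 = np.cos(2 * np.pi * f0 * n * T)
x1 = np.cos(2 * np.pi * f1 * n * T)

# Eq.1 says X_s(f) is unchanged by shifting X(f) by any multiple of fs,
# so the two sample sequences must be identical:
print(np.allclose(x0, x1))  # True
</syntaxhighlight>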
When there is no overlap of the copies (also known as "images") of <math>X(f)</math>, the <math>k = 0</math> term of <math>X_s(f)</math> can be recovered by the product:
<math display="block">X(f) = H(f) \cdot X_s(f),</math> where:
<math display="block">H(f) = \mathrm{rect} \left(\frac{f}{f_s} \right) = \begin{cases}1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2}, \end{cases}</math>
where <math>\mathrm{rect}</math> is the [[rectangular function]]. Therefore:
:<math>X(f) = \mathrm{rect} \left(\frac{f}{f_s} \right) \cdot X_s(f)</math>
:::<math> = \mathrm{rect}(Tf)\cdot \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi n T f}</math> (from {{EquationNote|Eq.1}}, above).
:::<math>= \sum_{n=-\infty}^{\infty} x(nT)\cdot \underbrace{T\cdot \mathrm{rect}(Tf) \cdot e^{-i 2\pi n T f}}_{\mathcal{F}\left \{ \mathrm{sinc} \left( \frac{t - nT}{T} \right) \right \}}.</math> {{efn-ua|group=bottom|The sinc function follows from rows 202 and 102 of the [[Table of Fourier transforms|transform tables]].}}
The inverse transform of both sides produces the [[Whittaker–Shannon interpolation formula]]:
:<math>x(t) = \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{sinc}\left(\frac{t - nT}{T}\right),</math>
which shows how the samples, <math>x(nT)</math>, can be combined to reconstruct <math>x(t)</math>.
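The interpolation formula can be evaluated directly. This is a minimal numerical sketch (the test signal, sample rate, and truncation are arbitrary choices; the infinite sum is approximated by finitely many terms):

<syntaxhighlight lang="python">
import numpy as np

fs = 8.0                 # sample rate in Hz (arbitrary); band limit is fs/2 = 4 Hz
T = 1.0 / fs
n = np.arange(-64, 65)   # finitely many samples approximate the infinite sum

def x(t):
    # an arbitrary test signal band-limited well below fs/2
    return np.cos(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 3.1 * t)

t = np.linspace(-2.0, 2.0, 1000)

# Whittaker–Shannon interpolation: a sum of shifted, weighted sinc kernels.
# np.sinc(u) computes sin(pi*u)/(pi*u), the normalized sinc used in the formula.
x_hat = np.sum(x(n * T)[:, None] * np.sinc((t - n[:, None] * T) / T), axis=0)

print(np.max(np.abs(x_hat - x(t))))  # small; error is due only to truncation
</syntaxhighlight>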
Poisson shows that the Fourier series in {{EquationNote|Eq.1}} produces the periodic summation of <math>X(f)</math>, regardless of <math>f_s</math> and <math>B</math>. Shannon, however, only derives the series coefficients for the case <math>f_s=2B</math>. Virtually quoting Shannon's original paper:
:Let <math>X(\omega)</math> be the spectrum of <math>x(t)</math>. Then
::<math>x(t) = {1 \over 2\pi} \int_{-\infty}^{\infty} X(\omega) e^{i\omega t}\;{\rm d}\omega = {1 \over 2\pi} \int_{-2\pi B}^{2\pi B} X(\omega) e^{i\omega t}\;{\rm d}\omega,</math>
:because <math>X(\omega)</math> is assumed to be zero outside the band <math>\left|\tfrac{\omega}{2\pi}\right| < B.</math> If we let <math>t = \tfrac{n}{2B},</math> where <math>n</math> is any positive or negative integer, we obtain:
{{Equation box 1
|indent =
|title=
|equation = {{NumBlk|:|<math>x \left(\tfrac{n}{2B} \right) = {1 \over 2\pi} \int_{-2\pi B}^{2\pi B} X(\omega) e^{i\omega {n \over {2B}}}\;{\rm d}\omega.</math>|{{EquationRef|Eq.2}}}}
|cellpadding= 6
|border=0
|border colour = white
|background colour=white}}
:On the left are values of <math>x(t)</math> at the sampling points. The integral on the right will be recognized as essentially{{efn|group=proof|Multiplying both sides of {{EquationNote|Eq.2}} by <math>T = 1/2B</math> produces, on the left, the scaled sample values <math>(T\cdot x(nT))</math> in Poisson's formula ({{EquationNote|Eq.1}}), and, on the right, the actual formula for Fourier expansion coefficients.}} the ''n''<sup>th</sup> coefficient in a Fourier-series expansion of the function <math>X(\omega),</math> taking the interval <math>-B</math> to <math>B</math> as a fundamental period. This means that the values of the samples <math>x(n/2B)</math> determine the Fourier coefficients in the series expansion of <math>X(\omega),</math> and thus they determine <math>X(\omega)</math> itself, and with it the original function <math>x(t).</math>
Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via [[sinc function]]s, what we now call the [[Whittaker–Shannon interpolation formula]], as discussed above. He does not derive or prove the properties of the sinc function, as the Fourier pair relationship between the [[rectangular function|rect]] (the rectangular function) and sinc functions was well known by that time.<ref>{{cite book |last1=Campbell |first1=George |last2=Foster |first2=Ronald |title=Fourier Integrals for Practical Applications |date=1942 |publisher=Bell Telephone System Laboratories |___location=New York}}</ref>
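The two equations can also be checked together numerically. In the sketch below (an illustration only; the test spectrum is an arbitrary choice), the samples <math>x(n/2B)</math> are computed from a band-limited spectrum via the integral in {{EquationNote|Eq.2}}, and the Fourier series of {{EquationNote|Eq.1}} built from those samples reproduces the spectrum, showing that the samples determine <math>X(\omega)</math>:

<syntaxhighlight lang="python">
import numpy as np

B = 1.0                                  # band limit in Hz (arbitrary)
T = 1.0 / (2 * B)
omega = np.linspace(-2 * np.pi * B, 2 * np.pi * B, 4001)

X = 1.0 + np.cos(omega * T)              # an arbitrary spectrum, zero at the band edges

def sample(n):
    # Eq.2: x(n/2B) as an integral of X(omega) over the band
    return np.trapz(X * np.exp(1j * omega * n * T), omega) / (2 * np.pi)

# Rebuild X(omega) from the samples as a (truncated) Fourier series, per Eq.1:
ns = np.arange(-40, 41)
X_rebuilt = sum(T * sample(n) * np.exp(-1j * omega * n * T) for n in ns)

print(np.max(np.abs(X_rebuilt - X)))     # small: the samples determine X(omega)
</syntaxhighlight>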
==Critical frequency==
To illustrate the necessity of <math>f_s>2B,</math> consider the family of sinusoids generated by different values of <math>\theta</math> in this formula:
:<math>x(t) = \frac{\cos(2 \pi B t + \theta)}{\cos(\theta)}\ =\ \cos(2 \pi B t) - \tan(\theta)\cdot \sin(2 \pi B t), \quad -\pi/2 < \theta < \pi/2.</math>
[[File:CriticalFrequencyAliasing.svg|thumb|right|A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and –1. That is, they all are aliases of each other, even though their frequency is not above half the sample rate.]]
With <math>f_s=2B</math> or equivalently <math>T=1/2B,</math> the samples are given by:
:<math>x(nT) = \cos(\pi n) - \tan(\theta)\cdot \underbrace{\sin(\pi n)}_{0} = (-1)^n,</math>
{{em|regardless of the value of <math>\theta.</math>}}
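This can be confirmed with a few lines of code (a minimal sketch; the values of <math>\theta</math> are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

B = 1.0           # sinusoid frequency equal to half the sample rate
fs = 2 * B
n = np.arange(12)

# Several members of the family, distinguished only by theta:
for theta in (0.0, 0.4, 1.0):
    x_n = np.cos(2 * np.pi * B * n / fs + theta) / np.cos(theta)
    print(np.round(x_n, 12))  # every theta yields the same +1, -1, +1, ... samples
</syntaxhighlight>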
==Sampling of non-baseband signals==
A sufficient no-loss condition for sampling signals that do not have baseband components involves the width of the nonzero frequency interval, as opposed to its highest frequency component.
For example, in order to sample [[FM broadcasting|FM radio]] signals in the frequency range of 100–102 [[megahertz|MHz]], it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval).
A bandpass condition is that <math>X(f) = 0,</math> for all nonnegative <math>f</math> outside the open band of frequencies:
:<math>\left(\frac{N}2 f_\mathrm{s}, \frac{N+1}2 f_\mathrm{s}\right),</math>
for some nonnegative integer <math>N</math>. This formulation includes the normal baseband condition as the case <math>N=0.</math>
The corresponding interpolation function is the impulse response of an ideal brick-wall [[bandpass filter]] (as opposed to the ideal [[brick-wall filter|brick-wall]] [[lowpass filter]] used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:
:<math>(N+1)\,\operatorname{sinc} \left(\frac{(N+1)t}{T}\right) - N\,\operatorname{sinc}\left(\frac{Nt}{T}\right).</math>
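The arithmetic of the bandpass condition for the FM example above can be checked directly (a sketch; frequencies in MHz):

<syntaxhighlight lang="python">
import numpy as np

f_lo, f_hi = 100.0, 102.0          # FM example band, in MHz
fs = 2 * (f_hi - f_lo)             # 4 MHz: twice the band *width*

# The band must fit inside [N/2*fs, (N+1)/2*fs] for some nonnegative integer N:
N = int(np.floor(2 * f_lo / fs))   # candidate N; here N = 50
print(N, N / 2 * fs <= f_lo and f_hi <= (N + 1) / 2 * fs)  # 50 True

# Any tone in the band folds to a distinct baseband frequency, so no
# information is lost when sampling at only 4 MHz:
f_tone = 101.3
print(abs(f_tone - round(f_tone / fs) * fs))  # 1.3 MHz
</syntaxhighlight>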
==Sampling below the Nyquist rate under additional restrictions==
The Nyquist–Shannon sampling theorem provides a [[necessary and sufficient condition|sufficient condition]] for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the [[Whittaker–Shannon interpolation formula]], the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a [[necessary and sufficient condition|necessary condition]].
A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of [[compressed sensing]], which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some ___domain. As an example, compressed sensing deals with signals that may have a low over-all bandwidth (say, the ''effective'' bandwidth <math>EB</math>), but whose nonzero frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus <math>2B.</math> Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than <math>2EB.</math> With this approach, reconstruction is no longer given by a formula, but instead by the solution to a [[convex optimization]] program which requires well-studied but nonlinear methods.
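The flavor of such a reconstruction can be conveyed with a toy sketch (an illustration only, not an algorithm from the cited literature; it uses orthogonal matching pursuit, one simple sparse-recovery method, and all sizes are arbitrary). A length-256 signal with only 4 active frequency bins is recovered from 48 random time samples, far fewer than the 256 Nyquist-rate samples:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
M, K, m = 256, 4, 48                        # length, sparsity, sample count (arbitrary)

# A signal that is K-sparse in the Fourier ___domain:
support = rng.choice(M, size=K, replace=False)
spectrum = np.zeros(M, complex)
spectrum[support] = rng.normal(size=K) + 1j * rng.normal(size=K)
signal = np.fft.ifft(spectrum)

# Observe only m random time samples (far fewer than M):
rows = np.sort(rng.choice(M, size=m, replace=False))
y = signal[rows]
A = np.fft.ifft(np.eye(M), axis=0)[rows]    # measurement matrix: sampled Fourier atoms

# Orthogonal matching pursuit: greedily select the atom most correlated
# with the residual, then re-fit the selected atoms by least squares.
sel, residual = [], y.copy()
for _ in range(K):
    sel.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    residual = y - A[:, sel] @ coef

print(sorted(sel) == sorted(support.tolist()))  # True: sparse support recovered
</syntaxhighlight>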
Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal [[lossy compression]].<ref>{{cite journal|last1=Kipnis|first1=Alon|last2=Goldsmith|first2=Andrea J.|last3=Eldar|first3=Yonina C.|last4=Weissman|first4=Tsachy|title=Distortion rate function of sub-Nyquist sampled Gaussian sources|journal=IEEE Transactions on Information Theory|date=January 2016|volume=62|pages=401–429|doi=10.1109/tit.2015.2485271|arxiv=1405.5329|s2cid=47085927 }}</ref> This setting is relevant in cases where the joint effect of sampling and [[Quantization (signal processing)|quantization]] is to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing a [[random signal]]. For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal [[Quantization (signal processing)|quantization]].<ref>{{cite journal |last1=Kipnis |first1=Alon |last2=Eldar |first2=Yonina |last3=Goldsmith |first3=Andrea |title=Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits |journal=IEEE Signal Processing Magazine |date=26 April 2018 |volume=35 |issue=3 |pages=16–39 |doi=10.1109/MSP.2017.2774249 |arxiv=1801.06718 |bibcode=2018ISPM...35c..16K |s2cid=13693437 }}</ref>
In 1948 and 1949, [[Claude Shannon|Claude E. Shannon]] published the two articles in which he founded information theory. In the 1948 paper the sampling theorem is formulated as "Theorem 13": Let <math>f(t)</math> contain no frequencies over <math>W.</math> Then
<math display="block">f(t) = \sum_{n=-\infty}^\infty X_n \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)},</math>
where <math>X_n = f\left(\frac n {2W} \right).</math>
It was not until these articles were published that the theorem known as "Shannon's sampling theorem" became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.{{efn-ua|group=bottom|[[#refShannon49|Shannon 1949]], p. 448.}} A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory".
Exactly what "Nyquist's result" they are referring to remains mysterious.
When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering,<ref name="EM" /> "he referred to the critical sampling interval <math>T = \frac 1 {2W}</math> as the ''Nyquist interval'' corresponding to the band <math>W,</math> in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy".
Similarly, Nyquist's name was attached to ''[[Nyquist rate]]'' in 1953 by [[Harold Stephen Black|Harold S. Black]]: