{{Short description|Techniques and methods in signal processing}}
{{See also|Time–frequency representation}}
In [[signal processing]], '''time–frequency analysis''' comprises those techniques that study a signal in both the time and frequency domains ''simultaneously'', using various time–frequency representations.
The mathematical motivation for this study is that functions and their transform representations are tightly connected, and they can be understood better by studying them jointly, rather than separately.
The practical motivation for time–frequency analysis is that classical [[Fourier analysis]] assumes that signals are infinite in time or periodic, while many signals in practice are of short duration, and change substantially over their duration. For example, traditional musical instruments do not produce infinite duration sinusoids, but instead begin with an attack, then gradually decay. This is poorly represented by traditional methods, which motivates time–frequency analysis.
One of the most basic forms of time–frequency analysis is the [[short-time Fourier transform]] (STFT), but more sophisticated techniques have been developed, notably [[wavelet]]s and [[least-squares spectral analysis]] methods for unevenly spaced data.
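As a minimal illustration of the STFT idea, the sketch below (plain NumPy; the sampling rate, tone frequencies, and window parameters are illustrative assumptions, not values from the article) computes windowed Fourier magnitudes of a signal whose frequency changes halfway through:

```python
import numpy as np

fs = 100  # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
# 5 Hz tone for the first two seconds, 15 Hz tone afterwards
x = np.where(t < 2, np.cos(2 * np.pi * 5 * t), np.cos(2 * np.pi * 15 * t))

def stft(x, frame_len=100, hop=50):
    """Magnitude STFT via a sliding Hann window + FFT."""
    w = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * w
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_freqs)

S = stft(x)
freqs = np.fft.rfftfreq(100, 1 / fs)
early = freqs[np.argmax(S[0])]   # dominant frequency in the first frame
late = freqs[np.argmax(S[-1])]   # dominant frequency in the last frame
```

The dominant frequency of the first frame differs from that of the last, which is exactly the time-localized information that a single Fourier transform of the whole signal does not expose.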
==Motivation==
<!--
Image with unknown copyright status removed: [[Image:ft_vs_gt.jpg]] -->
[[File:X1(t).jpg|thumb]]
Once such a representation has been generated other techniques in time–frequency analysis may then be applied to the signal in order to extract information from the signal, to separate the signal from noise or interfering signals, etc.
#'''Lower computational complexity''' to ensure the time needed to represent and process a signal on a time–frequency plane allows real-time implementations.
Below is a brief comparison of some selected time–frequency distribution functions.
{| class="wikitable"
To analyze the signals well, choosing an appropriate time–frequency distribution function is important. Which time–frequency distribution function should be used depends on the application being considered, as shown by reviewing a list of applications.<ref>A. Papandreou-Suppappola, Applications in Time–Frequency Signal Processing (CRC Press, Boca Raton, Fla., 2002)</ref> The high clarity of the Wigner distribution function (WDF) obtained for some signals is due to the auto-correlation function inherent in its formulation; however, the latter also causes the cross-term problem. Therefore, if we want to analyze a single-component signal, using the WDF may be the best approach; if the signal is composed of multiple components, some other methods like the Gabor transform, Gabor–Wigner distribution or Modified B-Distribution functions may be better choices.
As an illustration, magnitudes from non-localized Fourier analysis cannot distinguish the signals:
: <math>x_1 (t)=\begin{cases}
\cos (\pi t); & t < 10 \\
\cos (3 \pi t); & 10 \le t < 20 \\
\cos (2 \pi t); & t > 20
\end{cases} \qquad
x_2 (t)=\begin{cases}
\cos (\pi t); & t < 10 \\
\cos (2 \pi t); & 10 \le t < 20 \\
\cos (3 \pi t); & t > 20
\end{cases}</math>
[[File:X1-x2.jpg|thumb]]
But time–frequency analysis can.
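This claim can be checked numerically. The sketch below (NumPy; the sampling rate and component frequencies are illustrative choices consistent with a piecewise signal of this kind) builds two signals containing the same three sinusoidal components in different temporal order:

```python
import numpy as np

fs = 50                       # sampling rate in Hz (illustrative)
t = np.arange(0, 30, 1 / fs)
seg1, seg2, seg3 = t[t < 10], t[(t >= 10) & (t < 20)], t[t >= 20]

# Same three components (0.5 Hz, 1.5 Hz, 1 Hz), visited in different order
x1 = np.concatenate([np.cos(np.pi * seg1), np.cos(3 * np.pi * seg2),
                     np.cos(2 * np.pi * seg3)])
x2 = np.concatenate([np.cos(np.pi * seg1), np.cos(2 * np.pi * seg2),
                     np.cos(3 * np.pi * seg3)])

# Global Fourier magnitudes peak at the same three frequencies for both
f = np.fft.rfftfreq(len(t), 1 / fs)
peaks1 = sorted(f[np.argsort(np.abs(np.fft.rfft(x1)))[-3:]])
peaks2 = sorted(f[np.argsort(np.abs(np.fft.rfft(x2)))[-3:]])

# ...but a time-localized spectrum (one frame over 24 s - 28 s) differs
w = slice(24 * fs, 28 * fs)
fw = np.fft.rfftfreq(4 * fs, 1 / fs)
loc1 = fw[np.argmax(np.abs(np.fft.rfft(x1[w])))]  # x1 is at 1 Hz there
loc2 = fw[np.argmax(np.abs(np.fft.rfft(x2[w])))]  # x2 is at 1.5 Hz there
```

Both global magnitude spectra peak at the same three frequencies, while a single windowed frame near the end of the signals already tells them apart.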
== TF analysis and random processes<ref>{{Cite book |last=Ding |first=Jian-Jiun |title=Time frequency analysis and wavelet transform class notes |publisher=Graduate Institute of Communication Engineering, National Taiwan University (NTU) |year=2022 |___location=Taipei, Taiwan}}</ref> ==
For a random process <math>x(t)</math>, the value at any given time cannot be specified explicitly; instead, it is described by a probability distribution.
=== General random processes ===
* Auto-covariance function (ACF) <math>R_x(t,\tau)</math>
:<math>R_x(t,\tau) = E[x(t+\tau/2)x^*(t-\tau/2)]</math>
:Usually, we assume that <math>E[x(t)] = 0 </math> for all <math>t</math>. Writing the expectation explicitly,
:<math>R_x(t,\tau)=\iint x(t+\tau/2,\xi_1)x^*(t-\tau/2,\xi_2)P(\xi_1,\xi_2)d\xi_1d\xi_2</math>
:(alternative definition of the auto-covariance function)
:<math>\hat{R}_x(t,\tau)=E[x(t)x(t+\tau)]</math>
* Power spectral density (PSD) <math>S_x(t,f)</math>
:<math>S_x(t,f) = \int_{-\infty}^{\infty} R_x(t,\tau)e^{-j2\pi f\tau}d\tau</math>
* Relation between the [[Wigner distribution function|WDF (Wigner Distribution Function)]] and the PSD
:<math>E[W_x(t,f)] = \int_{-\infty}^{\infty} E[x(t+\tau/2)x^*(t-\tau/2)]\cdot e^{-j2\pi f\tau}\cdot d\tau</math>
:::::<math>= \int_{-\infty}^{\infty} R_x(t,\tau)\cdot e^{-j2\pi f\tau}\cdot d\tau</math><math>= S_x(t,f)</math>
* Relation between the [[ambiguity function]] and the ACF
:<math>E[A_X(\eta,\tau)] = \int_{-\infty}^{\infty} E[x(t+\tau/2)x^*(t-\tau/2)]e^{-j2\pi t\eta}dt</math>
:::::<math>= \int_{-\infty}^{\infty} R_x(t,\tau)e^{-j2\pi t\eta}dt</math>
=== Stationary random processes ===
* [[Stationary process|Stationary random process]]: the statistical properties do not change with <math>t</math>. Its auto-covariance function satisfies
:<math>R_x(t_1,\tau) = R_x(t_2,\tau) = R_x(\tau)</math> for any <math>t</math>. Therefore,
:<math>R_x(\tau) = E[x(\tau/2)x^*(-\tau/2)]</math>
:<math>=\iint x(\tau/2,\xi_1)x^*(-\tau/2,\xi_2)P(\xi_1,\xi_2)d\xi_1d\xi_2</math>
* Its PSD is
:<math>S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)e^{-j2\pi f\tau}d\tau</math>
* White noise:
:<math>S_x(f) = \sigma</math>, where <math>\sigma</math> is some constant.[[File:Stationary random process's WDF and AF.jpg|right|frameless|440x440px]]
* When <math>x(t)</math> is stationary,
:<math>E[W_x(t,f)] = S_x(f)</math> (invariant with <math>t</math>)
:<math>E[A_x(\eta,\tau)] = \int_{-\infty}^{\infty} R_x(\tau)\cdot e^{-j2\pi t\eta}\cdot dt = R_x(\tau)\int_{-\infty}^{\infty} e^{-j2\pi t\eta}\cdot dt = R_x(\tau)\delta(\eta)</math> (nonzero only when <math>\eta = 0</math>)
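For white noise, the flat PSD <math>S_x(f)=\sigma</math> corresponds to an auto-covariance concentrated at <math>\tau = 0</math>. This can be checked with a small Monte Carlo experiment (NumPy; the noise power, record length, and trial count below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0          # noise power = constant PSD level (illustrative)
trials, n = 4000, 64

# Realizations of discrete white noise with variance sigma
x = rng.normal(0.0, np.sqrt(sigma), size=(trials, n))

# Sample estimate of R_x(tau) = E[x(t + tau) x(t)] at a fixed time t0
t0 = 20
R = {tau: np.mean(x[:, t0 + tau] * x[:, t0]) for tau in range(5)}
```

The sample estimate is close to <math>\sigma</math> at lag 0 and close to zero at every other lag, consistent with a constant PSD.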
=== Additive white noise ===
* For additive white noise <math>n(t)</math>,
:<math>E[W_n(t,f)] = \sigma</math>
:<math>E[A_n(\eta,\tau)] = \sigma\delta(\tau)\delta(\eta)</math>
* Filter Design for a signal in additive white noise
[[File:Filter design for white noise.jpg|left|thumb|440x440px]]
:<math>E_x</math>: energy of the signal
:<math>A</math>: area of the time–frequency distribution of the signal
The PSD of the white noise is <math>S_n(f) = \sigma</math>. If the filter passes only the time–frequency region occupied by the signal, the output SNR is approximately
:<math>\mathrm{SNR} \approx 10\log_{10}\frac{E_x}{\iint\limits_{(t,f)\in\text{signal part}} S_n(t,f)\,dt\,df} = 10\log_{10}\frac{E_x}{\sigma A}</math>
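With illustrative numbers (assumptions, not values from the article), the approximation reduces to simple arithmetic:

```python
import math

E_x = 1.0      # signal energy (assumed unit-energy signal)
sigma = 0.01   # white-noise PSD level (assumed)
A = 10.0       # area of the signal's time-frequency support (assumed)

# Noise energy passed by an ideal TF filter matched to the signal's support
noise_energy = sigma * A
snr_db = 10 * math.log10(E_x / noise_energy)
```

Here the filter admits noise energy <math>\sigma A = 0.1</math>, giving an SNR of 10 dB.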
=== Non-stationary random processes ===
* If <math>E[W_x(t,f)]</math> varies with <math>t</math> and <math>E[A_x(\eta,\tau)]</math> is nonzero for some <math>\eta \neq 0</math>, then <math>x(t)</math> is a non-stationary random process.
* If
*# <math>h(t) = x_1(t)+x_2(t)+x_3(t)+\cdots+x_k(t)</math>
*# <math>x_n(t)</math>'s have zero mean for all <math>t</math>'s
*# <math>x_n(t)</math>'s are mutually independent for all <math>t</math>'s and <math>\tau</math>'s
:then, for <math>m \neq n</math>,
::<math>E[x_m(t+\tau/2)x_n^*(t-\tau/2)] = E[x_m(t+\tau/2)]E[x_n^*(t-\tau/2)] = 0</math>
:and therefore
::<math>E[W_h(t,f)] = \sum_{n=1}^k E[W_{x_n}(t,f)]</math>
::<math>E[A_h(\eta,\tau)] = \sum_{n=1}^k E[A_{x_n}(\eta,\tau)]</math>
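The additivity of the expected distributions can be checked numerically. The sketch below uses a one-frame spectrogram as the time–frequency distribution (a simplifying assumption; the derivation above is for the WDF and AF, but the same expectation argument applies) and independent random-phase sinusoids as zero-mean processes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 128, 3000
t = np.arange(n)

def spec(x):
    """One-frame spectrogram: Hann-windowed squared FFT magnitude."""
    return np.abs(np.fft.rfft(x * np.hanning(n))) ** 2

# Two independent zero-mean processes: sinusoids with uniform random phase
S1 = S2 = S12 = 0.0
for _ in range(trials):
    x1 = np.cos(2 * np.pi * 10 * t / n + rng.uniform(0, 2 * np.pi))
    x2 = np.cos(2 * np.pi * 30 * t / n + rng.uniform(0, 2 * np.pi))
    S1 += spec(x1); S2 += spec(x2); S12 += spec(x1 + x2)

# In expectation the cross term averages out: E[S12] ~ E[S1] + E[S2]
err = np.max(np.abs(S12 - (S1 + S2))) / trials
scale = np.max(S1 + S2) / trials
```

Averaged over many realizations, the cross term is negligible, so the expected distribution of the sum is close to the sum of the individual expected distributions.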
=== Short-time Fourier transform ===
* Random process for [[Short-time Fourier transform|STFT (Short Time Fourier Transform)]]
For the STFT of a random process to have a nonzero mean, <math>E[x(t)]\neq 0</math> must be satisfied, since
:<math>E[X(t,f)] = E\left[\int_{t-B}^{t+B} x(\tau)w(t-\tau)e^{-j2\pi f\tau}d\tau\right] = \int_{t-B}^{t+B} E[x(\tau)]w(t-\tau)e^{-j2\pi f\tau}d\tau,</math>
which vanishes for a zero-mean random process: <math>E[X(t,f)] = 0</math>.
* Decomposition by the AF and the FRFT: any non-stationary random process can be expressed as a summation of fractional Fourier transforms (or chirp multiplications) of stationary random processes.
==Applications==
The following applications need not only the time–frequency distribution functions but also some operations on the signal. The [[linear canonical transform]] (LCT) is particularly helpful here. By LCTs, the shape and ___location of a signal's distribution on the time–frequency plane can be brought into an arbitrary desired form: LCTs can shift the time–frequency distribution to any ___location, dilate it in the horizontal and vertical directions without changing its area on the plane, shear (or twist) it, and rotate it ([[fractional Fourier transform]]). This makes time–frequency distributions more flexible to analyze and apply. Time–frequency analysis has been applied in many areas, such as disease detection from biomedical signals and images, vital-sign extraction from physiological signals, brain–computer interfaces based on brain signals, machinery fault diagnosis from vibration signals, and interference mitigation in spread-spectrum communication systems.<ref>{{cite book |last1=Pachori |first1=Ram Bilas |title=Time-Frequency Analysis Techniques and Their Applications |publisher=CRC Press|url=https://www.routledge.com/Time-Frequency-Analysis-Techniques-and-their-Applications/Pachori/p/book/9781032435763?srsltid=AfmBOorjwRC4cJ-ABXieBsYLfFSmwQdGQ3GHNvL_O5pGnBMchjM8x7S8}}</ref><ref>{{cite book |last1=Boashash |first1=Boualem |title=Time-Frequency Signal Analysis and Processing: A Comprehensive Reference |publisher=Elsevier|url=https://www.sciencedirect.com/book/9780123984999/time-frequency-signal-analysis-and-processing}}</ref>
===Instantaneous frequency estimation===
The definition of [[instantaneous frequency]] is the time rate of change of phase, or
: <math>\frac{1}{2 \pi} \frac{d}{dt} \phi (t), </math>
where <math>\phi (t)</math> is the [[instantaneous phase]] of a signal. If the time–frequency representation is clear enough, the instantaneous frequency can be read directly from the time–frequency plane. Because high clarity is critical, the WDF is often used for this purpose.
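A common way to estimate instantaneous frequency numerically is to differentiate the phase of the analytic signal. The sketch below uses NumPy with an FFT-based Hilbert transform; the chirp parameters are illustrative assumptions:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
# Linear chirp: phi(t) = 2*pi*(50 t + 50 t^2), so f_inst(t) = 50 + 100 t
x = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))

# Analytic signal via an FFT-based Hilbert transform
n = len(x)
h = np.zeros(n)
h[0] = 1; h[1:n // 2] = 2; h[n // 2] = 1  # keep positive frequencies doubled
z = np.fft.ifft(np.fft.fft(x) * h)

# Instantaneous frequency = phase derivative / (2*pi)
phase = np.unwrap(np.angle(z))
f_inst = np.diff(phase) * fs / (2 * np.pi)

mid = f_inst[n // 2]  # near t = 0.5 s, should be about 50 + 100*0.5 = 100 Hz
```

For a linear chirp the estimate recovers the linearly increasing frequency, about 100 Hz at mid-signal in this example.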
===TF filtering and signal decomposition===
The goal of filter design is to remove the undesired component of a signal. Conventionally, we can just filter in the time ___domain or in the frequency ___domain individually as shown below.
[[Image:filter tf.jpg]]
The filtering methods mentioned above cannot work well for signals whose components overlap in the time ___domain or in the frequency ___domain. By using the time–frequency distribution function, we can filter in the Euclidean time–frequency ___domain or in a fractional ___domain by employing the [[fractional Fourier transform]]. An example is shown below.
[[Image:filter fractional.jpg]]
Filter design in time–frequency analysis always deals with signals composed of multiple components, so one cannot use the WDF due to cross-terms. The Gabor transform, Gabor–Wigner distribution function, or Cohen's class distribution functions may be better choices.
The concept of signal decomposition relates to the need to separate one component from the others in a signal; this can be achieved through a filtering operation, which requires a filter design stage. Such filtering is traditionally done in the time ___domain or in the frequency ___domain; however, this may not be possible for non-stationary multicomponent signals, since their components can overlap in both the time ___domain and the frequency ___domain. In that case, the only way to achieve component separation, and therefore a signal decomposition, is to implement a time–frequency filter.
===Sampling theory===
By the [[Nyquist–Shannon sampling theorem]], we can conclude that the minimum number of sampling points without [[aliasing]] is equivalent to the area of the time–frequency distribution of a signal. (This is actually just an approximation, because the TF area of any signal is infinite.) Below is an example before and after we combine the sampling theory with the time–frequency distribution:
[[Image:sampling.jpg]]
It is noticeable that the number of sampling points decreases after we apply the time–frequency distribution.
When we use the WDF, there may be the cross-term problem (also called interference). On the other hand, using the [[Gabor transform]] improves the clarity and readability of the representation, and hence its interpretation and application to practical problems.
Consequently, when the signal we intend to sample consists of a single component, we use the WDF; however, if the signal consists of more than one component, the Gabor transform, the Gabor–Wigner distribution function, or other reduced-interference TFDs may achieve better results.
The [[Balian–Low theorem]] formalizes this, and provides a bound on the minimum number of time–frequency samples needed.
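The area rule can be made concrete with a toy calculation (the duration and bandwidth below are assumptions for illustration, not values from the article):

```python
# A signal confined to duration T and one-sided bandwidth B occupies a
# time-frequency area of roughly T * 2B (counting negative frequencies),
# which matches the Nyquist count of minimum samples.
T = 2.0          # signal duration in seconds (assumed)
B = 10.0         # one-sided bandwidth in Hz (assumed)

nyquist_rate = 2 * B           # minimum sampling rate, samples per second
n_samples = nyquist_rate * T   # minimum number of samples without aliasing
tf_area = T * (2 * B)          # area of the occupied time-frequency region
```

The minimum sample count and the time–frequency area coincide, which is the approximation stated above.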
===Modulation and multiplexing===
As illustrated in the example above, using the WDF is not advisable, since the serious cross-term problem makes it difficult to multiplex and modulate.
===Electromagnetic wave propagation===
===Optics, acoustics, and biomedicine===
[[Light]] is an electromagnetic wave, so time–frequency analysis applies to optics in the same way as for general electromagnetic wave propagation.
Similarly, acoustic signals have frequency components that undergo abrupt variations in time, and hence are not well represented by a frequency analysis covering their entire duration.
Since acoustic signals carry speech between a human sender and receiver, their transmission without delay in technical communication systems is crucial; this makes simpler TFDs, such as the Gabor transform, suitable for analyzing these signals in real time thanks to their reduced computational complexity.
If frequency analysis speed is not a limitation, a detailed feature comparison with well defined criteria should be made before selecting a particular TFD. Another approach is to define a signal dependent TFD that is adapted to the data.
In biomedicine, one can use time–frequency distributions to analyze signals such as the [[electromyography|electromyogram]] (EMG), [[electroencephalography|electroencephalogram]] (EEG), [[electrocardiogram]] (ECG), or [[otoacoustic emissions]] (OAEs).
== See also ==
* [[Multiresolution analysis]]
* [[Spectral density estimation]]
* [[Time–frequency analysis for music signals]]
* [[Wavelet analysis]]
== References ==