The '''method of reassignment''' is a technique for sharpening a [[time-frequency representation]] (e.g. [[spectrogram]] or the [[short-time Fourier transform]]) by mapping the data to time-frequency coordinates that are nearer to the true [[Support (mathematics)|region of support]] of the analyzed signal. The method has been independently introduced by several parties under various names, including ''method of reassignment'', ''remapping'', ''time-frequency reassignment'', and ''modified moving-window method''.<ref name="hainsworth">{{Cite thesis |type=PhD |chapter=Chapter 3: Reassignment methods |title=Techniques for the Automated Analysis of Musical Audio |last=Hainsworth |first=Stephen |year=2003 |publisher=University of Cambridge |citeseerx=10.1.1.5.9579 }}</ref>
== Introduction ==
[[Image:Reassigned spectrogral surface of bass pluck.png|thumb|400px|Reassigned spectral surface for the onset of an acoustic bass tone having a sharp pluck and a fundamental frequency of approximately 73.4 Hz. Sharp spectral ridges representing the harmonics are evident, as is the abrupt onset of the tone. The spectrogram was computed using a 65.7 ms Kaiser window with a shaping parameter of 12.]]
Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-___domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time.
In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocusing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal.<ref name="improving" />
== The spectrogram as a time-frequency representation ==
{{main|Spectrogram}}
One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis.<ref name="improving"/>
As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window, and greater concentration in one ___domain is accompanied by greater smearing in the other.<ref name="improving"/>
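This tradeoff can be made concrete by measuring the spread of an analysis window in each ___domain. The short Python sketch below (the sample rate, window lengths, and root-mean-square spread measures are illustrative choices, not values taken from the cited sources) computes the RMS duration and RMS bandwidth of Kaiser windows of several lengths: lengthening the window reduces its bandwidth and increases its duration, and vice versa.
<syntaxhighlight lang="python">
import numpy as np

def rms_duration_bandwidth(window, fs):
    """Root-mean-square duration (s) and bandwidth (Hz) of an analysis window."""
    n = np.arange(len(window))
    centre = np.sum(n * window**2) / np.sum(window**2)
    t = (n - centre) / fs
    duration = np.sqrt(np.sum(t**2 * window**2) / np.sum(window**2))

    spectrum = np.abs(np.fft.rfft(window, 8 * len(window)))**2   # zero-padded spectrum
    f = np.fft.rfftfreq(8 * len(window), d=1.0 / fs)
    bandwidth = np.sqrt(np.sum(f**2 * spectrum) / np.sum(spectrum))
    return duration, bandwidth

fs = 8000.0
for ms in (10, 40, 160):                        # three window lengths
    w = np.kaiser(int(fs * ms / 1000), 12.0)    # Kaiser window, shaping parameter 12
    dur, bw = rms_duration_bandwidth(w, fs)
    print(f"{ms:4d} ms window: RMS duration {dur * 1e3:6.2f} ms, RMS bandwidth {bw:7.1f} Hz")
</syntaxhighlight>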
A time-frequency representation having improved resolution, relative to the spectrogram, is the [[Wigner–Ville distribution]], which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this
distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals.<ref name="improving"/>
[[Cohen's class distribution function|Cohen's class]] of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce the sensitivity of the distribution to noise and suppress cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy.<ref name="improving"/>
The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class.<ref name="improving">
{{cite journal |author1=F. Auger |author2=P. Flandrin |name-list-style=amp |date=May 1995 |title=Improving the readability of time-frequency and time-scale representations by the reassignment method |journal=IEEE Transactions on Signal Processing |volume=43 |issue=5 |pages=1068–1089 |doi=10.1109/78.382394 |bibcode=1995ITSP...43.1068A |citeseerx=10.1.1.646.794 |s2cid=6336685 }}</ref><ref>P. Flandrin, F. Auger, and E. Chassande-Mottin, ''Time-frequency reassignment: From principles to algorithms'', in ''Applications in Time-Frequency Signal Processing'' (A. Papandreou-Suppappola, ed.), CRC Press, 2003.</ref>
== The method of reassignment ==
Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of ''Modified Moving Window Method''.<ref name=Kodera>{{cite journal |author1=K. Kodera |author2=R. Gendrin |author3=C. de Villedary |name-list-style=amp |date=Feb 1978 |title=Analysis of time-varying signals with small BT values |journal=IEEE Transactions on Acoustics, Speech, and Signal Processing |volume=26 |issue=1 |pages=64–76 |doi=10.1109/TASSP.1978.1163047 }}</ref> Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better reflects the distribution of energy in the analyzed signal.<ref name=Kodera/>{{rp|67}}
In the classical moving window method, a time-___domain signal, <math>x(t)</math>, is decomposed into a set of coefficients, <math>\epsilon( t, \omega )</math>, based on a set of elementary signals, <math>h_{\omega}(t)</math>, defined<ref name=Kodera/>{{rp|73}}<!-- far from the same notation as Kodera p73, but the same thing. -->
:<math>h_{\omega}(t) = h(t) e^{j \omega t} </math>
where <math>h(t)</math> is a real-valued, lowpass kernel function, like the window function used in the short-time Fourier transform. The coefficients are obtained by projecting the signal onto the elementary signals translated to each analysis time <math>t</math>:
:<math>\begin{align}
\epsilon( t, \omega ) &= \int x(\tau)\, h^{*}_{\omega}( \tau - t )\, d\tau \\
&= \int x(\tau)\, h( \tau - t )\, e^{ -j \omega ( \tau - t )}\, d\tau \\
&= X_{t}(\omega) = M_{t}(\omega)\, e^{ j \phi_{t}(\omega)}
\end{align}</math>
where <math>M_{t}(\omega)</math> is the magnitude, and <math>\phi_{t}(\omega)</math> the phase, of <math>X_{t}(\omega)</math>, the Fourier transform of the signal <math>x(t)</math> shifted in time by <math>t</math> and windowed by <math>h(t)</math>.<ref name=Fitz09>{{cite arXiv |last1=Fitz |first1=Kelly R. |last2=Fulop |first2=Sean A. |title=A Unified Theory of Time-Frequency Reassignment |date=2009 |class=cs.SD |eprint=0903.3080 }} – this preprint manuscript is written by a previous contributor to this Wikipedia article; see [[Special:Diff/239438445|their contribution]].</ref>{{rp|4}}
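For a sampled signal, one column of these coefficients can be computed with a single fast Fourier transform of a windowed segment. The short Python sketch below illustrates this for one analysis time; the sample rate, test signal, window, and variable names are illustrative choices rather than anything specified by the cited sources.
<syntaxhighlight lang="python">
import numpy as np

fs = 8000.0                                    # sample rate (Hz), illustrative
x = np.cos(2 * np.pi * 440.0 * np.arange(int(fs)) / fs)   # test signal: a 440 Hz tone

N = 512
h = np.kaiser(N, 12.0)                         # real-valued lowpass window h(t)
n0 = 2000                                      # first sample of the analysed segment

seg = x[n0:n0 + N] * h                         # shift the signal and apply the window
X_t = np.fft.rfft(np.roll(seg, -(N // 2)))     # X_t(omega), phase referenced to the frame centre
M_t = np.abs(X_t)                              # magnitude M_t(omega)
phi_t = np.angle(X_t)                          # phase     phi_t(omega)
omega = 2 * np.pi * np.fft.rfftfreq(N, d=1.0 / fs)   # analysis frequencies (rad/s)
</syntaxhighlight>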
The signal can be reconstructed from the moving window coefficients (assuming an analysis window normalized so that <math>\textstyle \int | h(t) |^{2}\, dt = 1</math>) by<ref name=Fitz09/>{{rp|8}}
:<math>\begin{align}
x(\tau) &= \frac{1}{2\pi} \int \int \epsilon( t, \omega )\, h_{\omega}( \tau - t )\, d\omega\, dt \\
&= \frac{1}{2\pi} \int \int M_{t}(\omega)\, e^{ j \left[ \phi_{t}(\omega) + \omega ( \tau - t ) \right] }\, h( \tau - t )\, d\omega\, dt
\end{align}</math>
For signals having magnitude spectra, <math>M(t,\omega)</math>, whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point <math>t,\omega</math> satisfying the phase stationarity condition<ref name=Kodera/>{{rp|74}}
:<math>\begin{align}
\frac{\partial}{\partial \omega} \left[ \phi_{t}(\omega) + \omega ( \tau - t ) \right] &= 0 \\
\frac{\partial}{\partial t} \left[ \phi_{t}(\omega) + \omega ( \tau - t ) \right] &= 0
\end{align}</math>
or equivalently, around the point <math>\hat{t}, \hat{\omega}</math> defined by<ref name=Kodera/>{{rp|74}}
:<math>\begin{align}
\hat{t}( t, \omega ) &= t - \frac{\partial \phi_{t}(\omega)}{\partial \omega} \\
\hat{\omega}( t, \omega ) &= \frac{\partial \phi_{t}(\omega)}{\partial t}
\end{align}</math>
This phenomenon is known in such fields as optics as the [[stationary phase approximation|principle of stationary phase]], which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid.<ref name=Kodera/>{{rp|73}}
In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency.<ref name=Kodera/>{{rp|71}}
The time-frequency coordinates thus computed are equal to the local group delay, <math>\hat{t}_{g}(t,\omega),</math> and local instantaneous frequency, <math>\hat{\omega}_{i}(t,\omega),</math> and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are ''local'' in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis.<ref name=Kodera/>{{rp|70}}
The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of <math>\epsilon(t,\omega)</math> to this point of maximum contribution <math>\hat{t}(t,\omega), \hat{\omega}(t,\omega)</math>, rather than to the point <math>t,\omega</math> at which it is computed. This point is sometimes called the ''center of gravity'' of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued.<ref name="improving" />
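In a discrete implementation the reassignment step itself, once the reassigned coordinates are known, amounts to accumulating each spectrogram cell's energy at its reassigned position on a new time-frequency grid, skipping cells where the spectrogram is zero-valued. A minimal Python sketch of this accumulation is shown below; the function and array names are illustrative, and the arrays <code>S</code>, <code>t_hat</code> and <code>omega_hat</code> are assumed to have been computed already, for example by the methods described in the following section.
<syntaxhighlight lang="python">
import numpy as np

def reassign(S, t_hat, omega_hat, t_edges, omega_edges):
    """Accumulate spectrogram energy S at its reassigned coordinates.

    S, t_hat, omega_hat  : 2-D arrays of identical shape (frequency x time)
    t_edges, omega_edges : 1-D arrays of bin edges for the output grid
    """
    nonzero = S > 0                            # no energy to attribute where S == 0
    reassigned, _, _ = np.histogram2d(
        omega_hat[nonzero],                    # reassigned frequencies
        t_hat[nonzero],                        # reassigned times
        bins=(omega_edges, t_edges),
        weights=S[nonzero],                    # energy carried to the new position
    )
    return reassigned                          # sharpened surface; points falling
                                               # outside the grid are discarded
</syntaxhighlight>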
== Efficient computation of reassigned times and frequencies ==
In the original formulation of the method, the partial derivatives of phase needed for reassignment were approximated by finite differences of the short-time phase spectrum computed at neighboring times and frequencies. Writing <math>\phi(t, \omega)</math> for the phase <math>\phi_{t}(\omega)</math> of the short-time Fourier transform,
:<math>\begin{align}
\hat{\omega}(t,\omega) &\approx \frac{1}{\Delta t} \left[ \phi\left( t + \tfrac{\Delta t}{2}, \omega \right) - \phi\left( t - \tfrac{\Delta t}{2}, \omega \right) \right] \\
\hat{t}(t,\omega) &\approx t - \frac{1}{\Delta \omega} \left[ \phi\left( t, \omega + \tfrac{\Delta \omega}{2} \right) - \phi\left( t, \omega - \tfrac{\Delta \omega}{2} \right) \right]
\end{align}</math>
For sufficiently small values of <math>\Delta t</math> and <math>\Delta \omega,</math> and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function.
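A minimal Python sketch of such a finite-difference computation for a single analysis point is given below; it compares short-time transforms at neighbouring analysis times and neighbouring frequency bins, with the phase referenced to the frame centre. The sample rate, test signal, window, and variable names are illustrative choices rather than values taken from the cited sources, and taking the angle of the product of one transform with the conjugate of the other is simply a convenient way of obtaining an unwrapped phase difference when that difference is small.
<syntaxhighlight lang="python">
import numpy as np

fs = 8000.0                                   # sample rate (Hz), illustrative
N = 512                                       # window length (64 ms)
h = np.kaiser(N, 12.0)                        # real-valued analysis window
x = np.cos(2 * np.pi * 440.0 * np.arange(int(fs)) / fs)   # 1 s test tone

def stft_frame(x, start, h):
    """One short-time spectrum, phase referenced to the frame centre."""
    seg = x[start:start + len(h)] * h
    return np.fft.rfft(np.roll(seg, -(len(h) // 2)))

n0 = 3000                                     # frame start (sample index)
dt = 1.0 / fs                                 # time step: one sample
dw = 2 * np.pi * fs / N                       # frequency step: one FFT bin (rad/s)

X0 = stft_frame(x, n0, h)
X1 = stft_frame(x, n0 + 1, h)                 # neighbouring analysis time
k = np.argmax(np.abs(X0))                     # bin nearest the tone

# finite-difference estimates of the phase partial derivatives
dphi_dt = np.angle(X1[k] * np.conj(X0[k])) / dt        # d(phase)/dt
dphi_dw = np.angle(X0[k + 1] * np.conj(X0[k])) / dw    # d(phase)/d(omega)

t = (n0 + N // 2) / fs                        # analysis time (frame centre, s)
omega_hat = dphi_dt                           # reassigned frequency (rad/s)
t_hat = t - dphi_dw                           # reassigned time (s)

print(omega_hat / (2 * np.pi))                # approximately 440 Hz
print(t_hat, "vs analysis time", t)           # nearly equal for a steady tone
</syntaxhighlight>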
Independently of Kodera ''et al.'', Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum.<ref name="crossspectral">{{cite journal |last=Nelson |first=D. J. |title=Cross-spectral methods for processing speech |journal=Journal of the Acoustical Society of America |volume=110 |issue=5 |pages=2575–2592 |date=November 2001 }}</ref>
Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of [[Cohen's class]] of time-frequency representations by generalizing the reassignment operations to
:<math>\begin{align}
\hat{t}( t, \omega ) &= \frac{\displaystyle \int \int \tau\, W_{x}(\tau, \nu)\, \Phi( t - \tau, \omega - \nu )\, d\tau\, d\nu}{\displaystyle \int \int W_{x}(\tau, \nu)\, \Phi( t - \tau, \omega - \nu )\, d\tau\, d\nu} \\
\hat{\omega}( t, \omega ) &= \frac{\displaystyle \int \int \nu\, W_{x}(\tau, \nu)\, \Phi( t - \tau, \omega - \nu )\, d\tau\, d\nu}{\displaystyle \int \int W_{x}(\tau, \nu)\, \Phi( t - \tau, \omega - \nu )\, d\tau\, d\nu}
\end{align}</math>
where <math>W_{x}(t,\omega)</math> is the Wigner–Ville distribution of <math>x(t)</math>, and <math>\Phi(t,\omega)</math> is the kernel function that defines the distribution. They further described an efficient method for computing the times and frequencies for the reassigned spectrogram accurately, without explicitly computing the partial derivatives of phase.<ref name="improving"/>
In the case of the spectrogram, the reassignment operations can be computed by
:<math>\begin{align}
\hat{t}( t, \omega ) &= t + \operatorname{Re} \left\{ \frac{ X_{\mathcal{T}h}(t, \omega)\, X^{*}(t, \omega) }{ | X(t, \omega) |^{2} } \right\} \\
\hat{\omega}( t, \omega ) &= \omega - \operatorname{Im} \left\{ \frac{ X_{\mathcal{D}h}(t, \omega)\, X^{*}(t, \omega) }{ | X(t, \omega) |^{2} } \right\}
\end{align}</math>
where <math>X(t,\omega)</math> is the short-time Fourier transform computed using the analysis window <math>h(t)</math>, <math>X_{\mathcal{T}h}(t,\omega)</math> is the short-time Fourier transform computed using a time-weighted window <math>t \cdot h(t)</math>, and <math>X_{\mathcal{D}h}(t,\omega)</math> is the short-time Fourier transform computed using a time-derivative window <math>\tfrac{d}{dt} h(t)</math>. All three transforms operate on the same signal data, so the reassigned coordinates are obtained at the cost of two additional short-time Fourier transforms.
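A minimal Python sketch of this computation for a single analysis frame, under the conventions used above (a real analysis window, with the phase referenced to the frame centre), is given below; the sample rate, test signal, and variable names are illustrative choices and are not prescribed by the cited sources.
<syntaxhighlight lang="python">
import numpy as np

fs = 8000.0
N = 512
h = np.kaiser(N, 12.0)                        # analysis window h(t)
u = (np.arange(N) - (N - 1) / 2) / fs         # window time axis, centred (s)
Th = u * h                                    # time-weighted window  t*h(t)
Dh = np.gradient(h, 1.0 / fs)                 # time-derivative window dh/dt

x = np.cos(2 * np.pi * 440.0 * np.arange(int(fs)) / fs)   # test tone

def frame(x, start, w):
    """Short-time spectrum using window w, phase referenced to the frame centre."""
    seg = x[start:start + len(w)] * w
    return np.fft.rfft(np.roll(seg, -(len(w) // 2)))

n0 = 3000
X   = frame(x, n0, h)
X_T = frame(x, n0, Th)
X_D = frame(x, n0, Dh)

t = (n0 + N // 2) / fs                        # analysis time of this frame (s)
omega = 2 * np.pi * np.fft.rfftfreq(N, d=1.0 / fs)
S = np.abs(X) ** 2                            # spectrogram values for this frame
good = S > 1e-12 * S.max()                    # reassignment is undefined where S is ~0

t_hat = np.full_like(omega, np.nan)
omega_hat = np.full_like(omega, np.nan)
t_hat[good] = t + np.real(X_T[good] * np.conj(X[good])) / S[good]
omega_hat[good] = omega[good] - np.imag(X_D[good] * np.conj(X[good])) / S[good]

k = np.argmax(S)
print(omega_hat[k] / (2 * np.pi))             # approximately 440 Hz
print(t_hat[k], "vs analysis time", t)        # nearly equal for a steady tone
</syntaxhighlight>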
== Separability ==
The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a ''multi-component'' signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution.<ref>K. Fitz, L. Haken, "On the use of time-frequency reassignment in additive sound modeling", ''Journal of the Audio Engineering Society'' 50 (11) (2002) 879–893.</ref>
For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components,
:<math>x(t) = \sum_{n} x_{n}(t),</math>
then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter.
This is the property, in the frequency ___domain, that Nelson called ''separability''.<ref name="crossspectral"/>
If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of <math>X(t,\omega_0)</math> is equal to the derivative, with respect to time, of the phase of the dominant component at <math>\omega_0.</math> Therefore, if a component, <math>x_n(t),</math> having instantaneous frequency <math>\omega_{n}(t)</math> is the dominant component in the vicinity of <math>\omega_0,</math> then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at <math>\omega_0.</math> That is,
:<math>\begin{align}
\omega_{n}(t) &= \frac{\partial}{\partial t} \arg\{ x_{n}(t) \} \\
&= \frac{\partial }{\partial t} \arg\{ X(t,\omega_{0}) \}
\end{align}</math>
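The following minimal Python sketch illustrates this for a two-component test signal whose components are separable in frequency: differentiating the phase of a short-time Fourier transform channel (approximated here by a one-sample phase difference) recovers the frequency of whichever component dominates that channel. The signal, window, and channel choices are illustrative only and are not taken from the cited sources.
<syntaxhighlight lang="python">
import numpy as np

fs = 8000.0
N = 1024
h = np.kaiser(N, 12.0)
n = np.arange(int(fs))
x = np.cos(2 * np.pi * 440.0 * n / fs) + 0.5 * np.cos(2 * np.pi * 1250.0 * n / fs)

def frame(x, start):
    """Short-time spectrum, phase referenced to the frame centre."""
    seg = x[start:start + N] * h
    return np.fft.rfft(np.roll(seg, -(N // 2)))

n0 = 2000
X0, X1 = frame(x, n0), frame(x, n0 + 1)        # frames one sample apart

k0 = int(round(440.0 * N / fs))                # a channel near the 440 Hz component
k1 = int(round(1250.0 * N / fs))               # a channel near the 1250 Hz component

for k in (k0, k1):
    # time derivative of the channel phase = instantaneous frequency of the
    # component that dominates this channel (the components are separable here)
    f_inst = np.angle(X1[k] * np.conj(X0[k])) * fs / (2 * np.pi)
    print(f"channel {k}: estimated component frequency {f_inst:.1f} Hz")
</syntaxhighlight>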
Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. This is the property of separability in the time ___domain, and is equivalent to requiring that the time between two events be
greater than the length of the impulse response of the short-time Fourier transform filters, the span of non-zero samples in <math>h(t).</math>
<gallery mode=packed heights=300px>
Image:Long-window reassigned spectrogram of speech.png|Long-window reassigned spectrogram of the word "open", computed using a 54.4 ms Kaiser window.
Image:Short-window reassigned spectrogram of speech.png|Short-window reassigned spectrogram of the word "open", computed using a 13.6 ms Kaiser window.
</gallery>
In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window).<ref name="crossspectral"/>{{rp|2585}}
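This window-length tradeoff can be reproduced with a synthetic pulse train standing in for the voiced excitation. In the minimal Python sketch below (the pulse rate, sample rate, and window lengths are illustrative choices, not values from the cited sources), a long window resolves the 100 Hz harmonic spacing but smears the pulses in time, while a short window resolves the individual pulses but not the harmonics.
<syntaxhighlight lang="python">
import numpy as np

fs = 8000.0
period = int(fs / 100)                         # synthetic 100 Hz pulse train
x = np.zeros(int(fs))
x[::period] = 1.0

def stft_mag(x, N, hop):
    """Magnitude STFT with a Kaiser window of length N (frequency x time)."""
    h = np.kaiser(N, 12.0)
    return np.array([np.abs(np.fft.rfft(x[s:s + N] * h))
                     for s in range(0, len(x) - N, hop)]).T

def peak_indices(v):
    """Indices of prominent local maxima of a 1-D array."""
    return np.flatnonzero((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])
                          & (v[1:-1] > 0.1 * v.max())) + 1

# Long window (128 ms): many pulses per frame, harmonics separable in frequency.
S_long = stft_mag(x, 1024, 64)
spectrum = S_long[:, S_long.shape[1] // 2]     # one analysis frame
print("long window, spectral peak spacing (Hz):",
      np.diff(peak_indices(spectrum)[:6]) * fs / 1024)

# Short window (4 ms): at most one pulse per frame, pulses separable in time.
hop = 8
S_short = stft_mag(x, 32, hop)
envelope = S_short.sum(axis=0)                 # short-time energy envelope
print("short window, pulse spacing (ms):",
      np.diff(peak_indices(envelope)[:6]) * hop / fs * 1000)
</syntaxhighlight>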
== Extensions ==
=== Consensus complex reassignment ===
Gardner and Magnasco (2006) argue that the [[auditory nerve]]s may use a form of the reassignment method to process sounds. These nerves are known to preserve timing (phase) information better than magnitude information. The authors derive a variant of reassignment operating on complex values (i.e. both phase and magnitude) and show that it produces sparse outputs, much as auditory nerves do. By running this reassignment with windows of different bandwidths (see the discussion in the section above), a "consensus" representation that captures multiple kinds of signals is found, again resembling the auditory system. They argue that the algorithm is simple enough for neurons to implement.<ref name=Gar06>{{cite journal |last1=Gardner |first1=Timothy J. |last2=Magnasco |first2=Marcelo O. |title=Sparse time-frequency representations |journal=Proceedings of the National Academy of Sciences |date=18 April 2006 |volume=103 |issue=16 |pages=6094–6099 |doi=10.1073/pnas.0601707103|doi-access=free |pmid=16601097 |pmc=1431718 |bibcode=2006PNAS..103.6094G }}</ref>
=== Synchrosqueezing ===
{{empty section|date=January 2024}}
<ref name=Meignen19>{{cite journal |last1=Meignen |first1=Sylvain |last2=Oberlin |first2=Thomas |last3=Pham |first3=Duong-Hung |title=Synchrosqueezing transforms: From low- to high-frequency modulations and perspectives |journal=Comptes Rendus Physique |date=July 2019 |volume=20 |issue=5 |pages=449–460 |doi=10.1016/j.crhy.2019.07.001|bibcode=2019CRPhy..20..449M }}</ref>
== References ==
{{reflist}}
== Further reading ==
[[Category:Time–frequency analysis]]
[[Category:Transforms]]
[[Category:Data compression]]