{{Short description|Fourier transform of the probability density function}}
[[File:Sinc simple.svg|thumb|280px|right|The characteristic function of a uniform ''U''(–1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however characteristic functions may generally be complex-valued.]]
In [[probability theory]] and [[statistics]], the '''characteristic function''' of any [[real-valued]] [[random variable]] completely defines its [[probability distribution]]. If a random variable admits a [[probability density function]], then the characteristic function is the [[Fourier transform]] (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with [[probability density function]]s or [[cumulative distribution function]]s. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
In addition to [[univariate distribution]]s, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.
== Introduction ==
The characteristic function is a way to describe a [[random variable]] {{mvar|X}}.
The '''characteristic function''',
: <math> \varphi_X(t) = \operatorname{E} \left [ e^{itX} \right ],</math>
a function of {{mvar|t}},
determines the behavior and properties of the probability distribution of {{mvar|X}}.
It is equivalent to a [[probability density function]] or [[cumulative distribution function]], since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.
If a random variable admits a [[probability density function|density function]], then the characteristic function is its [[Duality (mathematics)|Fourier dual]], in the sense that each of them is a [[Fourier transform]] of the other. If a random variable has a [[moment-generating function]] <math>M_X(t)</math>, then the ___domain of the characteristic function can be extended to the complex plane, and
: <math> \varphi_X(-it) = M_X(t). </math>
Note however that the characteristic function of a distribution always exists, even when the [[probability density function]] or [[moment-generating function]] do not.
The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the [[Central Limit Theorem]] uses characteristic functions and [[Lévy's continuity theorem]]. Another important application is to the theory of the [[Indecomposable distribution|decomposability]] of random variables.
== Definition ==
For a scalar random variable {{mvar|X}} the characteristic function is defined as the [[expected value]] of {{math|''e<sup>itX</sup>''}}, where {{mvar|i}} is the [[imaginary unit]] and {{math|''t'' ∈ '''R'''}} is the argument of the characteristic function:
:<math>\begin{cases} \displaystyle \varphi_X\!:\mathbb{R}\to\mathbb{C} \\ \displaystyle \varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) = \int_{\mathbb{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{it Q_X(p)}\,dp \end{cases}</math>
Here {{math|''F<sub>X</sub>''}} is the [[cumulative distribution function]] of {{mvar|X}}, {{math|''f<sub>X</sub>''}} is the corresponding [[probability density function]], {{math|''Q<sub>X</sub>''(''p'')}} is the corresponding inverse cumulative distribution function also called the [[quantile function]],<ref>{{Cite arXiv |eprint=0903.1592 |class=q-fin.CP |first1=W. T. |last1=Shaw |first2=J. |last2=McCabe |title=Monte Carlo sampling given a Characteristic Function: Quantile Mechanics in Momentum Space |year=2009}}</ref> and the integrals are of the [[Riemann–Stieltjes integral|Riemann–Stieltjes]] kind.
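The defining expectation can be approximated by a sample average, which is the basis of the [[empirical characteristic function]]. As a minimal illustrative sketch (assuming NumPy is available; the sample size and the grid of {{mvar|t}} values are arbitrary choices), the sample average of {{math|''e<sup>itX</sup>''}} for a standard normal sample is compared with the closed form {{math|''e''<sup>−''t''<sup>2</sup>/2</sup>}}:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)                  # sample from N(0, 1)

t = np.linspace(-3.0, 3.0, 13)
# Empirical characteristic function: average of exp(i t X) over the sample.
phi_empirical = np.exp(1j * np.outer(t, x)).mean(axis=1)
# Closed form for the standard normal: exp(-t^2 / 2).
phi_exact = np.exp(-t**2 / 2)

print(np.max(np.abs(phi_empirical - phi_exact)))  # small; Monte Carlo error ~ n**-0.5
</syntaxhighlight>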
== {{anchor|CF Generalizations}} Generalizations ==
The notion of characteristic functions generalizes to multivariate random variables and more complicated [[random element]]s. The argument of the characteristic function will always belong to the [[continuous dual]] of the space where the random variable {{mvar|X}} takes its values. For common cases such definitions are listed below:
* If {{mvar|X}} is a {{mvar|k}}-dimensional [[random vector]], then for {{math|''t'' ∈ '''R'''<sup>''k''</sup>}} <math display="block">\varphi_X(t) = \operatorname{E}\left[\exp\left( i t^\mathrm{T}\!X \right)\right], </math> where <math display="inline">t^\mathrm{T}</math> is the [[transpose]] of the vector <math display="inline">t</math>,
* If {{mvar|X}} is a [[complex random variable]], then for {{math|''t'' ∈ '''C'''}}{{sfnp|Andersen|Højbjerre|Sørensen|Eriksen|1995|loc=Definition 1.10}} <math display="block">\varphi_X(t) = \operatorname{E}\left[\exp\left( i \operatorname{Re}\left(\overline{t}X\right) \right)\right], </math> where <math display="inline">\overline t</math> is the [[complex conjugate]] of <math display="inline">t</math> and <math display="inline"> \operatorname{Re}(z)</math> is the [[real part]] of the [[complex number]] <math display="inline"> z </math>,
* If {{mvar|X}} is a {{mvar|k}}-dimensional [[complex random vector]], then for {{math|''t'' ∈ '''C'''<sup>''k''</sup>}} {{sfnp|Andersen|Højbjerre|Sørensen|Eriksen|1995|loc=Definition 1.20}} <math display="block"> \varphi_X(t) = \operatorname{E}\left[\exp(i\operatorname{Re}(t^*\!X))\right], </math> where <math display="inline"> t^* </math> is the [[conjugate transpose]] of the vector <math display="inline"> t</math>,
* If {{math|''X''(''s'')}} is a [[stochastic process]], then for all functions {{math|''t''(''s'')}} such that the integral <math display="inline"> \int_{\mathbb R} t(s)X(s)\,\mathrm{d}s </math> converges for almost all realizations of {{mvar|X}}{{sfnp|Sobczyk|2001|p=20}} <math display="block">\varphi_X(t) = \operatorname{E}\left[\exp \left ( i\int_\mathbf{R} t(s)X(s) \, ds \right ) \right]. </math>
== Examples ==
{| class="wikitable"
|-
! Distribution
! Characteristic function
|-
| [[Degenerate distribution|Degenerate]] {{math|''δ''<sub>''a''</sub>}}
| <math>e^{ita}</math>
|-
| [[Bernoulli distribution|Bernoulli]] {{math|Bern(''p'')}}
| <math>1-p+pe^{it}</math>
|-
| [[Binomial distribution|Binomial]] {{math|B(''n, p'')}}
| <math>(1-p+pe^{it})^n</math>
|-
| [[Negative binomial distribution|Negative binomial]] {{math|NB(''r, p'')}}
| <math>\left(\frac{p}{1 - (1-p)e^{it}}\right)^{r}</math>
|-
| [[Poisson distribution|Poisson]] {{math|Pois(''λ'')}}
| <math>e^{\lambda(e^{it}-1)}</math>
|-
| [[Uniform distribution (continuous)|Uniform (continuous)]] {{math|U(''a, b'')}}
| <math>\frac{e^{itb} - e^{ita}}{it(b-a)}</math>
|-
|[[Discrete uniform distribution|Uniform (discrete)]] {{math|DU(''a, b'')}}
| <math>\frac{e^{iat} - e^{i(b+1)t}}{(b-a+1)\left(1 - e^{it}\right)}</math>
|-
| [[Laplace distribution|Laplace]] {{math|L(''μ'', ''b'')}}
| <math>\frac{e^{it\mu}}{1 + b^2t^2}</math>
|-
| [[Logistic distribution|Logistic]] {{math|Logistic(''μ'', ''s'')}}
| <math>e^{i\mu t}\frac{\pi s t}{\sinh(\pi s t)}</math>
|-
| [[Normal distribution|Normal]] {{math|''N''(''μ'', ''σ''<sup>2</sup>)}}
| <math>e^{it\mu - \frac{1}{2}\sigma^2t^2}</math>
|-
| [[Chi-squared distribution|Chi-squared]] {{math|1=''χ''<sup>2</sup><sub style="position:relative;left:-5pt;top:2pt">''k''</sub>}}
| <math>(1 - 2it)^{-k/2}</math>
|-
| [[Noncentral chi-squared distribution|Noncentral chi-squared]] <math>{\chi'}^2_k(\lambda)</math>
| <math>e^{\frac{i\lambda t}{1-2it}}(1 - 2it)^{-k/2}</math>
|-
| [[Generalized chi-squared distribution|Generalized chi-squared]] <math>\tilde{\chi}(\boldsymbol{w}, \boldsymbol{k}, \boldsymbol{\lambda},s,m)</math>
| <math>\frac{\exp\left[it \left( m + \sum_j \frac{w_j \lambda_j}{1-2i w_j t} \right)-\frac{s^2 t^2}{2}\right]}{\prod_j \left(1-2i w_j t \right)^{k_j/2}}</math>
|-
| [[Cauchy distribution|Cauchy]] {{math|C(''μ'', ''θ'')}}
| <math>e^{it\mu -\theta|t|}</math>
|-
| [[Gamma distribution|Gamma]] {{math|Γ(''k'', ''θ'')}}
| <math>(1 - it\theta)^{-k}</math>
|-
| [[Exponential distribution|Exponential]] {{math|Exp(''λ'')}}
| <math>(1 - it\lambda^{-1})^{-1}</math>
|-
| [[Geometric distribution|Geometric]] {{math|Gf(''p'')}}<br />(number of failures)
| <math>\frac{p}{1-e^{it}(1-p)}</math>
|-
| [[Geometric distribution|Geometric]] {{math|Gt(''p'')}}<br />(number of trials)
| <math>\frac{p}{e^{-it}-(1-p)}</math>
|-
| [[Multivariate normal distribution|Multivariate normal]] {{math|''N''('''''μ''''', '''''Σ''''')}}
| <math>e^{i\mathbf{t}^{\mathrm{T}}\boldsymbol\mu - \frac{1}{2}\mathbf{t}^{\mathrm{T}}\boldsymbol\Sigma\mathbf{t}}</math>
|-
| [[Multivariate Cauchy distribution|Multivariate Cauchy]]
| <math>e^{i\mathbf{t}^{\mathrm{T}}\boldsymbol\mu - \sqrt{\mathbf{t}^{\mathrm{T}}\boldsymbol{\Sigma} \mathbf{t}}}</math>
|}
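The closed forms in the table above can be checked directly against the defining expectation. The following sketch (an illustration only, assuming NumPy and SciPy; the parameter values are arbitrary) sums {{math|''e<sup>itk</sup>''}} against the probability mass function for the Poisson and binomial entries:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

t = np.linspace(-4.0, 4.0, 9)

# Poisson(lam): compare sum_k e^{itk} P(X = k) with exp(lam (e^{it} - 1)).
lam = 3.5
k = np.arange(0, 200)
phi_sum = (np.exp(1j * np.outer(t, k)) * stats.poisson.pmf(k, lam)).sum(axis=1)
phi_formula = np.exp(lam * (np.exp(1j * t) - 1))
print(np.max(np.abs(phi_sum - phi_formula)))      # essentially zero

# Binomial(n, p): compare with (1 - p + p e^{it})^n.
n, p = 12, 0.3
k = np.arange(0, n + 1)
phi_sum = (np.exp(1j * np.outer(t, k)) * stats.binom.pmf(k, n, p)).sum(axis=1)
phi_formula = (1 - p + p * np.exp(1j * t)) ** n
print(np.max(np.abs(phi_sum - phi_formula)))      # essentially zero
</syntaxhighlight>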
== Properties ==
* The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose [[measure (mathematics)|measure]] is finite.
* A characteristic function is [[Uniform continuity|uniformly continuous]] on the entire space.
* It is non-vanishing in a region around zero: {{math|1=''φ''(0) = 1}}.
* It is bounded: {{math|{{abs|''φ''(''t'')}} ≤ 1}}.
* It is [[Hermitian function|Hermitian]]: {{math|1=''φ''(−''t'') = {{overline|''φ''(''t'')}}}}. In particular, the characteristic function of a random variable symmetric about the origin is real-valued and even.
* There is a [[bijection]] between [[probability distribution]]s and characteristic functions. That is, for any two random variables {{math|''X''<sub>1</sub>}}, {{math|''X''<sub>2</sub>}}, both have the same probability distribution if and only if <math> \varphi_{X_1}=\varphi_{X_2}</math>. {{Citation needed|reason=proof ?|date=October 2023}}
* If a random variable {{mvar|X}} has [[moment (mathematics)|moments]] up to {{mvar|k}}-th order, then the characteristic function {{math|''φ<sub>X</sub>''}} is {{mvar|k}} times continuously differentiable on the entire real line. In this case <math display="block">\varphi_X^{(k)}(t) = i^k \operatorname{E}\left[X^k e^{itX}\right].</math>
* If a characteristic function {{math|''φ''<sub>''X''</sub>}} has a {{mvar|k}}-th derivative at zero, then the random variable {{mvar|X}} has all moments up to {{mvar|k}} if {{mvar|k}} is even, but only up to {{math|''k'' – 1}} if {{mvar|k}} is odd.{{sfnp|Lukacs|1970|loc=Corollary 1 to Theorem 2.3.1}} <math display="block"> \varphi_X^{(k)}(0) = i^k \operatorname{E}[X^k] </math>
* If {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} are independent random variables, and {{math|''a''<sub>1</sub>, ..., ''a<sub>n</sub>''}} are some constants, then the characteristic function of the linear combination of the {{math|''X''<sub>''i''</sub>}} variables is <math display="block">\varphi_{a_1X_1+\cdots+a_nX_n}(t) = \varphi_{X_1}(a_1t)\cdots \varphi_{X_n}(a_nt).</math> One specific case is the sum of two independent random variables {{math|''X''<sub>1</sub>}} and {{math|''X''<sub>2</sub>}} in which case one has <math display="block">\varphi_{X_1+X_2}(t) = \varphi_{X_1}(t)\cdot\varphi_{X_2}(t).</math>
* Let <math>X</math> and <math>Y</math> be two random variables with characteristic functions <math>\varphi_{X}</math> and <math>\varphi_{Y}</math>. <math>X</math> and <math>Y</math> are independent if and only if <math>\varphi_{X, Y}(s, t)= \varphi_{X}(s) \varphi_{Y}(t) \quad \text { for all } \quad(s, t) \in \mathbb{R}^{2}</math>.
* The tail behavior of the characteristic function determines the [[smoothness (probability theory)|smoothness]] of the corresponding density function.
* Let the random variable <math>Y = aX + b</math> be a linear transformation of a random variable <math>X</math>. The characteristic function of <math>Y</math> is <math>\varphi_Y(t)=e^{itb}\varphi_X(at)</math>. For random vectors <math>X</math> and <math>Y = AX + B</math> (where <math>A</math> is a constant matrix and <math>B</math> a constant vector), the characteristic function of <math>Y</math> is <math>\varphi_Y(t)=e^{it^\mathrm{T}B}\varphi_X(A^\mathrm{T}t)</math>. (The sum and linear-transformation rules are checked numerically in the sketch following this list.)
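As a hedged numerical illustration of the sum and linear-transformation rules above (assuming NumPy; the exponential example, sample size, and seed are arbitrary choices), both identities can be checked with empirical characteristic functions:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x1 = rng.exponential(scale=1.0, size=n)   # exponential with rate 1
x2 = rng.exponential(scale=0.5, size=n)   # exponential with rate 2, independent of x1

def ecf(sample, t):
    """Empirical characteristic function evaluated at each point of t."""
    return np.exp(1j * np.outer(t, sample)).mean(axis=1)

t = np.linspace(-2.0, 2.0, 9)

# Sum of independent variables: phi_{X1+X2}(t) = phi_{X1}(t) * phi_{X2}(t)
lhs = ecf(x1 + x2, t)
rhs = ecf(x1, t) * ecf(x2, t)
print(np.max(np.abs(lhs - rhs)))          # Monte Carlo error only

# Linear transformation Y = aX + b: phi_Y(t) = e^{itb} * phi_X(at)
a, b = 3.0, -1.0
lhs = ecf(a * x1 + b, t)
rhs = np.exp(1j * t * b) * ecf(x1, a * t)
print(np.max(np.abs(lhs - rhs)))          # agrees up to rounding
</syntaxhighlight>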
=== Continuity ===
The bijection stated above between probability distributions and characteristic functions is ''sequentially continuous''. That is, whenever a sequence of distribution functions {{math|''F<sub>j</sub>''(''x'')}} converges (weakly) to some distribution {{math|''F''(''x'')}}, the corresponding sequence of characteristic functions {{math|''φ<sub>j</sub>''(''t'')}} will also converge, and the limit {{math|''φ''(''t'')}} will correspond to the characteristic function of the law {{mvar|F}}. More formally, this is stated as
: '''[[Lévy’s continuity theorem]]:''' A sequence {{math|''X<sub>j</sub>''}} of {{mvar|n}}-variate random variables [[convergence in distribution|converges in distribution]] to a random variable {{mvar|X}} if and only if the sequence {{math|''φ''<sub>''X<sub>j</sub>''</sub>}} converges pointwise to a function {{mvar|φ}} which is continuous at the origin. In that case {{mvar|φ}} is the characteristic function of {{mvar|X}}.
This theorem can be used to prove the [[Law of large numbers#Proof using convergence of characteristic functions|law of large numbers]] and the [[Central limit theorem#Proof|central limit theorem]].
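As an informal numerical illustration of this idea (not a proof; it assumes NumPy and the specific choice of uniform summands), the characteristic function of a standardized sum of independent ''U''(−1, 1) variables can be seen to approach the Gaussian limit {{math|''e''<sup>−''t''<sup>2</sup>/2</sup>}} pointwise:

<syntaxhighlight lang="python">
import numpy as np

# Characteristic function of U(-1, 1) is sin(t)/t (equal to 1 at t = 0).
def phi_uniform(t):
    return np.sinc(t / np.pi)             # np.sinc(x) = sin(pi x) / (pi x)

sigma = np.sqrt(1.0 / 3.0)                # standard deviation of U(-1, 1)
t = np.linspace(-3.0, 3.0, 13)
phi_limit = np.exp(-t**2 / 2)

for n in (1, 4, 16, 64, 256):
    # CF of the standardized sum (X_1 + ... + X_n) / (sigma * sqrt(n))
    phi_n = phi_uniform(t / (sigma * np.sqrt(n))) ** n
    print(n, np.max(np.abs(phi_n - phi_limit)))   # decreases towards 0
</syntaxhighlight>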
=== Inversion formulas ===
There is a [[Bijection|one-to-one correspondence]] between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute {{mvar|φ}} when we know the distribution function {{mvar|F}} (or density {{mvar|f}}). If, on the other hand, we know the characteristic function {{mvar|φ}} and want to find the corresponding distribution function, then one of the following '''inversion theorems''' can be used.
'''Theorem'''. If the characteristic function {{math|''φ<sub>X</sub>''}} of a random variable {{mvar|X}} is [[integrable function|integrable]], then {{math|''F<sub>X</sub>''}} is absolutely continuous, and therefore {{mvar|X}} has a [[probability density function]]. In the univariate case (i.e. when {{mvar|X}} is scalar-valued) the density function is given by
<math display="block"> f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{\mathbf{R}} e^{-itx}\varphi_X(t)\,dt.</math>
In the multivariate case it is
<math display="block"> f_X(x) = \frac{1}{(2\pi)^n} \int_{\mathbf{R}^n} e^{-i(t\cdot x)}\varphi_X(t)\lambda(dt)</math>
where <math display="inline"> t\cdot x</math> is the [[dot product]].
The density function is the [[Radon–Nikodym derivative]] of the distribution {{math|''μ<sub>X</sub>''}} with respect to the [[Lebesgue measure]] {{mvar|λ}}:
<math display="block"> f_X(x) = \frac{d\mu_X}{d\lambda}(x). </math>
'''Theorem (Lévy)'''.{{NoteTag|named after the French mathematician [[Paul Lévy (mathematician)|Paul Lévy]]}} If {{math|''φ''<sub>''X''</sub>}} is characteristic function of distribution function {{math|''F<sub>X</sub>''}}, two points {{math|''a'' < ''b''}} are such that {{math|{{mset|''x'' {{!}} ''a'' < ''x'' < ''b''}}}} is a [[continuity set]] of {{math|''μ''<sub>''X''</sub>}} (in the univariate case this condition is equivalent to continuity of {{math|''F<sub>X</sub>''}} at points {{mvar|a}} and {{mvar|b}}), then
* If {{mvar|X}} is scalar: <math display="block">F_X(b) - F_X(a) = \frac{1} {2\pi} \lim_{T \to \infty} \int_{-T}^{+T} \frac{e^{-ita} - e^{-itb}} {it}\, \varphi_X(t)\, dt.</math> This formula can be re-stated in a form more convenient for numerical computation as{{sfnp|Shephard|1991a}} <math display="block"> \frac{F(x+h) - F(x-h)}{2h} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\sin ht}{ht} e^{-itx} \varphi_X(t) \, dt .</math> For a random variable bounded from below one can obtain <math>F(b)</math> by taking <math>a</math> such that <math>F(a)=0.</math> Otherwise, if a random variable is not bounded from below, the limit for <math>a\to-\infty</math> gives <math>F(b)</math>, but is numerically impractical.{{sfnp|Shephard|1991a}}
* If {{mvar|X}} is a vector random variable: <math display="block">\mu_X\big(\{a<x<b\}\big) = \frac{1}{(2\pi)^n} \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \int\limits_{-T_1\leq t_1\leq T_1} \cdots \int\limits_{-T_n \leq t_n \leq T_n} \prod_{k=1}^n\left(\frac{e^{-i t_k a_k}-e^{-i t_k b_k}}{it_k}\right)\varphi_X(t)\lambda(dt_1 \times \cdots \times dt_n)</math>
'''Theorem'''. If {{mvar|a}} is (possibly) an [[Atom (measure theory)|atom]] of {{mvar|X}} (in the univariate case this means a point of discontinuity of {{math|''F<sub>X</sub>''}}), then
* If {{mvar|X}} is scalar: <math display="block">F_X(a) - F_X(a-0) = \lim_{T\to\infty}\frac{1}{2T} \int_{-T}^{+T} e^{-ita}\varphi_X(t)\,dt</math>
* If {{mvar|X}} is a vector random variable: <math display="block">\mu_X\big(\{a\}\big) = \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \left(\prod_{k=1}^n \frac{1}{2T_k}\right) \int\limits_{-T_1\leq t_1\leq T_1} \cdots \int\limits_{-T_n\leq t_n\leq T_n} e^{-i(t\cdot a)}\varphi_X(t)\lambda(dt_1 \times \cdots \times dt_n)</math>
'''Theorem (Gil-Pelaez)'''. For a univariate random variable {{mvar|X}}, if {{mvar|x}} is a continuity point of {{math|''F<sub>X</sub>''}} then
: <math>F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^\infty \frac{\operatorname{Im}\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt,</math>
where the imaginary part of a complex number <math>z</math> is given by <math>\mathrm{Im}(z) = (z - z^*)/2i</math>.
And its density function is:
: <math>f_X(x) = \frac{1}{\pi}\int_0^\infty \operatorname{Re}[e^{-itx}\varphi_X(t)]\,dt</math>
The integral may not be [[Lebesgue-integrable]]; for example, when {{mvar|X}} is the [[discrete random variable]] that is always 0, it becomes the [[Dirichlet integral]].
Inversion formulas for multivariate distributions are available.
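The Gil-Pelaez formula lends itself to numerical evaluation. The sketch below (assuming NumPy and SciPy; the truncation point and grid of the integral are arbitrary choices, and the small lower limit merely avoids the removable singularity at {{math|1=''t'' = 0}}) recovers the standard normal distribution function from its characteristic function:

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate, stats

def cf_std_normal(t):
    return np.exp(-t**2 / 2)

def cdf_from_cf(x, cf, t_max=50.0, n_points=20_001):
    """Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im[e^{-itx} cf(t)] / t dt."""
    t = np.linspace(1e-8, t_max, n_points)
    integrand = np.imag(np.exp(-1j * t * x) * cf(t)) / t
    return 0.5 - integrate.trapezoid(integrand, t) / np.pi

for x in (-1.0, 0.0, 1.5):
    print(x, cdf_from_cf(x, cf_std_normal), stats.norm.cdf(x))   # the two values agree closely
</syntaxhighlight>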
=== Criteria for characteristic functions ===
The set of all characteristic functions is closed under certain operations:
*A [[convex combination|convex linear combination]] <math display="inline"> \sum_n a_n\varphi_n(t)</math> (with <math display="inline"> a_n\geq0,\ \sum_n a_n=1</math>) of a finite or a countable number of characteristic functions is also a characteristic function.
* The product of a finite number of characteristic functions is also a characteristic function. The same holds for an [[infinite product]] provided that it converges to a function continuous at the origin.
*If {{mvar|φ}} is a characteristic function and {{mvar|α}} is a real number, then <math>\bar{\varphi}</math>, {{math|Re(''φ'')}}, {{math|{{abs|''φ''}}<sup>2</sup>}}, and {{math|''φ''(''αt'')}} are also characteristic functions.
It is well known that any non-decreasing [[càdlàg]] function {{mvar|F}} with limits {{math|1=''F''(−∞) = 0}} and {{math|1=''F''(+∞) = 1}} corresponds to the [[cumulative distribution function]] of some random variable. There is also interest in finding similar simple criteria for when a given function {{mvar|φ}} can be the characteristic function of some random variable. The central result here is '''Bochner's theorem''', although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems, such as Khinchine's or Mathias's, also exist, but their application is just as difficult. Pólya's theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary.
'''[[Bochner's theorem|Bochner’s theorem]]'''. An arbitrary function {{math|''φ'' : '''R'''<sup>''n''</sup> → '''C'''}} is the characteristic function of some random variable if and only if {{mvar|φ}} is [[positive-definite function|positive definite]], continuous at the origin, and {{math|1=''φ''(0) = 1}}.
'''Khinchine’s criterion'''. A complex-valued, absolutely continuous function {{mvar|φ}}, with {{math|1=''φ''(0) = 1}}, is a characteristic function if and only if it admits the representation
: <math>\varphi(t) = \int_{\mathbf{R}} g(t+\theta)\overline{g(\theta)} \, d\theta .</math>
'''Mathias’ theorem'''. A real-valued, even, continuous, absolutely integrable function {{mvar|φ}}, with {{math|1=''φ''(0) = 1}}, is a characteristic function if and only if
:<math>(-1)^n \left ( \int_{\mathbf{R}} \varphi(pt)e^{-t^2/2} H_{2n}(t) \, dt \right ) \geq 0</math>
for {{math|1=''n'' = 0,1,2,...}}, and all {{math|''p'' > 0}}. Here {{math|''H''<sub>2''n''</sub>}} denotes the [[Hermite polynomials|Hermite polynomial]] of degree {{math|2''n''}}.
[[File:2 cfs coincide over a finite interval.svg|thumb|250px|Pólya’s theorem can be used to construct an example of two random variables whose characteristic functions coincide over a finite interval but are different elsewhere.]]
'''Pólya’s theorem'''. If {{mvar|φ}} is a real-valued, even, continuous function which satisfies the conditions
* <math> \varphi(0) = 1 </math>,
* <math> \varphi </math> is [[convex function|convex]] for <math> t>0 </math>,
* <math> \varphi(\infty) = 0 </math>,
then {{math|''φ''(''t'')}} is the characteristic function of an absolutely continuous distribution symmetric about 0.
== Uses ==
=== Basic manipulations of distributions ===
Characteristic functions are particularly useful for dealing with linear functions of [[statistical independence|independent]] random variables. For example, if {{math|''X''<sub>1</sub>}}, {{math|''X''<sub>2</sub>}}, ..., {{math|''X<sub>n</sub>''}} is a sequence of independent (and not necessarily identically distributed) random variables, and
:<math>S_n = \sum_{i=1}^n a_i X_i,\,\!</math>
where the {{math|''a''<sub>''i''</sub>}} are constants, then the characteristic function for {{math|''S''<sub>''n''</sub>}} is given by
:<math>\varphi_{S_n}(t)=\varphi_{X_1}(a_1t)\varphi_{X_2}(a_2t)\cdots \varphi_{X_n}(a_nt) \,\!</math>
In particular, {{math|1=''φ''<sub>''X''+''Y''</sub>(''t'') = ''φ<sub>X</sub>''(''t'')''φ<sub>Y</sub>''(''t'')}}. To see this, write out the definition of characteristic function:
: <math>\varphi_{X+Y}(t)= \operatorname{E}\left [e^{it(X+Y)}\right]= \operatorname{E}\left [e^{itX}e^{itY}\right] = \operatorname{E}\left [e^{itX}\right] \operatorname{E}\left [e^{itY}\right] =\varphi_X(t) \varphi_Y(t)</math>
The independence of {{mvar|X}} and {{mvar|Y}} is required to establish the equality of the third and fourth expressions.
Another special case of interest for identically distributed random variables is when {{math|1=''a<sub>i</sub>'' = 1/''n''}} and then {{math|''S<sub>n</sub>''}} is the sample mean. In this case, writing {{math|{{overline|''X''}}}} for the mean,
: <math>\varphi_{\overline{X}}(t)= \varphi_X\!\left(\tfrac{t}{n} \right)^n</math>
=== Moments ===
Characteristic functions can also be used to find [[moment (mathematics)|moments]] of a random variable. Provided that the {{mvar|n}}-th moment exists, the characteristic function can be differentiated {{mvar|n}} times, and
<math display=block>
\operatorname{E}\left[ X^n\right] = i^{-n}\left[\frac{d^n}{dt^n}\varphi_X(t)\right]_{t=0} = i^{-n}\varphi_X^{(n)}(0).</math>
This can be formally written using the derivatives of the [[Dirac delta function]]:<math display="block">f_X(x) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\delta^{(n)}(x)\operatorname{E}[X^n]
</math>which allows a formal solution to the [[moment problem]].
For example, suppose {{mvar|X}} has a standard [[Cauchy distribution]]. Then {{math|''φ<sub>X</sub>''(''t'') {{=}} ''e''<sup>−{{!}}''t''{{!}}</sup>}}. This is not [[Differentiable function|differentiable]] at {{math|1=''t'' = 0}}, showing that the Cauchy distribution has no [[expected value|expectation]]. Also, the characteristic function of the sample mean {{math|{{overline|''X''}}}} of {{mvar|n}} [[Statistical independence|independent]] observations has characteristic function {{math|''φ''<sub>{{overline|''X''}}</sub>(''t'') {{=}} (''e''<sup>−{{!}}''t''{{!}}/''n''</sup>)<sup>''n''</sup> {{=}} ''e''<sup>−{{!}}''t''{{!}}</sup>}}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
As a further example, suppose {{mvar|X}} follows a [[normal distribution|Gaussian distribution]], i.e. {{math|''X'' ~ ''N''(''μ'', ''σ''<sup>2</sup>)}}. Then <math>\varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}</math> and
:<math>\operatorname{E}\left[ X\right] = i^{-1} \left[\frac{d}{dt}\varphi_X(t)\right]_{t=0} = i^{-1} \left[ \left(i\mu - \sigma^2 t\right) \varphi_X(t)\right]_{t=0} = \mu.</math>
A similar calculation shows <math> \operatorname{E}\left[ X^2\right] = \mu^2 + \sigma^2 </math> and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate <math> \operatorname{E}\left[ X^2\right] </math>.
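The relation <math>\operatorname{E}\left[X^n\right] = i^{-n}\varphi_X^{(n)}(0)</math> can also be checked numerically. A minimal sketch (assuming NumPy; the finite-difference step size is an arbitrary choice) differentiates the closed-form normal characteristic function at zero:

<syntaxhighlight lang="python">
import numpy as np

mu, sigma = 2.0, 1.5

def phi(t):
    """Characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)

h = 1e-5
# First derivative at 0 by central difference: E[X] = phi'(0) / i
d1 = (phi(h) - phi(-h)) / (2 * h)
print(np.real(d1 / 1j), mu)                        # approximately 2.0

# Second derivative at 0: E[X^2] = phi''(0) / i^2
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2
print(np.real(d2 / 1j**2), mu**2 + sigma**2)       # approximately 6.25
</syntaxhighlight>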
=== Data analysis ===
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the [[stable distribution]], since closed-form expressions for the density are not available, which makes implementation of [[maximum likelihood]] estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the [[empirical characteristic function]], calculated from the data. Paulson et al. (1975){{sfnp|Paulson|Holcomb|Leitch|1975}} and Heathcote (1977){{sfnp|Heathcote|1977}} provide some theoretical background for such an estimation procedure.
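A hedged sketch of such a fitting procedure (assuming NumPy and SciPy; the Cauchy example, the grid of {{mvar|t}} values, the unweighted least-squares criterion, and the starting values are arbitrary choices and not those of the cited papers) estimates ___location and scale by matching the empirical characteristic function to the theoretical Cauchy form given in the table above:

<syntaxhighlight lang="python">
import numpy as np
from scipy import optimize

rng = np.random.default_rng(42)
data = rng.standard_cauchy(5_000) * 2.0 + 1.0    # Cauchy sample with mu = 1, theta = 2

t = np.linspace(0.05, 2.0, 40)                   # evaluation grid (arbitrary choice)
ecf = np.exp(1j * np.outer(t, data)).mean(axis=1)

def objective(params):
    mu, theta = params
    model = np.exp(1j * t * mu - theta * np.abs(t))
    return np.sum(np.abs(ecf - model) ** 2)

res = optimize.minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
print(res.x)                                     # roughly [1.0, 2.0]
</syntaxhighlight>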
=== Example ===
The [[gamma distribution]] with scale parameter {{mvar|θ}} and shape parameter {{mvar|k}} has the characteristic function
: <math>(1 - \theta it)^{-k}.</math>
Now suppose that we have
: <math> X \sim \Gamma(k_1,\theta) \mbox{ and } Y \sim \Gamma(k_2,\theta) </math>
with {{mvar|X}} and {{mvar|Y}} independent from each other, and we wish to know what the distribution of {{math|''X'' + ''Y''}} is. The characteristic functions are
: <math>\varphi_X(t)=(1 - \theta it)^{-k_1},\qquad \varphi_Y(t)=(1 - \theta it)^{-k_2},</math>
which by independence and the basic properties of characteristic function leads to
: <math>\varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=(1 - \theta it)^{-k_1}(1 - \theta it)^{-k_2}=(1 - \theta it)^{-(k_1+k_2)}.</math>
This is the characteristic function of the gamma distribution with scale parameter {{mvar|θ}} and shape parameter {{math|''k''<sub>1</sub> + ''k''<sub>2</sub>}}, and we therefore conclude
: <math>X+Y \sim \Gamma(k_1+k_2,\theta).</math>
The result can be extended to {{mvar|n}} independent, gamma-distributed random variables with the same scale parameter:
: <math>\forall i \in \{1,\ldots, n\} : X_i \sim \Gamma(k_i,\theta) \qquad \Rightarrow \qquad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^nk_i,\theta\right).</math>
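This closure property is easy to confirm by simulation. A minimal sketch (assuming NumPy and SciPy; the parameter values, sample size, and seed are arbitrary) compares a simulated sum of independent {{math|Γ(''k''<sub>1</sub>, ''θ'')}} and {{math|Γ(''k''<sub>2</sub>, ''θ'')}} variables with the {{math|Γ(''k''<sub>1</sub> + ''k''<sub>2</sub>, ''θ'')}} distribution via a Kolmogorov–Smirnov test:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
k1, k2, theta = 2.0, 3.5, 1.7

x = rng.gamma(shape=k1, scale=theta, size=100_000)
y = rng.gamma(shape=k2, scale=theta, size=100_000)

# X + Y should follow Gamma(k1 + k2, theta); compare the sample with that distribution.
result = stats.kstest(x + y, stats.gamma(a=k1 + k2, scale=theta).cdf)
print(result.statistic)                          # close to zero, consistent with the claim
</syntaxhighlight>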
== Entire characteristic functions ==
{{Expand section|date=December 2009}}
As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by [[analytic continuation]], in cases where this is possible.
== Related concepts ==
Related concepts include the [[moment-generating function]] and the [[probability-generating function]]. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
The characteristic function is closely related to the [[Fourier transform]]: the characteristic function of a probability density function {{math|''p''(''x'')}} is the [[complex conjugate]] of the [[continuous Fourier transform]] of {{math|''p''(''x'')}} (according to the usual convention; see [[Continuous Fourier transform#Other conventions|continuous Fourier transform – other conventions]]).
: <math>\varphi_X(t) = \langle e^{itX} \rangle = \int_{\mathbf{R}} e^{itx}p(x)\, dx = \overline{\left( \int_{\mathbf{R}} e^{-itx}p(x)\, dx \right)} = \overline{P(t)},</math>
where {{math|''P''(''t'')}} denotes the [[continuous Fourier transform]] of the probability density function {{math|''p''(''x'')}}. Likewise, {{math|''p''(''x'')}} may be recovered from {{math|''φ<sub>X</sub>''(''t'')}} through the inverse Fourier transform:
:<math>p(x) = \frac{1}{2\pi} \int_{\mathbf{R}} e^{itx} P(t)\, dt = \frac{1}{2\pi} \int_{\mathbf{R}} e^{itx} \overline{\varphi_X(t)}\, dt.</math>
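A hedged numerical sketch of this inversion (assuming NumPy and SciPy; the truncation of the integral and the grid are arbitrary choices) recovers the Laplace density from its characteristic function {{math|''e<sup>itμ</sup>''/(1 + ''b''<sup>2</sup>''t''<sup>2</sup>)}} by evaluating the inverse-transform integral with the trapezoidal rule:

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate

mu, b = 0.5, 1.0

def phi_laplace(t):
    return np.exp(1j * mu * t) / (1 + b**2 * t**2)

def laplace_pdf(x):
    return np.exp(-np.abs(x - mu) / b) / (2 * b)

# p(x) = (1 / 2pi) * int e^{-itx} phi(t) dt, truncated to [-T, T].
T, n = 400.0, 200_001
t = np.linspace(-T, T, n)

for x in (-1.0, 0.5, 2.0):
    integrand = np.real(np.exp(-1j * t * x) * phi_laplace(t))
    p = integrate.trapezoid(integrand, t) / (2 * np.pi)
    print(x, p, laplace_pdf(x))                  # the two density values agree closely
</syntaxhighlight>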
A result analogous to the one for [[generating function]]s holds for characteristic functions. Suppose that {{mvar|N}} is a non-negative integer-valued random variable, independent of the [[independent and identically distributed]] random variables {{math|''X''<sub>1</sub>, ''X''<sub>2</sub>, ...}}, and let {{math|1=''S<sub>N</sub>'' = ''X''<sub>1</sub> + ⋯ + ''X<sub>N</sub>''}}. When the {{math|''X<sub>i</sub>''}} are integer-valued with common [[probability-generating function]] {{math|''G<sub>X</sub>''}},
::<math>G_{S_N}(z) = G_N(G_X(z)),</math>
where {{math|''G<sub>N</sub>''}} is the probability-generating function of {{mvar|N}}. The analogous identity for characteristic functions holds without the integrality restriction:
::<math>\varphi_{S_N}(t) = G_N(\varphi_X(t)).</math>
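A minimal sketch of the random-sum identity (assuming NumPy; the Poisson–exponential example, sample size, and seed are arbitrary choices) compares the empirical characteristic function of a compound Poisson sum with {{math|''G<sub>N</sub>''(''φ<sub>X</sub>''(''t''))}}:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
n_sims = 100_000

# Simulate S_N = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Exp(1), all independent.
N = rng.poisson(lam, size=n_sims)
S = np.array([rng.exponential(size=n).sum() for n in N])

t = np.linspace(-2.0, 2.0, 9)
phi_S_empirical = np.exp(1j * np.outer(t, S)).mean(axis=1)

phi_X = 1.0 / (1.0 - 1j * t)                 # characteristic function of Exp(1)
phi_S_formula = np.exp(lam * (phi_X - 1.0))  # G_N(z) = exp(lam (z - 1)) for Poisson N
print(np.max(np.abs(phi_S_empirical - phi_S_formula)))   # Monte Carlo error only
</syntaxhighlight>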
=== Sources ===
{{refbegin}}
* {{Cite book |title=Linear and graphical models for the multivariate complex normal distribution |publisher=Springer-Verlag |year=1995 |isbn=978-0-387-94521-7 |series=Lecture Notes in Statistics 101 |___location=New York |last1=Andersen |first1=H.H. |first2=M. |last2=Højbjerre |first3=D. |last3=Sørensen |first4=P.S. |last4=Eriksen}}
* {{Cite book |last=Billingsley |first=Patrick |title=Probability and measure |publisher=John Wiley & Sons |year=1995 |isbn=978-0-471-00710-4 |edition=3rd}}
* {{Cite book |last1=Bisgaard |first1=T. M. |title=Characteristic functions and moment sequences |last2=Sasvári |first2=Z. |publisher=Nova Science |year=2000}}
* {{Cite book |last=Bochner |first=Salomon |title=Harmonic analysis and the theory of probability |publisher=University of California Press |year=1955}}
* {{Cite book |last=Cuppens |first=R. |url=https://archive.org/details/decompositionofm00cupp |title=Decomposition of multivariate probabilities |publisher=Academic Press |year=1975 |isbn=9780121994501 |url-access=registration}}
* {{Cite journal |last=Heathcote |first=C.R. |year=1977 |title=The integrated squared error estimation of parameters |journal=[[Biometrika]] |volume=64 |issue=2 |pages=255–264 |doi=10.1093/biomet/64.2.255}}
* {{Cite book |last=Lukacs |first=E. |title=Characteristic functions |publisher=Griffin |year=1970 |___location=London}}
* {{Cite book |last1=Kotz |first1=Samuel |title=Multivariate T Distributions and Their Applications |last2=Nadarajah |first2=Saralees |publisher=Cambridge University Press |year=2004 }}
* {{cite book |last1=Manolakis |first1=Dimitris G. |last2=Ingle |first2=Vinay K. |last3=Kogon |first3=Stephen M. |title=Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing |date=2005 |publisher=Artech House |isbn=978-1-58053-610-3 |url=https://books.google.com/books?id=3RQfAQAAIAAJ |ref={{sfnref|Statistical and Adaptive Signal Processing|2005}} |language=en}}
* {{cite book |last1=Oberhettinger |first1=Fritz |title=Fourier transforms of distributions and their inverses; a collection of tables. |date=1973 |publisher=Academic Press |___location=New York |isbn=9780125236508}}
* {{Cite journal |last1=Paulson |first1=A.S. |last2=Holcomb |first2=E.W. |last3=Leitch |first3=R.A. |year=1975 |title=The estimation of the parameters of the stable laws |journal=[[Biometrika]] |volume=62 |issue=1 |pages=163–170 |doi=10.1093/biomet/62.1.163}}
* {{Cite book |last=Pinsky |first=Mark |title=Introduction to Fourier analysis and wavelets |publisher=Brooks/Cole |year=2002 |isbn=978-0-534-37660-4}}
* {{Cite book |last=Sobczyk |first=Kazimierz |title=Stochastic differential equations |publisher=[[Kluwer Academic Publishers]] |year=2001 |isbn=978-1-4020-0345-5}}
* {{Cite journal |last=Wendel |first=J.G. |year=1961 |title=The non-absolute convergence of Gil-Pelaez' inversion integral |journal=The Annals of Mathematical Statistics |volume=32 |issue=1 |pages=338–339 |doi=10.1214/aoms/1177705164 |doi-access=free}}
* {{Cite journal |last=Yu |first=J. |year=2004 |title=Empirical characteristic function estimation and its applications |journal=Econometric Reviews |volume=23 |issue=2 |pages=93–1223 |doi=10.1081/ETC-120039605 |s2cid=9076760|url=https://ink.library.smu.edu.sg/context/soe_research/article/1357/viewcontent/SSRN_id553701.pdf }}
* {{Cite journal |last=Shephard |first=N. G. |year=1991a |title=From characteristic function to distribution function: A simple framework for the theory |url=https://ora.ox.ac.uk/objects/uuid:a4c3ad11-74fe-458c-8d58-6f74511a476c |journal=Econometric Theory |volume=7 |issue=4 |pages=519–529 |doi=10.1017/s0266466600004746 |s2cid=14668369}}
* {{Cite journal |last=Shephard |first=N. G. |year=1991b |title=Numerical integration rules for multivariate inversions |url=https://ora.ox.ac.uk/objects/uuid:da00666a-4790-4666-a54c-b81fc6fc49cb |journal= Journal of Statistical Computation and Simulation|volume=39 |issue=1–2 |pages=37–46 |doi=10.1080/00949659108811337}}
* {{Cite conference |last1=Ansari |first1=Abdul Fatir |last2=Scarlett |first2=Jonathan |last3=Soh |first3=Harold |year=2020 |title=A Characteristic Function Approach to Deep Implicit Generative Modeling |url=https://openaccess.thecvf.com/content_CVPR_2020/html/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative_Modeling_CVPR_2020_paper.html |pages=7478–7487 |book-title=Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020}}
* {{Cite conference |last1=Li |first1=Shengxi |last2=Yu |first2=Zeyang |last3=Xiang |first3=Min |last4=Mandic |first4=Danilo |year=2020 |title=Reciprocal Adversarial Learning via Characteristic Functions |url=https://proceedings.neurips.cc/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html |book-title=Advances in Neural Information Processing Systems 33 (NeurIPS 2020)}}
{{refend}}