{{Orphan|date=December 2023}}
'''Distributional data analysis''' is a branch of [[nonparametric statistics]] that is related to [[functional data analysis]]. It is concerned with random objects that are probability distributions, i.e., the statistical analysis of samples of random distributions where each atom of a sample is a distribution. One of the main challenges in distributional data analysis is that although the space of probability distributions is a convex space, it is not a vector space.
== Notation ==
Let <math>\mathcal{F}</math> be a space of distributions <math>\nu</math> and let <math>d</math> be a metric on <math>\mathcal{F}</math> so that <math>(\mathcal{F}, d)</math> forms a [[metric space]]. There are various metrics available for <math>d</math>.<ref>{{Cite book|last1=Deza|first1=M.M.|last2=Deza|first2=E.|title=Encyclopedia of distances|publisher=Springer|year=2013}}</ref>
For example, suppose <math>\nu_1, \; \nu_2 \in \mathcal{F}</math>, and let <math>f_1</math> and <math>f_2</math> be the density functions of <math>\nu_1</math> and <math>\nu_2</math>, respectively. The Fisher-Rao metric is defined as
<math display="block"> d_{FR}(f_1, f_2) = \arccos \left( \int_D \sqrt{f_1(x) f_2(x)} dx \right). </math> For univariate distributions, let <math>Q_1</math> and <math>Q_2</math> be the quantile functions of <math>\nu_1</math> and <math>\nu_2</math>. Denote the <math>L^p</math>-Wasserstein space as <math>\mathcal{W}_p</math>, which is the space of distributions with finite <math>p</math>-th moments. Then, for <math>\nu_1, \; \nu_2 \in \mathcal{W}_p</math>, the <math>L^p</math>-[[Wasserstein metric]] is defined as
<math display="block"> d_{W_p}(\nu_1, \nu_2) = \left( \int_0^1 |Q_1(s) - Q_2(s)|^p ds \right)^{1/p}. </math>
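As a numerical illustration (not taken from the sources above; the grids and the Gaussian examples are assumptions), both metrics can be approximated directly from their definitions and checked against closed forms: for two normal distributions with unit variance, <math>d_{W_2} = |m_1 - m_2|</math> and <math>d_{FR} = \arccos(e^{-(m_1-m_2)^2/8})</math>.

```python
import numpy as np
from statistics import NormalDist

def wasserstein(Q1, Q2, p=2, m=2000):
    """Midpoint-rule approximation of the L^p-Wasserstein metric
    from the two quantile functions Q1 and Q2."""
    s = (np.arange(m) + 0.5) / m
    q1 = np.array([Q1(u) for u in s])
    q2 = np.array([Q2(u) for u in s])
    return np.mean(np.abs(q1 - q2) ** p) ** (1 / p)

def fisher_rao(f1, f2, x):
    """arccos of the Bhattacharyya coefficient, via a Riemann sum
    of sqrt(f1 * f2) over the grid x."""
    bc = np.sum(np.sqrt(f1(x) * f2(x))) * (x[1] - x[0])
    return np.arccos(min(bc, 1.0))  # guard against numerical overshoot

# N(0,1) versus N(1,1): closed forms are d_W2 = 1, d_FR = arccos(exp(-1/8)).
f1 = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
f2 = lambda t: np.exp(-(t - 1)**2 / 2) / np.sqrt(2 * np.pi)
d_w2 = wasserstein(NormalDist(0, 1).inv_cdf, NormalDist(1, 1).inv_cdf)
d_fr = fisher_rao(f1, f2, np.linspace(-10, 11, 4001))
```

For these two Gaussians the quantile functions differ by the constant 1, so the Wasserstein approximation is exact up to floating point; the Fisher-Rao value is accurate to the grid resolution.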
== Mean and variance ==
=== Functional principal component analysis ===
[[Functional principal component analysis]] (FPCA) can be applied directly to the probability density functions.
By the Karhunen-Loève theorem, <math>f(t) = \mu(t) + \sum_{j=1}^\infty \xi_j \phi_j(t)</math>, where <math>\xi_j = \int_D [f(t) - \mu(t)] \phi_j(t) dt</math> are the functional principal components and <math>\{\lambda_j, \phi_j\}_{j=1}^\infty</math> are the eigenpairs of the covariance function of <math>f</math>.
=== Transformation FPCA ===
Assume the probability density functions <math>f</math> exist, and let <math>\mathcal{F}_f</math> be the space of density functions.
Transformation approaches introduce a continuous and invertible transformation <math>\Psi: \mathcal{F}_f \to \mathbb{H}</math>, where <math>\mathbb{H}</math> is a [[Hilbert space]] of functions. For instance, the log quantile density transformation or the centered log ratio transformation are popular choices.<ref>{{Cite journal|last1=Petersen|first1=A.|last2=Müller|first2=H.-G.|date=2016|title=Functional data analysis for density functions by transformation to a Hilbert space|journal=Annals of Statistics|volume=44|issue=1|pages=183–218|doi=10.1214/15-AOS1363|doi-access=free|arxiv=1601.02869}}</ref><ref>{{Cite journal|last1=van den Boogaart|first1=K.G.|last2=Egozcue|first2=J.J.|last3=Pawlowsky-Glahn|first3=V.|date=2014|title=Bayes Hilbert spaces|journal=Australian and New Zealand Journal of Statistics|volume=56|issue=2|pages=171–194|doi=10.1111/anzs.12074|s2cid=120612578 }}</ref>
For <math>f \in \mathcal{F}_f</math>, let <math>Y = \Psi(f)</math>, the transformed functional variable. The mean function <math>\mu_Y(t) = \mathbb{E}\left[Y(t)\right]</math> and the covariance function <math>G_Y(s,t) = \operatorname{Cov}(Y(s), Y(t))</math> are defined accordingly, and let <math>\{\lambda_j, \phi_j\}_{j=1}^\infty</math> be the eigenpairs of <math>G_Y(s,t)</math>. The Karhunen-Loève decomposition gives
<math>Y(t) = \mu_Y(t) + \sum_{j=1}^\infty \xi_j \phi_j(t)</math>, where <math>\xi_j = \int_D [Y(t) - \mu_Y(t)] \phi_j(t) dt</math>. Then, the <math>j</math>th transformation mode of variation is defined as<ref>{{Cite journal|last1=Petersen|first1=A.|last2=Müller|first2=H.-G.|date=2016|title=Functional data analysis for density functions by transformation to a Hilbert space|journal=Annals of Statistics|volume=44|issue=1|pages=183–218|doi=10.1214/15-AOS1363|doi-access=free|arxiv=1601.02869}}</ref>
<math>
g_{j}^{TF}(t, \alpha) = \Psi^{-1} \left( \mu_Y + \alpha \sqrt{\lambda_j}\phi_j \right)(t), \quad t \in D, \; \alpha \in [-A, A].
</math>
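A minimal sketch of transformation FPCA on a discretized grid (illustrative only; the Gaussian toy sample, grid, and sample size are assumptions, and the centered log-ratio transform is used for <math>\Psi</math>):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-4, 6, 200)                      # common grid D
dt = t[1] - t[0]

# Toy sample of densities: Gaussians with random means (an assumption).
means = rng.normal(1.0, 0.7, size=50)
F = np.exp(-(t[None, :] - means[:, None]) ** 2 / 2) / np.sqrt(2 * np.pi)
F /= F.sum(axis=1, keepdims=True) * dt           # renormalize on the grid

# Centered log-ratio transform Psi (densities floored to stay positive).
Y = np.log(np.maximum(F, 1e-12))
Y -= Y.mean(axis=1, keepdims=True)

# FPCA on the transformed data: eigendecomposition of the covariance.
mu_Y = Y.mean(axis=0)
lam, phi = np.linalg.eigh(np.cov(Y, rowvar=False))
lam1, phi1 = lam[-1], phi[:, -1]                 # leading eigenpair

def mode_of_variation(alpha):
    """Psi^{-1}(mu_Y + alpha sqrt(lam1) phi1): exponentiate, renormalize."""
    g = np.exp(mu_Y + alpha * np.sqrt(lam1) * phi1)
    return g / (g.sum() * dt)

g_plus = mode_of_variation(1.0)
```

Because the inverse transform exponentiates and renormalizes, every mode of variation produced this way is itself a valid density, which is the main practical advantage of the transformation approach over applying FPCA to the densities directly.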
Let the reference measure <math>\nu_0</math> be the Wasserstein mean <math>\mu_\oplus</math>.
Then, a ''principal geodesic subspace (PGS)'' of dimension <math>k</math> with respect to <math>\mu_\oplus</math> is a set <math>G_k = \operatorname{argmin}_{G \in \text{CG}_{\mu_\oplus, k}(\mathcal{W}_2)} K_{W_2}(G)</math>.<ref name="gpca1">{{Cite journal|last1=Bigot|first1=J.|last2=Gouet|first2=R.|last3=Klein|first3=T.|last4=López|first4=A.|date=2017|title=Geodesic PCA in the Wasserstein space by convex PCA|journal=Annales de l'Institut Henri Poincaré, Probabilités et Statistiques}}</ref>
Note that the tangent space <math>T_{\mu_\oplus}</math> is a subspace of <math>L^2_{\mu_\oplus}</math>, the Hilbert space of <math>{\mu_\oplus}</math>-square-integrable functions. Obtaining the PGS is equivalent to performing PCA in <math>L^2_{\mu_\oplus}</math> under the constraint of lying in a convex and closed subset.<ref name="gpca2"/> Therefore, a simple approximation of Wasserstein geodesic PCA is Log FPCA, which relaxes the geodesicity constraint, though alternative techniques have also been proposed.<ref name="gpca1"/><ref name="gpca2"/>
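For univariate distributions the log map at the Wasserstein mean can be evaluated on a quantile grid, so Log FPCA reduces to ordinary PCA of centered quantile functions. A sketch under assumed toy data (Gaussian quantile functions with random location and scale):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
s = (np.arange(200) + 0.5) / 200                 # quantile grid on (0, 1)
z = np.array([NormalDist().inv_cdf(u) for u in s])

# Toy sample: Gaussian quantile functions Q_i(s) = m_i + sd_i * z(s).
m = rng.normal(0.0, 1.0, (60, 1))
sd = rng.uniform(0.5, 2.0, (60, 1))
Q = m + sd * z

# Wasserstein mean: pointwise average of the quantile functions.
Q_mean = Q.mean(axis=0)

# Log maps at the mean, evaluated on the quantile grid.
V = Q - Q_mean
lam, phi = np.linalg.eigh(np.cov(V, rowvar=False))
phi1 = phi[:, -1]                                # leading tangent direction

# Rank-1 reconstruction; it stays a valid quantile function wherever the
# result is nondecreasing (this is the relaxed geodesicity constraint).
scores = V @ phi1
Q_hat = Q_mean + np.outer(scores, phi1)
```

In this two-factor toy model the leading component captures most of the variation, reflecting that location shifts dominate the scale variation here.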
== Distributional regression ==
=== Fréchet regression ===
Fréchet regression is a generalization of regression with responses taking values in a metric space and Euclidean predictors.<ref name="freg">{{Cite journal|last1=Petersen|first1=A.|last2=Müller|first2=H.-G.|date=2019|title=Fréchet regression for random objects with Euclidean predictors|journal=Annals of Statistics|volume=47|issue=2|pages=691–719|doi=10.1214/17-AOS1624 |doi-access=free|arxiv=1608.03012}}</ref><ref name="review">{{Cite journal|last1=Petersen|first1=A.|last2=Zhang|first2=C.|last3=Kokoszka|first3=P.|date=2022|title=Modeling probability density functions as data objects|journal=Econometrics and Statistics|volume=21|pages=159–178|doi=10.1016/j.ecosta.2021.04.004 |s2cid=236589040 }}</ref> Using the Wasserstein metric <math>d_{W_2}</math>, Fréchet regression models can be applied to distributional objects. The global Wasserstein-Fréchet regression model is defined as
{{NumBlk|::|<math display="block">\begin{align}
m_\oplus (x) &= \operatorname{argmin}_{\omega \in \mathcal{F}} \mathbb{E}\left[ s_G(X,x) d_{W_2}^2(\nu,\omega) \right], \\
s_G(X,x) &= 1 + (X-\mu)^\top \Sigma^{-1} (x-\mu),
\end{align}</math>|}}
where <math>\mu = \mathbb{E}[X]</math> and <math>\Sigma = \operatorname{Cov}(X)</math> are the mean vector and covariance matrix of the predictor <math>X</math>.
<math display="block">
\Gamma g(t) = \langle \beta(\cdot, t), g \rangle_{\omega_\oplus}, \quad t \in D, \; g \in T_{\omega_\oplus}, \quad \beta: D^2 \to \R.
</math>
Estimation of the regression operator is based on empirical estimators obtained from samples.<ref>{{Cite journal|last1=Chen|first1=Y.|last2=Lin|first2=Z.|last3=Müller|first3=H.-G.|date=2023|title=Wasserstein regression|journal=Journal of the American Statistical Association|volume=118|issue=542|pages=869–882|doi=10.1080/01621459.2021.1956937 |s2cid=219721275 }}</ref>
Also, the Fisher-Rao metric <math>d_{FR}</math> can be used in a similar fashion.<ref name="review"/><ref name="dai2022">{{Cite journal|last1=Dai|first1=X.|date=2022|title=Statistical inference on the Hilbert sphere with application to random densities|journal=Electronic Journal of Statistics|volume=16|issue=1|pages=700–736|doi=10.1214/21-EJS1942 |doi-access=free|arxiv=2101.00527}}</ref>
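The empirical global Wasserstein-Fréchet estimator admits a simple form for univariate responses: with the <math>W_2</math> metric, the weighted Fréchet mean is the weight-averaged quantile function. A sketch with assumed toy data (the grid, sample size, and the linear response model are illustrative choices, not from the cited papers; sorting is used here as a crude monotonicity correction, where isotonic regression would give the proper projection):

```python
import numpy as np
from statistics import NormalDist

def global_frechet_wasserstein(X, Q, x0):
    """Empirical global Frechet regression with the W_2 metric.

    X: (n, p) Euclidean predictors; Q: (n, m) response quantile
    functions on a common grid; x0: (p,) prediction point.
    """
    mu = X.mean(axis=0)
    Sigma = np.atleast_2d(np.cov(X, rowvar=False))
    w = 1 + (X - mu) @ np.linalg.inv(Sigma) @ (x0 - mu)  # s_G weights
    return np.sort((w[:, None] * Q).mean(axis=0))        # monotone fix

# Toy data: responses are N(2 * X_i, 1), stored as quantile functions
# Q_i(s) = 2 X_i + z(s) on a midpoint grid.
rng = np.random.default_rng(3)
s = (np.arange(200) + 0.5) / 200
z = np.array([NormalDist().inv_cdf(u) for u in s])
X = rng.normal(0.0, 1.0, (80, 1))
Q = 2 * X + z
q_hat = global_frechet_wasserstein(X, Q, np.array([1.0]))
```

Under this linear toy model the fitted quantile function at <math>x_0 = 1</math> is close to that of <math>N(2, 1)</math>, since the empirical weights reproduce linear trends exactly up to sampling error.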
== Hypothesis testing ==
On the other hand, the spherical autoregressive model (SAR) considers the Fisher-Rao metric.<ref>{{Cite journal|last1=Zhu|first1=C.|last2=Müller|first2=H.-G.|date=2023|title=Spherical autoregressive models, with application to distributional and compositional time series|journal=Journal of Econometrics|volume=239 |issue=2 |doi=10.1016/j.jeconom.2022.12.008 |doi-access=free|arxiv=2203.12783}}</ref> Following the settings of [[#Tests for the intrinsic mean]], let <math>x_t \in \mathcal{X}</math> with Fréchet mean <math>\mu_x</math>. Let <math>\theta_t = \arccos(\langle x_t, \mu_x \rangle )</math>, the geodesic distance between <math>x_t</math> and <math>\mu_x</math>. Define a rotation operator <math>Q_{x_t, \mu_x}</math> that rotates <math>x_t</math> to <math>\mu_x</math>. The spherical difference between <math>x_t</math> and <math>\mu_x</math> is represented as <math>R_t = x_t \ominus \mu_x = \theta_t Q_{x_t, \mu_x}</math>. Assuming that <math>R_t</math> is a stationary sequence with Fréchet mean <math>\mu_R</math>, the SAR(1) model is defined as
<math display="block">
R_t - \mu_R = \beta (R_{t-1} - \mu_R) + \epsilon_t,
</math>
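The recursion above has the same algebraic form as a scalar AR(1) model, so its autoregressive parameter can be illustrated with a simple least-squares fit. The following sketch suppresses the rotation-operator geometry and uses a scalar stand-in for the residual sequence (all numbers are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, mu_R, n = 0.6, 0.0, 500

# Scalar stand-in for the SAR(1) recursion on the residuals R_t.
R = np.empty(n)
R[0] = mu_R
for t in range(1, n):
    R[t] = mu_R + beta * (R[t - 1] - mu_R) + rng.normal(0.0, 0.1)

# Least-squares estimate of beta from lag-1 residual pairs.
u, v = R[:-1] - mu_R, R[1:] - mu_R
beta_hat = (u @ v) / (u @ u)
```

With a few hundred observations the lag-1 least-squares estimate recovers the true autoregressive coefficient to within sampling error.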