Distributional data analysis

'''Distributional data analysis''' is a branch of [[nonparametric statistics]] related to [[functional data analysis]]. It is concerned with random objects that are probability distributions, i.e., the statistical analysis of samples of random distributions, where each atom of a sample is a distribution. One of the main challenges in distributional data analysis is that, although the space of probability distributions is a convex space, it is not a [[vector space]].
 
== Notation ==
 
=== Functional principal component analysis ===
[[Functional principal component analysis]] (FPCA) can be directly applied to probability density functions.<ref>{{Cite journal|last1=Kneip|first1=A.|last2=Utikal|first2=K.J.|date=2001|title=Inference for density families using functional principal component analysis|journal=Journal of the American Statistical Association|volume=96|issue=454|pages=519–532|doi=10.1198/016214501753168235|s2cid=123524014 }}</ref> Consider a distribution process <math>\nu \sim \mathfrak{F}</math> and let <math>f</math> be the density function of <math>\nu</math>. Define the mean density function <math>\mu(t) = \mathbb{E}\left[f(t)\right]</math> and the covariance function <math>G(s,t) = \operatorname{Cov}(f(s), f(t))</math>, with orthonormal eigenfunctions <math>\{\phi_j\}_{j=1}^\infty</math> and eigenvalues <math>\{\lambda_j\}_{j=1}^\infty</math>.
 
By the Karhunen-Loève theorem,
<math display="block">f(t) = \mu(t) + \sum_{j=1}^\infty \xi_j \phi_j(t),</math>
where <math>\xi_j = \int_D [f(t) - \mu(t)] \phi_j(t) \, dt</math> are the functional principal components of <math>f</math>.
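In practice, the eigenpairs and principal components are estimated from a sample of densities evaluated on a common grid. The following is a minimal numerical sketch of this estimation via a Riemann-sum discretization of the covariance operator; the function name and setup are illustrative, not taken from the cited references.

```python
import numpy as np

def density_fpca(F, t, n_components=2):
    """Empirical FPCA for a sample of densities.

    F : (n, m) array; each row is one density evaluated on the uniform grid t.
    Returns the mean density, eigenvalues, eigenfunctions, and scores.
    """
    dt = t[1] - t[0]                      # uniform grid spacing
    mu = F.mean(axis=0)                   # estimate of the mean density mu(t)
    C = np.cov(F, rowvar=False)           # covariance surface G(s, t) on the grid
    evals, evecs = np.linalg.eigh(C)      # eigen-decomposition (ascending order)
    idx = np.argsort(evals)[::-1][:n_components]
    lam = evals[idx] * dt                 # eigenvalues of the integral operator
    phi = evecs[:, idx].T / np.sqrt(dt)   # eigenfunctions with integral of phi^2 = 1
    xi = (F - mu) @ phi.T * dt            # principal component scores xi_j
    return mu, lam, phi, xi
```

The `dt` factors convert matrix eigenpairs into eigenpairs of the covariance integral operator, so that the eigenfunctions are orthonormal in <math>L^2</math> rather than as Euclidean vectors.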
=== Transformation FPCA ===
Assume the probability density functions <math>f</math> exist, and let <math>\mathcal{F}_f</math> be the space of density functions.
Transformation approaches introduce a continuous and invertible transformation <math>\Psi: \mathcal{F}_f \to \mathbb{H}</math>, where <math>\mathbb{H}</math> is a [[Hilbert space]] of functions. Popular choices include the log quantile density transformation and the centered log-ratio transformation.<ref name="tfpca">{{Cite journal|last1=Petersen|first1=A.|last2=Müller|first2=H.-G.|date=2016|title=Functional data analysis for density functions by transformation to a Hilbert space|journal=Annals of Statistics|volume=44|issue=1|pages=183–218|doi=10.1214/15-AOS1363|doi-access=free|arxiv=1601.02869}}</ref><ref>{{Cite journal|last1=van den Boogaart|first1=K.G.|last2=Egozcue|first2=J.J.|last3=Pawlowsky-Glahn|first3=V.|date=2014|title=Bayes Hilbert spaces|journal=Australian and New Zealand Journal of Statistics|volume=56|issue=2|pages=171–194|doi=10.1111/anzs.12074|s2cid=120612578 }}</ref>

For <math>f \in \mathcal{F}_f</math>, let <math>Y = \Psi(f)</math> be the transformed functional variable. The mean function <math>\mu_Y(t) = \mathbb{E}\left[Y(t)\right]</math> and the covariance function <math>G_Y(s,t) = \operatorname{Cov}(Y(s), Y(t))</math> are defined accordingly, and let <math>\{\lambda_j, \phi_j\}_{j=1}^\infty</math> be the eigenpairs of <math>G_Y(s,t)</math>. The Karhunen-Loève decomposition gives
<math>Y(t) = \mu_Y(t) + \sum_{j=1}^\infty \xi_j \phi_j(t)</math>, where <math>\xi_j = \int_D [Y(t) - \mu_Y(t)] \phi_j(t) dt</math>. Then, the <math>j</math>th transformation mode of variation is defined as<ref name="tfpca"/>
<math>
g_{j}^{TF}(t, \alpha) = \Psi^{-1} \left( \mu_Y + \alpha \sqrt{\lambda_j}\phi_j \right)(t), \quad t \in D, \; \alpha \in [-A, A].
</math>
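A transformation mode of variation can be computed numerically by running FPCA on the transformed sample and mapping the perturbed mean back through <math>\Psi^{-1}</math>. The sketch below uses the centered log-ratio transform on a common uniform grid; the function names and simulated data are illustrative assumptions, not from the cited references.

```python
import numpy as np

def clr(f, t):
    """Centered log-ratio transform Psi: log density minus its average log."""
    dt = t[1] - t[0]
    logf = np.log(f)
    return logf - logf.sum() * dt / (t[-1] - t[0])

def clr_inverse(y, t):
    """Inverse transform: exponentiate and renormalize to a density."""
    dt = t[1] - t[0]
    g = np.exp(y - y.max())          # stabilize before exponentiating
    return g / (g.sum() * dt)

def transformation_mode(F, t, j=0, alpha=1.0):
    """j-th transformation mode of variation g_j^TF(., alpha) via clr + FPCA."""
    dt = t[1] - t[0]
    Y = np.array([clr(f, t) for f in F])   # transformed sample
    mu_Y = Y.mean(axis=0)
    C = np.cov(Y, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]
    lam_j = evals[order[j]] * dt           # j-th eigenvalue of G_Y
    phi_j = evecs[:, order[j]] / np.sqrt(dt)
    # Psi^{-1}(mu_Y + alpha * sqrt(lambda_j) * phi_j)
    return clr_inverse(mu_Y + alpha * np.sqrt(lam_j) * phi_j, t)
```

Because <math>\Psi^{-1}</math> always returns a valid density, the mode of variation stays inside the density space for any <math>\alpha</math>, which is the point of the transformation approach.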
=== Wasserstein geodesic PCA ===
Let the reference measure <math>\nu_0</math> be the Wasserstein mean <math>\mu_\oplus</math>.
Then, a ''principal geodesic subspace (PGS)'' of dimension <math>k</math> with respect to <math>\mu_\oplus</math> is a set <math>G_k = \operatorname{argmin}_{G \in \text{CG}_{\mu_\oplus, k}(\mathcal{W}_2)} K_{W_2}(G)</math>.<ref name="gpca1">{{Cite journal|last1=Bigot|first1=J.|last2=Gouet|first2=R.|last3=Klein|first3=T.|last4=López|first4=A.|date=2017|title=Geodesic PCA in the Wasserstein space by convex PCA|journal=Annales de l'Institut Henri Poincaré, Probabilités et Statistiques|volume=53|issue=1|pages=1–26|doi=10.1214/15-AIHP706|bibcode=2017AnIHP..53....1B |s2cid=49256652 |url=https://hal.archives-ouvertes.fr/hal-01978864/file/AIHP706.pdf }}</ref><ref name="gpca2">{{Cite journal|last1=Cazelles|first1=E.|last2=Seguy|first2=V.|last3=Bigot|first3=J.|last4=Cuturi|first4=M.|last5=Papadakis|first5=N.|date=2018|title=Geodesic PCA versus Log-PCA of histograms in the Wasserstein space|journal=SIAM Journal on Scientific Computing|volume=40|issue=2|pages=B429–B456|doi=10.1137/17M1143459 |bibcode=2018SJSC...40B.429C }}</ref>
 
Note that the tangent space <math>T_{\mu_\oplus}</math> is a subspace of <math>L^2_{\mu_\oplus}</math>, the Hilbert space of <math>\mu_\oplus</math>-square-integrable functions. Obtaining the PGS is equivalent to performing PCA in <math>L^2_{\mu_\oplus}</math> under the constraint of lying in a convex and closed subset.<ref name="gpca2"/> Therefore, a simple approximation of Wasserstein geodesic PCA is Log FPCA, which relaxes the geodesicity constraint, although alternative techniques have been suggested.<ref name="gpca1"/><ref name="gpca2"/>
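For one-dimensional distributions, the log map at the Wasserstein mean can be written in the quantile parametrization, in which Log FPCA reduces to ordinary PCA of the sample quantile functions. The following is a minimal sketch under that assumption; the function name and data setup are illustrative.

```python
import numpy as np

def log_fpca_quantile(Q, u, n_components=2):
    """Log FPCA for 1-D distributions in the Wasserstein space.

    Q : (n, m) array of quantile functions evaluated on a uniform grid u in (0,1).
    In the quantile parametrization the log map at the Wasserstein mean
    reduces to Q_i - Q_mean, so PCA on centered quantile functions
    approximates geodesic PCA with the geodesicity constraint dropped.
    """
    du = u[1] - u[0]
    Q_mean = Q.mean(axis=0)               # quantile function of the Wasserstein mean
    V = Q - Q_mean                        # tangent vectors at the mean
    C = np.cov(Q, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    idx = np.argsort(evals)[::-1][:n_components]
    lam = evals[idx] * du                 # eigenvalues of the covariance operator
    phi = evecs[:, idx].T / np.sqrt(du)   # orthonormal eigenfunctions
    scores = V @ phi.T * du               # principal component scores
    return Q_mean, lam, phi, scores
```

Unlike geodesic PCA, nothing here enforces that a projected point <math>Q_{\text{mean}} + \sum_j \xi_j \phi_j</math> remains nondecreasing, i.e., a valid quantile function; that convexity constraint is exactly what the relaxation discards.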
== Distributional regression ==
=== Fréchet regression ===
Fréchet regression is a generalization of regression with responses taking values in a metric space and Euclidean predictors.<ref name="freg">{{Cite journal|last1=Petersen|first1=A.|last2=Müller|first2=H.-G.|date=2019|title=Fréchet regression for random objects with Euclidean predictors|journal=Annals of Statistics|volume=47|issue=2|pages=691–719|doi=10.1214/17-AOS1624 |doi-access=free|arxiv=1608.03012}}</ref><ref name="review">{{Cite journal|last1=Petersen|first1=A.|last2=Zhang|first2=C.|last3=Kokoszka|first3=P.|date=2022|title=Modeling probability density functions as data objects|journal=Econometrics and Statistics|volume=21|pages=159–178|doi=10.1016/j.ecosta.2021.04.004 |s2cid=236589040 }}</ref> Using the Wasserstein metric <math>d_{W_2}</math>, Fréchet regression models can be applied to distributional objects. The global Wasserstein-Fréchet regression model is defined as
<math display="block">\begin{align}
m_\oplus (x) &= \operatorname{argmin}_{\omega \in \mathcal{F}} \mathbb{E}\left[ s_G(X,x) d_{W_2}^2(\nu,\omega) \right], \\
s_G(X,x) &= 1 + (X-\mu)^\top \Sigma^{-1} (x-\mu),
\end{align}</math>
where <math>\mu = \mathbb{E}[X]</math> and <math>\Sigma = \operatorname{Var}(X)</math> are the mean and covariance of the predictor <math>X</math>.<ref name="freg"/>
For distribution-on-distribution regression, where predictor and response are both random distributions, the regression operator <math>\Gamma</math> on the tangent space admits a kernel representation
<math display="block">
\Gamma g(t) = \langle \beta(\cdot, t), g \rangle_{\omega_\oplus}, \; t \in D, \; g \in T_{\omega_\oplus}, \; \beta: D^2 \to \R.</math>
Estimation of the regression operator is based on empirical estimators obtained from samples.<ref>{{Cite journal|last1=Chen|first1=Y.|last2=Lin|first2=Z.|last3=Müller|first3=H.-G.|date=2023|title=Wasserstein regression|journal=Journal of the American Statistical Association|volume=118|issue=542|pages=869–882|doi=10.1080/01621459.2021.1956937 |s2cid=219721275 }}</ref>
Also, the Fisher-Rao metric <math>d_{FR}</math> can be used in a similar fashion.<ref name="review"/><ref name="dai2022">{{Cite journal|last1=Dai|first1=X.|date=2022|title=Statistical inference on the Hilbert sphere with application to random densities|journal=Electronic Journal of Statistics|volume=16|issue=1|pages=700–736|doi=10.1214/21-EJS1942 |doi-access=free|arxiv=2101.00527}}</ref>
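For one-dimensional distributional responses, the global Wasserstein-Fréchet fit can be computed in the quantile parametrization: the fitted object is the <math>s_G</math>-weighted average of the sample quantile functions, projected back onto monotone functions. The sketch below is illustrative (names are assumptions); the isotonic projection uses a plain pool-adjacent-violators pass.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: L^2 projection onto nondecreasing sequences."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1.0)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2)); wts.append(w1 + w2)
    return np.repeat(vals, [int(w) for w in wts])

def global_frechet_regression(X, Q, x_new):
    """Fitted quantile function of the global Wasserstein-Frechet model at x_new.

    X : (n, p) Euclidean predictors; Q : (n, m) response quantile functions.
    """
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / len(X)                       # covariance of X
    # weights s_G(X_i, x) = 1 + (X_i - mu)^T Sigma^{-1} (x - mu)
    s = 1.0 + Xc @ np.linalg.solve(Sigma, x_new - X.mean(axis=0))
    Q_hat = s @ Q / s.sum()                          # weighted mean in quantile space
    return pava(Q_hat)                               # enforce monotonicity
```

The weights <math>s_G</math> can be negative, so the unconstrained weighted average need not be a valid quantile function; the final projection restores monotonicity.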
 
== Hypothesis testing ==
 
On the other hand, the spherical autoregressive (SAR) model considers the Fisher-Rao metric.<ref>{{Cite journal|last1=Zhu|first1=C.|last2=Müller|first2=H.-G.|date=2023|title=Spherical autoregressive models, with application to distributional and compositional time series|journal=Journal of Econometrics|volume=239 |issue=2 |doi=10.1016/j.jeconom.2022.12.008 |doi-access=free|arxiv=2203.12783}}</ref> Following the settings of [[#Tests for the intrinsic mean|the tests for the intrinsic mean]], let <math>x_t \in \mathcal{X}</math> with Fréchet mean <math>\mu_x</math>. Let <math>\theta = \arccos(\langle x_t, \mu_x \rangle)</math>, the geodesic distance between <math>x_t</math> and <math>\mu_x</math>. Define a rotation operator <math>Q_{x_t, \mu_x}</math> that rotates <math>x_t</math> to <math>\mu_x</math>. The spherical difference between <math>x_t</math> and <math>\mu_x</math> is represented as <math>R_t = x_t \ominus \mu_x = \theta Q_{x_t, \mu_x}</math>. Assuming that <math>R_t</math> is a stationary sequence with Fréchet mean <math>\mu_R</math>, the SAR(1) model is defined as
<math display="block">
R_t - \mu_R = \beta (R_{t-1} - \mu_R) + \epsilon_t,