Probabilistic learning on manifolds

The '''probabilistic learning on manifolds''' (PLoM)<ref>[https://www.aimsciences.org/article/doi/10.3934/fods.2020013 Probabilistic learning on manifolds]</ref> is a [[machine learning]] technique for constructing learned datasets from a given small dataset.
 
Originally proposed by [[Christian Soize]] and [[Roger Ghanem]]<ref>[https://www.sciencedirect.com/science/article/pii/S0021999116301899 Data-driven probability concentration and sampling on manifold]</ref> in 2016, the methodology has been gaining ground in several [[machine learning]] applications<ref>[https://www.sciencedirect.com/science/article/abs/pii/S0045782521001134 Probabilistic learning on manifolds constrained by nonlinear partial differential equations for small datasets]</ref> in [[computational science and engineering]], especially in [[inverse problems]], [[optimization]], and [[uncertainty quantification]], where evaluating a computational model is often extremely costly. In this setting, the expensive computational model is first run a limited number of times to generate a small initial dataset. PLoM then learns from this initial dataset and generates a much larger secondary dataset whose probability distribution emulates that of the initial one.
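The workflow above (small expensive dataset in, large statistically consistent dataset out) can be sketched in a few lines. This is an illustrative stand-in only: actual PLoM relies on diffusion maps and a manifold-projected Itô stochastic sampler, whereas the sketch below substitutes a simple PCA reduction plus Gaussian kernel resampling; all variable names, the dataset, and the bandwidth choice are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small "expensive" initial dataset: N points in dimension n
# (here synthetic; in practice each row would be one costly model run).
N, n = 50, 3
X = rng.normal(size=(N, n)) @ np.array([[1.0, 0.5, 0.0],
                                        [0.0, 1.0, 0.3],
                                        [0.0, 0.0, 0.2]])

# Stage 1: center and whiten with PCA (PLoM's first stage is a similar
# normalization/reduction of the initial dataset).
mean = X.mean(axis=0)
Xc = X - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
H = (Xc @ Vt.T) / s * np.sqrt(N)   # reduced, whitened coordinates

# Stage 2: generate a large secondary dataset by Gaussian kernel
# smoothing of the empirical distribution -- a crude stand-in for
# PLoM's manifold-constrained stochastic sampler.
M = 5000                                        # learned-dataset size
bw = (4.0 / (N * (n + 2))) ** (1.0 / (n + 4))   # Silverman-type bandwidth
idx = rng.integers(0, N, size=M)
H_new = H[idx] + bw * rng.normal(size=(M, n))

# Stage 3: map the generated points back to the original space.
X_new = (H_new * s / np.sqrt(N)) @ Vt + mean
```

The secondary dataset `X_new` has `M` rows but, by construction, approximately reproduces the mean and covariance structure of the initial dataset `X`, which is the role the learned dataset plays in downstream inverse-problem, optimization, and uncertainty-quantification tasks.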