Proper generalized decomposition

{{Machine learning|Dimensionality reduction}}
The '''proper generalized decomposition''' ('''PGD''') is an [[iterative method|iterative]] [[numerical method]] for solving [[boundary value problem]]s (BVPs), that is, [[partial differential equation]]s constrained by a set of boundary conditions, such as [[Poisson's equation]] or [[Laplace's equation]].
 
The PGD algorithm computes an approximation of the solution of the BVP by successive enrichment. This means that, in each iteration, a new component (or ''mode'') is computed and added to the approximation. In principle, the more modes obtained, the closer the approximation is to its theoretical solution. Unlike [[Proper orthogonal decomposition|POD]] principal components, PGD modes are not necessarily [[orthogonal]] to each other.
 
By selecting only the most relevant PGD modes, a [[reduced order model]] of the solution is obtained. Because of this, PGD is considered a [[dimensionality reduction]] algorithm.
# a discretization of the [[Domain of a function|___domain]] in the style of the [[finite element method]],
# the assumption that the solution can be approximated as a separate representation and
# a numerical [[greedy algorithm]] to find the solution.<ref>{{Cite journal |author1=Amine Ammar |author2=Béchir Mokdad |author3=Francisco Chinesta |author4=Roland Keunings |date=2006 |title=A New Family of Solvers for Some Classes of Multidimensional Partial Differential Equations Encountered in Kinetic Theory Modeling of Complex Fluids |url=https://hal.archives-ouvertes.fr/hal-01004909/document |journal=Journal of Non-Newtonian Fluid Mechanics |volume=139 |issue=3 |page=153 |doi=10.1016/j.jnnfm.2006.07.007 |bibcode=2006JNNFM.139..153A}}</ref><ref>{{Cite journal |author1=Amine Ammar |author2=Béchir Mokdad |author3=Francisco Chinesta |author4=Roland Keunings |date=2007 |title=A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modelling of complex fluids. Part II: Transient simulation using space-time separated representations |url=https://hal.archives-ouvertes.fr/hal-01004910/document |journal=Journal of Non-Newtonian Fluid Mechanics |volume=144 |issue=2 |page=98 |doi=10.1016/j.jnnfm.2007.03.009 |bibcode=2007JNNFM.144...98A}}</ref>
 
=== Variational formulation ===
In the Proper Generalized Decomposition method, the [[variational formulation]] involves translating the problem into a format where the solution can be approximated by [[Optimization problem|minimizing (or sometimes maximizing)]] a [[Functional (mathematics)|functional]]. A functional is a scalar quantity that depends on a function, which, in this case, represents the problem.
 
The most commonly implemented variational formulation in PGD is the [[Bubnov-Galerkin method]].<ref name=":0">{{Cite thesis |title=Proper generalised decompositions: theory and applications |url=http://orca.cf.ac.uk/73515/ |publisher=Cardiff University |date=2015-04-09 |degree=phd |language=en |first=Thomas Lloyd David |last=Croft}}</ref><ref>{{Cite book |last1=Chinesta |first1=Francisco |url=https://www.springer.com/gp/book/9783319028644 |title=The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer |last2=Keunings |first2=Roland |last3=Leygue |first3=Adrien |date=2014 |publisher=Springer International Publishing |isbn=978-3-319-02864-4 |series=SpringerBriefs in Applied Sciences and Technology |language=en}}</ref> This method is chosen for its ability to provide an approximate solution to complex problems, such as those described by [[Partial differential equation|partial differential equations]] (PDEs). In the Bubnov-Galerkin approach, the idea is to project the problem onto a space spanned by a finite number of [[Basis function|basis functions]]. These basis functions are chosen to approximate the solution space of the problem.
 
In the Bubnov-Galerkin method, an approximate solution is sought that satisfies the integral form of the PDEs over the ___domain of the problem, rather than the differential equations directly. In doing so, the method transforms the problem into finding the coefficients that best fit this integral equation in the chosen function space.
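
The following minimal sketch illustrates the idea on a one-dimensional model problem chosen purely for illustration (it is not taken from the cited references): the equation <math>-u'' = f</math> on (0, 1) with homogeneous Dirichlet conditions is projected onto a small sine basis, which serves as both trial and test space, and the Galerkin coefficients are obtained from the resulting linear system.

<syntaxhighlight lang="python">
import numpy as np

# Minimal 1D Bubnov-Galerkin sketch (illustrative model problem, not from the cited
# references): solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, using the same
# sine functions as trial and test space (which is what makes it Bubnov-Galerkin).

def galerkin_sine(f, n_modes=10, n_quad=401):
    x, dx = np.linspace(0.0, 1.0, n_quad, retstep=True)
    w = np.full(n_quad, dx)
    w[0] = w[-1] = dx / 2.0                      # trapezoidal quadrature weights
    k = np.arange(1, n_modes + 1)
    phi = np.sin(np.pi * np.outer(k, x))         # basis functions phi_k(x)
    dphi = np.pi * k[:, None] * np.cos(np.pi * np.outer(k, x))
    # Weak (integral) form: A[j, i] = integral(phi_j' * phi_i'), b[j] = integral(f * phi_j)
    A = (dphi * w) @ dphi.T
    b = phi @ (w * f(x))
    c = np.linalg.solve(A, b)                    # Galerkin coefficients
    return lambda xs: np.sin(np.pi * np.outer(np.atleast_1d(xs), k)) @ c

# f(x) = pi^2 sin(pi x) has exact solution u(x) = sin(pi x), so u(0.5) should be close to 1.
u = galerkin_sine(lambda x: np.pi ** 2 * np.sin(np.pi * x))
print(u(0.5))
</syntaxhighlight>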
 
While the Bubnov-Galerkin method is prevalent, other variational formulations are also used in PGD,<ref>{{Cite web |last=Aguado |first=José Vicente |date=18 Nov 2018 |title=Advanced strategies for the separated formulation of problems in the Proper Generalized Decomposition framework |url=https://hal.archives-ouvertes.fr/tel-01926078/document}}</ref><ref name=":0" /> depending on the specific requirements and characteristics of the problem, such as:
 
* '''[[Petrov-Galerkin method|Petrov-Galerkin Method]]''': This method is similar to the Bubnov-Galerkin approach but differs in the choice of test functions. In the Petrov-Galerkin method, the test functions (used to project the residual of the differential equation) are different from the trial functions (used to approximate the solution). This can lead to improved stability and accuracy for certain types of problems.<ref>{{Cite thesis |title=Petrov-Galerkin Proper Generalized Decomposition strategies for convection-diffusion problems |url=https://upcommons.upc.edu/handle/2117/343160 |publisher=Universitat Politècnica de Catalunya |date=2020-06-22 |degree=Master thesis |first=Rafel |last=Perelló i Ribas}}</ref>
* '''[[Collocation method|Collocation Method]]''': In collocation methods, the differential equation is satisfied at a finite number of points in the ___domain, known as collocation points. This approach can be simpler and more direct than the integral-based methods like Galerkin's, but it may also be less stable for some problems.
* '''[[Least-squares method|Least Squares Method]]''': This approach involves minimizing the square of the residual of the differential equation over the ___domain. It is particularly useful when dealing with problems where traditional methods struggle with stability or convergence (a small sketch of this formulation is given after the list).
* '''[[Mixed finite element method|Mixed Finite Element Method]]''': In mixed methods, additional variables (such as fluxes or gradients) are introduced and approximated along with the primary variable of interest. This can lead to more accurate and stable solutions for certain problems, especially those involving incompressibility or conservation laws.
* '''[[Discontinuous Galerkin method|Discontinuous Galerkin Method]]''': This is a variant of the Galerkin method where the solution is allowed to be discontinuous across element boundaries. This method is particularly useful for problems with sharp gradients or discontinuities.
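
As a point of comparison with the Galerkin sketch above, the following illustration uses the same assumed model problem and sine basis but applies the least-squares formulation: instead of projecting onto test functions, the squared residual of the strong form of the equation is minimized over the coefficients, which leads to a set of normal equations.

<syntaxhighlight lang="python">
import numpy as np

# Least-squares sketch on the same illustrative model problem as above:
# minimize integral((u_N''(x) + f(x))^2) over the coefficients of
# u_N(x) = sum_k c_k sin(k pi x); the stationarity conditions are normal equations.

def least_squares_sine(f, n_modes=10, n_quad=401):
    x, dx = np.linspace(0.0, 1.0, n_quad, retstep=True)
    w = np.full(n_quad, dx)
    w[0] = w[-1] = dx / 2.0                          # trapezoidal quadrature weights
    k = np.arange(1, n_modes + 1)
    d2phi = -(np.pi * k[:, None]) ** 2 * np.sin(np.pi * np.outer(k, x))  # phi_k''(x)
    A = (d2phi * w) @ d2phi.T                        # normal-equation matrix
    b = -(d2phi * w) @ f(x)                          # right-hand side of the normal equations
    c = np.linalg.solve(A, b)
    return lambda xs: np.sin(np.pi * np.outer(np.atleast_1d(xs), k)) @ c

u = least_squares_sine(lambda x: np.pi ** 2 * np.sin(np.pi * x))
print(u(0.5))                                        # again close to 1
</syntaxhighlight>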
 
=== Domain discretization ===
=== Separate representation ===
PGD assumes that the solution '''u''' of a (multidimensional) problem can be approximated as a separate representation of the form
<math display="block"> \mathbf{u} \approx \mathbf{u}^N(x_1, x_2, \ldots, x_d) = \sum_{i=1}^N \mathbf{X_1}_i(x_1) \cdot \mathbf{X_2}_i(x_2) \cdots \mathbf{X_d}_i(x_d), </math>
 
where the number of addends ''N'' and the functional products '''X<sub>1</sub>'''(''x''<sub>1</sub>), '''X<sub>2</sub>'''(''x''<sub>2</sub>), ..., '''X<sub>d</sub>'''(''x''<sub>d</sub>), each depending on a variable (or variables), are unknown beforehand.
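
The following sketch, using arbitrary placeholder values for the one-dimensional factors, illustrates what a separate representation stores and how the multidimensional field can be reconstructed from it: each mode keeps one one-dimensional array per coordinate, so the storage grows with the sum of the grid sizes rather than with their product.

<syntaxhighlight lang="python">
import numpy as np

# Illustration of the data structure only (the factors here are random placeholders,
# not the result of a PGD solve): u^N(x1, ..., xd) = sum_i X1_i(x1) * ... * Xd_i(xd).

rng = np.random.default_rng(0)
grids = [np.linspace(0, 1, 30), np.linspace(0, 2, 40), np.linspace(-1, 1, 50)]  # x1, x2, x3
N = 4                                                   # number of modes (addends)
modes = [[rng.standard_normal(g.size) for g in grids] for _ in range(N)]

def evaluate_full(modes, grids):
    """Reconstruct u^N on the full tensor grid from the one-dimensional factors."""
    u = np.zeros([g.size for g in grids])
    for factors in modes:                               # one addend per mode
        term = factors[0]
        for factor in factors[1:]:
            term = np.multiply.outer(term, factor)      # outer product X1_i x X2_i x ...
        u += term
    return u

u = evaluate_full(modes, grids)
print(u.shape)   # (30, 40, 50), yet only 30 + 40 + 50 values are stored per mode
</syntaxhighlight>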
 
=== Greedy algorithm ===
The solution is sought by applying a [[greedy algorithm]], usually the [[fixed point algorithm]], to the [[weak formulation]] of the problem. For each iteration ''i'' of the algorithm, a ''mode'' of the solution is computed. Each mode consists of a set of numerical values of the functional products '''X<sub>1</sub>'''(''x''<sub>1</sub>), ..., '''X<sub>d</sub>'''(''x''<sub>d</sub>), which ''enrich'' the approximation of the solution. Due to the greedy nature of the algorithm, the term 'enrich' is used rather than 'improve', since some modes may actually worsen the approximation. The number of computed modes required to obtain an approximation of the solution below a certain error threshold depends on the stopping criterion of the iterative algorithm. For the same reason, and unlike [[Proper orthogonal decomposition|POD]], PGD modes are not necessarily [[orthogonal]] to each other.
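
A minimal sketch of this enrichment procedure is given below under simplifying assumptions (it is not taken from the cited references): the finite-difference 2D Poisson problem on the unit square is written as the matrix equation <math>AU + UA = F</math>, each mode is a rank-one product of a vector in ''x'' and a vector in ''y'', and a fixed number of alternating-direction (fixed point) iterations is performed per mode. It illustrates the greedy, fixed-point structure rather than providing a general-purpose implementation.

<syntaxhighlight lang="python">
import numpy as np

# PGD-style greedy enrichment sketch for the finite-difference 2D Poisson problem
# -Laplace(u) = f on the unit square with homogeneous Dirichlet conditions, which on
# a uniform interior grid reads A @ U + U @ A = F with A the 1D three-point Laplacian.
# Each mode is a rank-one term outer(r, s), computed by an alternating fixed point.

def laplacian_1d(n, h):
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h ** 2

n = 60
h = 1.0 / (n + 1)
x = y = np.linspace(h, 1.0 - h, n)                       # interior grid points
A = laplacian_1d(n, h)
F = np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * y))   # separable source term

U = np.zeros((n, n))                                     # current PGD approximation
for mode in range(10):                                   # greedy enrichment loop
    E = F - A @ U - U @ A                                # residual before adding the new mode
    r = np.random.default_rng(mode).standard_normal(n)   # initial guess for the new mode
    s = np.ones(n)
    for _ in range(50):                                  # alternating (fixed point) iterations
        r = np.linalg.solve((s @ s) * A + (s @ A @ s) * np.eye(n), E @ s)
        s = np.linalg.solve((r @ r) * A + (r @ A @ r) * np.eye(n), E.T @ r)
    U += np.outer(r, s)                                  # enrich the approximation
    if np.linalg.norm(np.outer(r, s)) < 1e-8 * np.linalg.norm(U):
        break                                            # stopping criterion on the new mode

print(np.linalg.norm(A @ U + U @ A - F) / np.linalg.norm(F))  # relative residual
</syntaxhighlight>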
 
== Features ==
 
Therefore, PGD enables parametric problems to be recast within a multidimensional framework by setting the parameters of the problem as extra coordinates:
<math display="block"> \mathbf{u} \approx \mathbf{u}^N(x_1, \ldots, x_d; k_1, \ldots, k_p) = \sum_{i=1}^N \mathbf{X_1}_i(x_1) \cdots \mathbf{X_d}_i(x_d) \cdot \mathbf{K_1}_i(k_1) \cdots \mathbf{K_p}_i(k_p),</math>
 
where a series of functional products '''K<sub>1</sub>'''(''k''<sub>1</sub>), '''K<sub>2</sub>'''(''k''<sub>2</sub>), ..., '''K<sub>p</sub>'''(''k''<sub>p</sub>), each depending on a parameter (or parameters), has been incorporated into the equation.
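
A practical consequence is that, once the modes have been computed offline, evaluating the solution for a new parameter value is inexpensive. The sketch below uses hypothetical precomputed modes (random placeholders standing in for the result of an actual offline PGD solve) to show this online evaluation stage for a single parameter.

<syntaxhighlight lang="python">
import numpy as np

# Online evaluation sketch: the modes below are random placeholders, standing in for
# one-dimensional spatial modes X_i(x) and parameter modes K_i(k) computed offline.

x_grid = np.linspace(0.0, 1.0, 100)       # spatial grid
k_grid = np.linspace(0.5, 2.0, 20)        # parameter grid (treated as an extra coordinate)
N = 3                                     # number of PGD modes

rng = np.random.default_rng(1)
X_modes = rng.standard_normal((N, x_grid.size))   # X_i(x), assumed precomputed offline
K_modes = rng.standard_normal((N, k_grid.size))   # K_i(k), assumed precomputed offline

def evaluate(k_value):
    """Evaluate u^N(x; k_value) from the stored one-dimensional modes."""
    K_at_k = np.array([np.interp(k_value, k_grid, K_modes[i]) for i in range(N)])
    return K_at_k @ X_modes               # sum_i K_i(k) * X_i(x), sampled on x_grid

u_k = evaluate(1.3)                       # no new boundary value problem is solved here
print(u_k.shape)                          # (100,)
</syntaxhighlight>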
 
== Sparse Subspace Learning ==
{{Confusing section|date=April 2021}}
The [https://hal.archives-ouvertes.fr/hal-01925360/document Sparse Subspace Learning] (SSL) method leverages hierarchical collocation to approximate the numerical solution of parametric models. Compared to traditional projection-based reduced order modeling, the use of collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This allows the low-dimensional structure of the parametric solution subspace to be recovered, while also learning the functional dependency on the parameters in explicit form. A sparse low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. Non-intrusiveness makes this approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms.<ref>{{Cite journal |last1=Borzacchiello |first1=Domenico |last2=Aguado |first2=José V. |last3=Chinesta |first3=Francisco |date=April 2019 |title=Non-intrusive Sparse Subspace Learning for Parametrized Problems |url=http://link.springer.com/10.1007/s11831-017-9241-4 |journal=Archives of Computational Methods in Engineering |language=en |volume=26 |issue=2 |pages=303–326 |doi=10.1007/s11831-017-9241-4 |s2cid=126121268 |issn=1134-3060 |hdl=10985/18435 |hdl-access=free}}</ref>
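
The simplified sketch below is not the hierarchical collocation scheme of SSL itself; it only illustrates the non-intrusive ingredient, namely that snapshots of a black-box deterministic solver sampled over the parametric space are enough to expose the low-dimensional structure of the solution subspace (here through a truncated singular value decomposition).

<syntaxhighlight lang="python">
import numpy as np

# Non-intrusive illustration (not the SSL hierarchical collocation algorithm): only
# the output of a deterministic black-box solver is sampled over the parameter space,
# and a truncated SVD of the snapshots reveals the low-dimensional solution subspace.

x = np.linspace(0.0, 1.0, 200)

def black_box_solver(k):
    """Stand-in for a deterministic solver returning the solution field for parameter k."""
    return np.sin(np.pi * x) / (1.0 + k) + 0.1 * k * np.sin(3 * np.pi * x)

k_samples = np.linspace(0.1, 2.0, 15)                  # sampled parameter points
snapshots = np.column_stack([black_box_solver(k) for k in k_samples])

U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
rank = int(np.sum(sigma > 1e-10 * sigma[0]))           # numerical rank of the snapshot set
print(rank)   # 2: these parametric solutions span a two-dimensional subspace
</syntaxhighlight>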
 
== References ==