In [[control theory]], a '''control-Lyapunov function (CLF)'''<ref name="Isidori">{{cite book
| author = Isidori, A.
| year = 1995
| title = Nonlinear Control Systems
| publisher = Springer
| isbn = 978-3-540-19916-8
}}</ref><ref>{{cite book
|last=Freeman
|first=Randy A.
|author2=Petar V. Kokotović
|title=Robust Nonlinear Control Design
|chapter=Robust Control Lyapunov Functions
|chapter-url=https://link.springer.com/chapter/10.1007/978-0-8176-4759-9_3
|publisher=Birkhäuser
|year=2008|pages=33–63
|doi=10.1007/978-0-8176-4759-9_3
|edition=illustrated, reprint
|isbn=978-0-8176-4758-2
|url=https://books.google.com/books?id=_eTb4Yl0SOEC
|accessdate=2009-03-04}}</ref><ref>{{cite book
| last = Khalil | first = Hassan
| year = 2015 | title = Nonlinear Control
| publisher = Pearson | isbn = 9780133499261}}</ref><ref name="Sontag (1998)">{{cite book | last = Sontag | first = Eduardo | author-link = Eduardo D. Sontag | year = 1998 | title = Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition | publisher = Springer | url = http://www.sontaglab.org/FTPDIR/sontag_mathematical_control_theory_springer98.pdf | isbn = 978-0-387-98489-6 }}</ref> is an extension of the idea of [[Lyapunov function]] <math>V(x)</math> to [[Control system|systems with control inputs]]. The ordinary Lyapunov function is used to test whether a [[dynamical system]] is [[Lyapunov stability|''(Lyapunov) stable'']] or (more restrictively) ''asymptotically stable''. Lyapunov stability means that if the system starts in a state <math>x \ne 0</math> in some ___domain ''D'', then the state will remain in ''D'' for all time. For ''asymptotic stability'', the state is also required to converge to <math>x = 0</math>. A control-Lyapunov function is used to test whether a system is [[Controllability#Stabilizability|''asymptotically stabilizable'']], that is, whether for any state ''x'' there exists a control <math>u(x,t)</math> such that the system can be brought to the zero state asymptotically by applying the control ''u''.
 
The theory and application of control-Lyapunov functions were developed by [[Zvi Artstein]] and [[Eduardo D. Sontag|E. D. Sontag]] in the 1980s and 1990s.
==Definition==

Consider an [[Autonomous system (mathematics)|autonomous dynamical]] system with inputs
{{NumBlk|:|<math>\dot{x}=f(x,u)</math>|{{EquationRef|1}}}}
where <math>x\in\mathbb{R}^n</math> is the state vector and <math>u\in\mathbb{R}^m</math> is the control vector. Suppose our goal is to drive the system to an equilibrium <math>x_* \in \mathbb{R}^n</math> from every initial state in some ___domain <math>D\subset\mathbb{R}^n</math>. Without loss of generality, suppose the equilibrium is at <math>x_*=0</math> (an equilibrium <math>x_*\neq 0</math> can be translated to the origin by a change of variables).
 
'''Definition.''' A control-Lyapunov function (CLF) is a function <math>V : D \to \mathbb{R}</math> that is [[Differentiable function#continuously differentiable|continuously differentiable]] and positive-definite (that is, <math>V(x)</math> is positive for all <math>x\in D</math> except at <math>x=0</math>, where it is zero), and such that for all <math>x \in \mathbb{R}^n \ (x \neq 0)</math> there exists <math>u\in \mathbb{R}^m</math> with
:<math>
\dot{V}(x,u) := \langle \nabla V(x), f(x,u)\rangle < 0,
</math>
where <math>\langle u, v\rangle</math> denotes the [[inner product]] of <math>u, v \in\mathbb{R}^n</math>.
 
The last condition is the key one; in words, it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop. This is made rigorous by [[Artstein's theorem]].
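The defining condition can be checked directly in code for a simple case. The following sketch uses a hypothetical scalar system (an assumed example, not from the article): <math>\dot{x} = x^2 + u</math> with candidate <math>V(x) = \tfrac{1}{2}x^2</math>, for which the choice <math>u = -x^2 - x</math> makes <math>\dot{V}</math> negative at every nonzero state.

```python
# Hypothetical scalar system xdot = f(x, u) = x**2 + u (an assumed example),
# with candidate V(x) = x**2 / 2, so Vdot(x, u) = dV/dx * f(x, u) = x * (x**2 + u).
def Vdot(x, u):
    return x * (x**2 + u)

# For every x != 0 the choice u = -x**2 - x yields Vdot = -x**2 < 0,
# so V is a control-Lyapunov function for this toy system.
for x in (-2.0, -0.5, 0.1, 3.0):
    assert Vdot(x, -x**2 - x) < 0.0
```

The loop only spot-checks a few states; the inequality <math>\dot{V} = -x^2 < 0</math> holds for all <math>x \neq 0</math> by inspection.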
 
Some results apply only to control-affine systems, i.e., control systems of the form
{{NumBlk|:|<math>\dot x = f(x) + \sum_{i=1}^m g_i(x)u_i</math>|{{EquationRef|2}}}}
where <math>f : \mathbb{R}^n \to \mathbb{R}^n</math> and <math>g_i : \mathbb{R}^n \to \mathbb{R}^{n}</math> for <math>i = 1, \dots, m</math>.
 
==Theorems==

[[Eduardo Sontag]] showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.<ref>{{cite journal |first=E.D. |last=Sontag |title=A Lyapunov-like characterization of asymptotic controllability|journal=SIAM J. Control Optim.|volume=21 |issue=3 |year=1983 |pages=462–471|doi=10.1137/0321028 |s2cid=450209 }}</ref> It was later shown by [[Francis Clarke (mathematician)|Francis H. Clarke]], Yuri Ledyaev, [[Eduardo Sontag]], and A.I. Subbotin that every [[Controllability|asymptotically controllable]] system can be stabilized by a (generally discontinuous) feedback.<ref>{{cite journal |first1=F.H.|last1=Clarke |first2=Y.S.|last2=Ledyaev |first3=E.D.|last3=Sontag |first4=A.I.|last4=Subbotin |title=Asymptotic controllability implies feedback stabilization |journal=IEEE Trans. Autom. Control|volume=42 |issue=10 |year=1997 |pages=1394–1407|doi=10.1109/9.633828 }}</ref>

[[Zvi Artstein|Artstein]] proved that the dynamical system ({{EquationNote|2}}) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
 
=== Constructing the Stabilizing Input ===
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system ({{EquationNote|2}}), ''Sontag's formula'' (or ''Sontag's universal formula'') gives the feedback law <math>k : \mathbb{R}^n \to \mathbb{R}^m</math> directly in terms of the derivatives of the CLF.<ref name="Sontag (1998)"/>{{rp|Eq. 5.56}} In the special case of a single input system <math>(m=1)</math>, Sontag's formula is written as
:<math>k(x) = \begin{cases} \displaystyle -\frac{L_{f} V(x)+\sqrt{\left[L_{f} V(x)\right]^{2}+\left[L_{g} V(x)\right]^{4}}}{L_{g} V(x)} & \text { if } L_{g} V(x) \neq 0 \\
0 & \text { if } L_{g} V(x)=0 \end{cases} </math>
where <math>L_f V(x) := \langle \nabla V(x), f(x)\rangle</math> and <math>L_g V(x) := \langle \nabla V(x), g(x)\rangle</math> are the [[Lie derivative|Lie derivatives]] of <math>V</math> along <math>f</math> and <math>g</math>, respectively.
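Sontag's formula for <math>m=1</math> translates almost directly into code. The sketch below applies it to an assumed toy system (not from the article), <math>\dot x = x + u</math> with CLF <math>V(x) = \tfrac{1}{2}x^2</math>, for which <math>L_f V = x^2</math> and <math>L_g V = x</math>:

```python
import math

def sontag_feedback(LfV, LgV):
    """Sontag's universal formula (single-input case): returns u = k(x)
    given the Lie derivatives LfV = <grad V(x), f(x)> and LgV = <grad V(x), g(x)>."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Assumed toy system: xdot = x + u, with CLF V(x) = x**2 / 2,
# so LfV = x * x and LgV = x * 1.
def k(x):
    return sontag_feedback(x * x, x)

# For this system the formula reduces to k(x) = -(1 + sqrt(2)) * x, so the
# closed loop is xdot = x + k(x) = -sqrt(2) * x and Vdot = -sqrt(2) * x**2 < 0.
```

Note that the feedback is linear here only because the toy system is linear; in general <math>k</math> is nonlinear even for simple CLFs.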
 
For the general nonlinear system ({{EquationNote|1}}), the input <math>u</math> can be found by solving a static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \underset{u}{\operatorname{arg\,min}} \, \nabla V(x) \cdot f(x,u)
</math>
for each state ''x''.
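This pointwise minimization can be sketched numerically. In the sketch below, the dynamics, the function <math>V</math>, and the finite candidate grid for <math>u</math> are all assumptions chosen for illustration; in practice the admissible input set comes from actuator limits and the minimization is solved with a proper optimizer.

```python
import math

# Assumed pendulum-like example system xdot = f(x, u) with x = (angle, rate).
def f(x, u):
    return (x[1], -math.sin(x[0]) - x[1] + u)

def grad_V(x):
    return x                  # V(x) = (x[0]**2 + x[1]**2) / 2, so grad V = x

def u_star(x, candidates):
    """Pick the candidate input minimizing Vdot = grad V(x) . f(x, u)."""
    def vdot(u):
        fx = f(x, u)
        g = grad_V(x)
        return g[0] * fx[0] + g[1] * fx[1]
    return min(candidates, key=vdot)

x = (0.5, 1.0)
grid = [i * 0.05 - 5.0 for i in range(201)]   # candidate inputs in [-5, 5]
u = u_star(x, grid)                            # here Vdot is linear in u, so
                                               # the minimizer sits at a bound
```

Because <math>\dot{V}</math> is affine in <math>u</math> for control-affine dynamics, an unconstrained minimization is unbounded; bounding the input set (or adding a control penalty) is what makes the problem well posed.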
 
==Example==
Here is a characteristic example of applying a Lyapunov candidate function to a control problem.
 
Consider the non-linear system, which is a mass-spring-damper system with spring hardening and position-dependent mass, described by
:<math>
m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u
</math>
Now, given the desired state <math>q_d</math> and the actual state <math>q</math>, with tracking error <math>e = q_d - q</math>, define a function <math>r</math> as
:<math>
r=\dot{e}+\alpha e
</math>
A control-Lyapunov candidate is then
:<math>
r \mapsto V(r) :=\frac{1}{2}r^2
</math>
which is positive definite for all <math>r \ne 0</math>.
 
Now taking the time derivative of <math>V</math>
:<math>
\dot{V}=r\dot{r}
</math>
:<math>
\dot{V}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e})
</math>
 
The goal is to get the time derivative to be
:<math>
\dot{V}=-\kappa V
</math>
which is globally exponentially stable if <math>\kappa > 0</math>. This is achieved by requiring
:<math>
\dot{r}=\ddot{e}+\alpha \dot{e}=-\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
which upon substitution of the dynamics, <math>\ddot{q}</math>, gives
:<math>
\left(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
Solving for <math>u</math> yields the control law
:<math>
u= m(1+q^2)\left(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r \right)+K_0q+K_1q^3+b\dot{q}
</math>
with <math>\kappa</math> and <math>\alpha</math>, both greater than zero, as tunable parameters.
 
This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,
:<math>
\dot{V}=-\kappa V
</math>
which is a linear first-order differential equation with solution
:<math>
V=V(0)\exp(-\kappa t)
</math>
 
Hence the error and error rate, remembering that <math>V=\frac{1}{2}(\dot{e}+\alpha e)^2</math>, decay exponentially to zero.
 
This example makes explicit use of [[feedback linearisation]] and can be seen to contain a feedforward and a feedback component.

If one wishes to tune a particular response, it is necessary to substitute back into the solution derived for <math>V</math> and solve for <math>e</math>. This is left as an exercise for the reader, but the first few steps of the solution are:
:<math>
r\dot{r}=-\frac{\kappa}{2}r^2
</math>
:<math>
\dot{r}=-\frac{\kappa}{2}r
</math>
:<math>
r=r(0)\exp\left(-\frac{\kappa}{2} t\right)
</math>
:<math>
\dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))\exp\left(-\frac{\kappa}{2} t\right)
</math>
which can then be solved using any linear differential equation methods.
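The closed loop can also be checked numerically. The sketch below (all parameter values are assumptions chosen only for illustration) integrates the mass-spring-damper system under the derived control law for the regulation case <math>q_d \equiv 0</math>, and confirms that <math>V</math> decays approximately as <math>V(0)\exp(-\kappa t)</math>:

```python
import math

# Assumed parameter values for illustration.
m, b, K0, K1 = 1.0, 0.5, 1.0, 0.2
alpha, kappa = 2.0, 4.0        # tunable gains, both > 0

def control(q, qdot):
    # Regulation case qd = qd' = qd'' = 0, so e = -q and edot = -qdot.
    e, edot = -q, -qdot
    r = edot + alpha * e
    return (m * (1 + q**2) * (alpha * edot + 0.5 * kappa * r)
            + K0 * q + K1 * q**3 + b * qdot)

def V(q, qdot):
    return 0.5 * (-qdot + alpha * (-q))**2   # V = (edot + alpha*e)**2 / 2

# Forward-Euler integration of the closed loop for 1 second.
q, qdot, dt = 1.0, 0.0, 1e-4
V0 = V(q, qdot)
for _ in range(int(1.0 / dt)):
    u = control(q, qdot)
    qddot = (u - K0*q - K1*q**3 - b*qdot) / (m * (1 + q**2))
    q, qdot = q + dt * qdot, qdot + dt * qddot
V1 = V(q, qdot)
# V1 should track V0 * exp(-kappa * t) closely, up to discretization error.
```

Because the control cancels the nonlinearities exactly, the <math>r</math>-dynamics are linear and the Euler integration matches the predicted exponential decay very closely even at this modest step size.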
 
==References==
{{Reflist}}

==See also==
* [[Artstein's theorem]]
* [[Lyapunov optimization]]
* [[Drift plus penalty]]
 
[[Category:Stability theory]]