In [[control theory]], a '''control-Lyapunov function (CLF)'''<ref name="Isidori">{{cite book
| author = Isidori, A.
| year = 1995
| title = Nonlinear Control Systems
| publisher = Springer
| isbn = 978-3-540-19916-8
}}</ref><ref>{{cite book
|last=Freeman
|first=Randy A.
|author2=Petar V. Kokotović
|title=Robust Nonlinear Control Design
|chapter=Robust Control Lyapunov Functions
|chapter-url=https://link.springer.com/chapter/10.1007/978-0-8176-4759-9_3
|publisher=Birkhäuser
|year=2008|pages=33–63
|doi=10.1007/978-0-8176-4759-9_3
|edition=illustrated, reprint
|isbn=978-0-8176-4758-2
|url=https://books.google.com/books?id=_eTb4Yl0SOEC
|access-date=2009-03-04}}</ref><ref>{{cite book
| last = Khalil | first = Hassan
| year = 2015 | title = Nonlinear Control
| publisher = Pearson | isbn = 9780133499261}}</ref><ref name="Sontag (1998)">{{cite book | last = Sontag | first = Eduardo | author-link = Eduardo D. Sontag | year = 1998 | title = Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition | publisher = Springer | url = http://www.sontaglab.org/FTPDIR/sontag_mathematical_control_theory_springer98.pdf | isbn = 978-0-387-98489-6 }}</ref> is an extension of the idea of [[Lyapunov function]] <math>V(x)</math> to [[Control system|systems with control inputs]]. The ordinary Lyapunov function is used to test whether a [[dynamical system]] is [[Lyapunov stability|''(Lyapunov) stable'']] or (more restrictively) ''asymptotically stable''. Lyapunov stability means that if the system starts in a state <math>x \ne 0</math> in some ___domain ''D'', then the state will remain in ''D'' for all time. For ''asymptotic stability'', the state is also required to converge to <math>x = 0</math>. A control-Lyapunov function is used to test whether a system is [[Controllability#Stabilizability|''asymptotically stabilizable'']], that is whether for any state ''x'' there exists a control <math>u(x,t)</math> such that the system can be brought to the zero state asymptotically by applying the control ''u''.
The theory and application of control-Lyapunov functions were developed by [[Zvi Artstein]] and [[Eduardo D. Sontag]] in the 1980s and 1990s.
==Definition==
Consider an [[Autonomous system (mathematics)|autonomous dynamical]] system with inputs
{{NumBlk|:|<math>\dot{x}=f(x,u)</math>|{{EquationRef|1}}}}
where <math>x\in\mathbb{R}^n</math> is the state vector and <math>u\in\mathbb{R}^m</math> is the control vector. Suppose our goal is to drive the system to an equilibrium <math>x_* \in \mathbb{R}^n</math> from every initial state in some ___domain <math>D\subset\mathbb{R}^n</math>. Without loss of generality, suppose the equilibrium is at <math>x_*=0</math> (for an equilibrium <math>x_*\neq 0</math>, it can be translated to the origin by a change of variables).
'''Definition.''' A control-Lyapunov function (CLF) is a function <math>V : D \to \mathbb{R}</math> that is [[Differentiable function#continuously differentiable|continuously differentiable]] and positive-definite (that is, <math>V(x)</math> is positive for all <math>x\in D</math> except at <math>x=0</math>, where it is zero), and such that for all <math>x \in D</math> with <math>x \neq 0</math> there exists <math>u\in \mathbb{R}^m</math> such that
:<math>
\dot{V}(x,u) := \langle \nabla V(x), f(x,u)\rangle < 0,
</math>
where <math>\langle u, v\rangle</math> denotes the [[inner product]] of <math>u, v \in\mathbb{R}^n</math>.
The last condition is the key condition; in words it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by [[Artstein's theorem]].
Some results apply only to control-affine systems—i.e., control systems in the following form:
{{NumBlk|:|<math>\dot x = f(x) + \sum_{i=1}^m g_i(x)u_i</math>|{{EquationRef|2}}}}
where <math>f : \mathbb{R}^n \to \mathbb{R}^n</math> and <math>g_i : \mathbb{R}^n \to \mathbb{R}^{n}</math> for <math>i = 1, \dots, m</math>.
==Theorems==
[[Eduardo Sontag]] showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.<ref>{{cite journal |first=E.D. |last=Sontag |title=A Lyapunov-like characterization of asymptotic controllability|journal=SIAM J. Control Optim.|volume=21 |issue=3 |year=1983 |pages=462–471|doi=10.1137/0321028 |s2cid=450209 }}</ref> It was later shown by [[Francis Clarke (mathematician)|Francis H. Clarke]], Yuri Ledyaev, [[Eduardo Sontag]], and A.I. Subbotin that every [[Controllability|asymptotically controllable]] system can be stabilized by a (generally discontinuous) feedback.<ref>{{cite journal |first1=F.H.|last1=Clarke |first2=Y.S.|last2=Ledyaev |first3=E.D.|last3=Sontag |first4=A.I.|last4=Subbotin |title=Asymptotic controllability implies feedback stabilization |journal=IEEE Trans. Autom. Control|volume=42 |issue=10 |year=1997 |pages=1394–1407|doi=10.1109/9.633828 }}</ref>
Artstein proved that the dynamical system ({{EquationNote|2}}) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
=== Constructing the stabilizing input ===
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system ({{EquationNote|2}}), ''Sontag's formula'' (or ''Sontag's universal formula'') gives the feedback law <math>k : \mathbb{R}^n \to \mathbb{R}^m</math> directly in terms of the derivatives of the CLF.<ref name="Sontag (1998)"/>{{rp|Eq. 5.56}} In the special case of a single-input system <math>(m=1)</math>, Sontag's formula is written as
:<math>k(x) = \begin{cases} \displaystyle -\frac{L_{f} V(x)+\sqrt{\left[L_{f} V(x)\right]^{2}+\left[L_{g} V(x)\right]^{4}}}{L_{g} V(x)} & \text { if } L_{g} V(x) \neq 0 \\
0 & \text { if } L_{g} V(x)=0 \end{cases} </math>
where <math>L_f V(x) := \langle \nabla V(x), f(x)\rangle</math> and <math>L_g V(x) := \langle \nabla V(x), g(x)\rangle</math> are the [[Lie derivative|Lie derivatives]] of <math>V</math> along <math>f</math> and <math>g</math>, respectively.
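As a minimal numerical sketch (the scalar system <math>\dot x = x + u</math> and the CLF <math>V(x) = x^2/2</math> are assumptions chosen for illustration, not taken from the text above), Sontag's formula can be evaluated directly:

```python
import math

def sontag_feedback(LfV, LgV):
    """Sontag's universal formula for a single-input control-affine system."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Illustrative scalar system: x' = x + u, i.e. f(x) = x and g(x) = 1,
# with CLF V(x) = x^2/2, so L_f V(x) = x*x and L_g V(x) = x.
def k(x):
    return sontag_feedback(x * x, x)

# Closed loop: x' = x + k(x) = -sqrt(2)*x, so V decreases whenever x != 0.
print(k(1.0))  # -(1 + sqrt(2))
```

On this particular example the formula reduces to the linear law <math>k(x) = -(1+\sqrt{2})\,x</math> for all <math>x</math>, which makes the closed loop <math>\dot x = -\sqrt{2}\,x</math> and hence <math>\dot V < 0</math> away from the origin.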
For the general nonlinear system ({{EquationNote|1}}), the input <math>u</math> can be found by solving a static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \underset{u}{\operatorname{arg\,min}} \nabla V(x) \cdot f(x,u)
</math>
for each state ''x''.
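A crude sketch of this pointwise minimization (a grid search over a hypothetical finite input set for a scalar system; a real implementation would use a proper optimization routine):

```python
def best_input(x, f, grad_V, u_grid):
    """Return the candidate input minimizing the decrease rate grad_V(x) * f(x, u)."""
    return min(u_grid, key=lambda u: grad_V(x) * f(x, u))

# Hypothetical scalar system x' = x + u with V(x) = x^2/2, so grad V(x) = x.
u_star = best_input(2.0, lambda x, u: x + u, lambda x: x, [-1.0, -0.5, 0.0, 0.5, 1.0])
print(u_star)  # -1.0: the most negative available input decreases V fastest here
```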
==Example==
Here is a characteristic example of applying a Lyapunov candidate function to a control problem.
Consider the non-linear system, a mass–spring–damper with spring hardening and position-dependent mass, described by
:<math>
m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u
</math>
Now given the desired state, <math>q_d</math>, and actual state, <math>q</math>, with error, <math>e = q_d - q</math>, define a function <math>r</math> as
:<math>
r=\dot{e}+\alpha e
</math>
A Control-Lyapunov candidate is then
:<math>
r \mapsto V(r) :=\frac{1}{2}r^2
</math>
which is positive for all <math> r \ne 0</math>.
Now taking the time derivative of <math>V</math>
:<math>
\dot{V}=r\dot{r}
</math>
:<math>
\dot{V}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e})
</math>
The goal is to get the time derivative to be
:<math>
\dot{V}=-\kappa V
</math>
which, since <math>V</math> is globally positive definite, guarantees global exponential stability.
Hence we want the rightmost bracket of <math>\dot{V}</math>,
:<math>
(\ddot{e}+\alpha \dot{e})=(\ddot{q}_d-\ddot{q}+\alpha \dot{e})
</math>
to fulfill the requirement
:<math>
(\ddot{q}_d-\ddot{q}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
which upon substitution of the dynamics, <math>\ddot{q}</math>, gives
:<math>
\left(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
Solving for <math>u</math> yields the control law
:<math>
u= m(1+q^2)\left(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r\right)+K_0q+K_1q^3+b\dot{q}
</math>
with <math>\kappa</math> and <math>\alpha</math>, both greater than zero, as tunable parameters.
This control law guarantees global exponential stability: substituting it into the time derivative yields, as expected,
:<math>
\dot{V}=-\kappa V
</math>
which is a linear first order differential equation which has solution
:<math>
V=V(0)\exp(-\kappa t)
</math>
Hence, recalling that <math>V=\frac{1}{2}(\dot{e}+\alpha e)^2</math>, the error and error rate decay exponentially to zero.
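This decay rate can also be checked numerically. The following sketch integrates the closed loop with forward Euler, using illustrative parameter values (the numbers for <math>m, b, K_0, K_1, \alpha, \kappa</math> are assumptions made here, not from the text) and a constant setpoint <math>q_d = 0</math>:

```python
import math

# Illustrative parameters (assumptions for this sketch only).
m, b, K0, K1 = 1.0, 0.5, 2.0, 0.1
alpha, kappa = 1.0, 2.0
qd = 0.0  # constant setpoint, so its first and second derivatives vanish

def control(q, qdot):
    """The derived law u = m(1+q^2)(qd'' + a e' + (k/2) r) + K0 q + K1 q^3 + b q'."""
    e, edot = qd - q, -qdot
    r = edot + alpha * e
    return (m * (1 + q**2) * (alpha * edot + 0.5 * kappa * r)
            + K0 * q + K1 * q**3 + b * qdot)

q, qdot, dt = 1.0, 0.0, 1e-4
r0 = -qdot + alpha * (qd - q)
for _ in range(int(1.0 / dt)):  # integrate for one second
    u = control(q, qdot)
    qddot = (u - b * qdot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    q, qdot = q + dt * qdot, qdot + dt * qddot

r = -qdot + alpha * (qd - q)
# r should track r(0) * exp(-kappa t / 2) up to integration error:
print(abs(r - r0 * math.exp(-0.5 * kappa * 1.0)))  # small
```

The printed residual is on the order of the Euler step error, consistent with <math>r(t)=r(0)e^{-\kappa t/2}</math>.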
To tune a particular response, it is necessary to substitute back into the solution derived for <math>V</math> and solve for <math>e</math>. The first few steps of the solution are:
:<math>
r\dot{r}=-\frac{\kappa}{2}r^2
</math>
:<math>
\dot{r}=-\frac{\kappa}{2}r
</math>
:<math>
r=r(0)\exp\left(-\frac{\kappa}{2} t\right)
</math>
:<math>
\dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))\exp\left(-\frac{\kappa}{2} t\right)
</math>
which can then be solved using any linear differential equation methods.
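Carrying this through (assuming <math>\alpha \neq \kappa/2</math>, so that the two exponentials are distinct), the error satisfies a first-order linear equation with exponential forcing, whose solution is
:<math>
e(t)=\left(e(0)-\frac{r(0)}{\alpha-\kappa/2}\right)e^{-\alpha t}+\frac{r(0)}{\alpha-\kappa/2}\,e^{-\kappa t/2},
</math>
where <math>r(0)=\dot{e}(0)+\alpha e(0)</math>.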
==References==
{{Reflist}}
==See also==
* [[Artstein's theorem]]
* [[Lyapunov optimization]]
* [[Drift plus penalty]]