In [[control theory]], a '''control-Lyapunov function (CLF)'''<ref name="Isidori">{{cite book
 | author = Isidori, A.
 | year = 1995
 | title = Nonlinear Control Systems
 | publisher = Springer
 | isbn = 978-3-540-19916-8
}}</ref><ref>{{cite book
 |last=Freeman
 |first=Randy A.
 |author2=Petar V. Kokotović
 |title=Robust Nonlinear Control Design
 |chapter=Robust Control Lyapunov Functions
 |chapter-url=https://link.springer.com/chapter/10.1007/978-0-8176-4759-9_3
 |publisher=Birkhäuser
 |year=2008
 |pages=33–63
 |doi=10.1007/978-0-8176-4759-9_3
 |edition=illustrated, reprint
 |isbn=978-0-8176-4758-2
 |url=https://books.google.com/books?id=_eTb4Yl0SOEC
 |accessdate=2009-03-04}}</ref><ref>{{cite book
 | last = Khalil | first = Hassan
 | year = 2015 | title = Nonlinear Control
 | publisher = Pearson
}}</ref> is a [[Lyapunov function]] <math>V(x)</math> for a system with control inputs. The ordinary Lyapunov function is used to test whether a [[dynamical system]] is ''stable'' (more restrictively, ''asymptotically stable''). That is, whether the system starting in a state <math>x \ne 0</math> in some ___domain ''D'' will remain in ''D'', or for ''asymptotic stability'' will eventually return to <math>x = 0</math>. The control-Lyapunov function is used to test whether a system is ''feedback stabilizable'', that is whether for any state ''x'' there exists a control <math>u(x,t)</math> such that the system can be brought to the zero state asymptotically by applying the control ''u''.
The theory and application of control-Lyapunov functions were developed by Z. Artstein and [[Eduardo D. Sontag|E. D. Sontag]] in the 1980s and 1990s.

==Definition==
Consider an [[Autonomous system (mathematics)|autonomous dynamical]] system with inputs
{{NumBlk|:|<math>\dot{x}=f(x,u)</math>|{{EquationRef|1}}}}
where <math>x\in\mathbb{R}^n</math> is the state vector and <math>u\in\mathbb{R}^m</math> is the control vector. Suppose our goal is to drive the system to an equilibrium <math>x_* \in \mathbb{R}^n</math> from every initial state in some ___domain <math>D\subset\mathbb{R}^n</math>. Without loss of generality, suppose the equilibrium is at <math>x_*=0</math> (for an equilibrium <math>x_*\neq 0</math>, it can be translated to the origin by a change of variables).
'''Definition.''' A control-Lyapunov function is a function <math>V:D\rightarrow\mathbb{R}</math> that is continuously differentiable, positive-definite (that is, <math>V(x)</math> is positive except at <math>x=0</math>, where it is zero), and such that for every <math>x\in D\setminus\{0\}</math> there exists <math>u\in\mathbb{R}^m</math> with
:<math>
\langle \nabla V(x), f(x,u)\rangle < 0,
</math>
where <math>\langle u, v\rangle</math> denotes the [[inner product]] of <math>u, v \in\mathbb{R}^n</math>.
The last condition is the key condition; in words it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by Artstein's theorem, below.
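
As a concrete illustration, consider the scalar system <math>\dot{x}=x+u</math> with candidate <math>V(x)=\tfrac{1}{2}x^2</math>: the choice <math>u=-2x</math> gives <math>\langle \nabla V(x), f(x,u)\rangle = x(x-2x)=-x^2<0</math> for every <math>x\neq 0</math>, so <math>V</math> is a CLF for this system. The short sketch below checks the decrease condition numerically; the system, the candidate, and the control grid are all illustrative assumptions, not taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

# Toy scalar system dx/dt = f(x, u) = x + u with CLF candidate V(x) = x^2/2.
# Both are illustrative assumptions, chosen only to demonstrate the condition.
f = lambda x, u: x + u
grad_V = lambda x: x  # dV/dx for V(x) = x^2/2

# For each sample state, search a grid of controls for one that makes
# <grad V(x), f(x, u)> negative -- the defining CLF condition.
controls = np.linspace(-10.0, 10.0, 2001)
for x in (-2.0, -0.5, 0.1, 3.0):
    vdot = grad_V(x) * f(x, controls)      # dV/dt for every candidate control
    u_best = controls[np.argmin(vdot)]
    assert vdot.min() < 0, "no control decreases V at this state"
    print(f"x = {x:5.2f}: u = {u_best:6.2f} gives dV/dt = {vdot.min():8.2f}")
</syntaxhighlight>
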
Some results apply only to control-affine systems—i.e., control systems in the following form:
{{NumBlk|:|<math>\dot x = f(x) + \sum_{i=1}^m g_i(x)u_i</math>|{{EquationRef|2}}}}
where <math>f : \mathbb{R}^n \to \mathbb{R}^n</math> and <math>g_i : \mathbb{R}^n \to \mathbb{R}^{n}</math> for <math>i = 1, \dots, m</math>.
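For instance, a pendulum with torque input, <math>\ddot{\theta}=-\sin\theta+u</math>, fits this form: with state <math>x=(\theta,\dot{\theta})</math> it takes the form ({{EquationNote|2}}) with drift <math>f(x)=(x_2,\,-\sin x_1)</math> and input vector field <math>g_1(x)=(0,\,1)</math>.
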
==Theorems==
[[Eduardo Sontag]] showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.<ref>{{cite journal |first=E.D. |last=Sontag |title=A Lyapunov-like characterization of asymptotic controllability|journal=SIAM J. Control Optim.|volume=21 |issue=3 |year=1983 |pages=462–471|doi=10.1137/0321028 |s2cid=450209 }}</ref> It was later shown by [[Francis Clarke (mathematician)|Francis H. Clarke]], Yuri Ledyaev, [[Eduardo Sontag]], and A.I. Subbotin that every [[Controllability|asymptotically controllable]] system can be stabilized by a (generally discontinuous) feedback.<ref>{{cite journal |first1=F.H.|last1=Clarke |first2=Y.S.|last2=Ledyaev |first3=E.D.|last3=Sontag |first4=A.I.|last4=Subbotin |title=Asymptotic controllability implies feedback stabilization |journal=IEEE Trans. Autom. Control|volume=42 |issue=10 |year=1997 |pages=1394–1407|doi=10.1109/9.633828 }}</ref>
'''Artstein's theorem.''' The dynamical system ({{EquationNote|2}}) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
=== Constructing the Stabilizing Input ===
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control affine system ({{EquationNote|2}}), ''Sontag's formula'' (or ''Sontag's universal formula'') gives the feedback law <math>k : \mathbb{R}^n \to \mathbb{R}^m</math> directly in terms of the derivatives of the CLF.<ref name="Sontag (1998)">{{cite book |last=Sontag |first=Eduardo D. |year=1998 |title=Mathematical Control Theory: Deterministic Finite Dimensional Systems |edition=2nd |publisher=Springer |isbn=978-0-387-98489-6}}</ref>{{rp|Eq. 5.56}} In the special case of a single input system <math>(m=1)</math>, Sontag's formula is written as
:<math>k(x) = \begin{cases} \displaystyle -\frac{L_{f} V(x)+\sqrt{\left[L_{f} V(x)\right]^{2}+\left[L_{g} V(x)\right]^{4}}}{L_{g} V(x)} & \text { if } L_{g} V(x) \neq 0 \\
0 & \text { if } L_{g} V(x)=0 \end{cases} </math>
where <math>L_f V(x) := \langle \nabla V(x), f(x)\rangle</math> and <math>L_g V(x) := \langle \nabla V(x), g(x)\rangle</math> are the [[Lie derivative|Lie derivatives]] of <math>V</math> along <math>f</math> and <math>g</math>, respectively.
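
The formula translates directly into code. The following is a minimal sketch for the single-input case; the toy system <math>\dot{x}=x+u</math> and the candidate <math>V(x)=\tfrac{1}{2}x^2</math> are illustrative assumptions, not part of the formula itself.

<syntaxhighlight lang="python">
import numpy as np

def sontag_feedback(x, f, g, grad_V, eps=1e-9):
    """Sontag's universal formula k(x) for a single-input (m = 1) system."""
    LfV = grad_V(x) @ f(x)   # Lie derivative L_f V(x)
    LgV = grad_V(x) @ g(x)   # Lie derivative L_g V(x)
    if abs(LgV) < eps:       # k(x) = 0 on the set where L_g V vanishes
        return 0.0
    return -(LfV + np.sqrt(LfV**2 + LgV**4)) / LgV

# Illustrative data: scalar system dx/dt = x + u with V(x) = x^2/2,
# so f(x) = x, g(x) = 1, and grad V(x) = x.
f = lambda x: np.array([x[0]])
g = lambda x: np.array([1.0])
grad_V = lambda x: x

print(sontag_feedback(np.array([2.0]), f, g, grad_V))
# Here k(x) = -(1 + sqrt(2)) x, so the closed loop is dx/dt = -sqrt(2) x.
</syntaxhighlight>
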
For the general nonlinear system ({{EquationNote|1}}), the input <math>u</math> can be found by solving a static [[nonlinear programming]] problem
:<math>
u^*(x) = \underset{u}{\operatorname{arg\,min}} \, \langle \nabla V(x), f(x,u) \rangle
</math>
for each state ''x''.
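
In practice this pointwise minimization can be carried out numerically, as in the following sketch. The toy dynamics, the quadratic candidate <math>V</math>, and the input bound <math>|u|\le u_{\max}</math> (assumed here because the program may otherwise be unbounded) are all illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Illustrative data: pendulum-like dynamics with candidate V(x) = |x|^2 / 2.
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
grad_V = lambda x: x

def u_star(x, u_max=5.0):
    """Minimize <grad V(x), f(x, u)> over the bounded input set |u| <= u_max."""
    objective = lambda u: grad_V(x) @ f(x, u)
    result = minimize(objective, x0=np.zeros(1), bounds=[(-u_max, u_max)])
    return result.x

x = np.array([0.5, -1.0])
print(u_star(x))  # the input that most rapidly decreases V at this state
</syntaxhighlight>
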
==Example==
Here is a characteristic example of applying a Lyapunov candidate function to a control problem.
Consider the nonlinear system given by a mass-spring-damper with spring hardening and position-dependent mass, described by
:<math>
m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u
</math>
Now given the desired state, <math>q_d</math>, and the actual state, <math>q</math>, with error <math>e:=q_d-q</math>, define a function <math>r</math> as
:<math>
r:=\dot{e}+\alpha e
</math>
A control-Lyapunov candidate is then
:<math>
r \mapsto V(r) :=\frac{1}{2}r^2
</math>
which is positive definite for all <math>r \neq 0</math> and zero at <math>r = 0</math>.
Now taking the time derivative of <math>V</math>
:<math>
\dot{V}=r\dot{r}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e})
</math>
The goal is to get the time derivative to be
:<math>
\dot{V}=-\kappa V
</math>
which is globally exponentially stable if <math>V</math> is globally positive definite (which it is). Hence we want the rightmost bracket of <math>\dot{V}</math>,
:<math>
(\ddot{e}+\alpha \dot{e})=(\ddot{q}_d-\ddot{q}+\alpha \dot{e})
</math>
to fulfill the requirement
:<math>
(\ddot{q}_d-\ddot{q}+\alpha \dot{e})=-\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
which upon substitution of the dynamics, <math>\ddot{q}</math>, gives
:<math>
\left(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
Solving for <math>u</math> yields the control law
:<math>
u= m(1+q^2)\left(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r\right)+K_0q+K_1q^3+b\dot{q}
</math>
with <math>\kappa</math> and <math>\alpha</math>, both greater than zero, as tunable parameters. This control law guarantees global exponential stability, since substituting it into the dynamics yields, as expected,
:<math>
\dot{V}=-\kappa V
</math>
which is a linear first-order differential equation with solution
:<math>
V=V(0)e^{-\kappa t}
</math>
And hence the error and error rate, remembering that <math>V=\frac{1}{2}(\dot{e}+\alpha e)^2</math>, exponentially decay to zero.

If a particular response is to be tuned, it is necessary to substitute back into the solution derived for <math>V</math> and solve for the error <math>e</math>; the first few steps are
:<math>
\frac{1}{2}r^2=V(0)e^{-\kappa t}
</math>
:<math>
r=r(0)e^{-\kappa t/2}
</math>
:<math>
\dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))e^{-\kappa t/2}
</math>
which can then be solved using any linear differential equation methods.
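
As a numerical sanity check of this derivation, one can simulate the closed loop and compare <math>V(t)</math> against the predicted decay <math>V(0)e^{-\kappa t}</math>. The sketch below does this under assumed plant parameters and a constant setpoint <math>q_d=1</math>, none of which come from the article.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

m, b, K0, K1 = 1.0, 0.5, 1.0, 0.2      # assumed plant parameters
alpha, kappa = 2.0, 4.0                # tunable gains, both > 0
q_d, qd_dot, qd_ddot = 1.0, 0.0, 0.0   # constant desired trajectory

def closed_loop(t, y):
    q, q_dot = y
    e, e_dot = q_d - q, qd_dot - q_dot
    r = e_dot + alpha * e
    # Control law derived above from the control-Lyapunov function:
    u = (m * (1 + q**2) * (qd_ddot + alpha * e_dot + kappa / 2 * r)
         + K0 * q + K1 * q**3 + b * q_dot)
    q_ddot = (u - K0 * q - K1 * q**3 - b * q_dot) / (m * (1 + q**2))
    return [q_dot, q_ddot]

sol = solve_ivp(closed_loop, (0.0, 3.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
V0 = 0.5 * (alpha * q_d)**2            # V(0) for this initial condition
for t in (0.0, 1.0, 2.0, 3.0):
    q, q_dot = sol.sol(t)
    V = 0.5 * ((qd_dot - q_dot) + alpha * (q_d - q))**2
    print(f"t = {t:.1f}:  V = {V:.6f}   predicted {V0 * np.exp(-kappa * t):.6f}")
</syntaxhighlight>

The printed values agree up to the integration tolerance, confirming <math>\dot{V}=-\kappa V</math> along closed-loop trajectories.
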
==References==
{{Reflist}}
==See also==
* [[Artstein's theorem]]