In [[control theory]], a '''control-Lyapunov function''' ('''CLF''') is an extension of the idea of [[Lyapunov function]] to systems with control inputs. The theory and application of control-Lyapunov functions were developed by Z. Artstein and [[Eduardo D. Sontag|E. D. Sontag]] in the 1980s and 1990s.

==Definition==
Consider an autonomous dynamical system with inputs
{{NumBlk|:|<math>\dot{x}=f(x,u)</math>|{{EquationRef|1}}}}
where <math>x\in\mathbb{R}^n</math> is the state vector and <math>u\in\mathbb{R}^m</math> is the control vector, and suppose we want to drive states to an equilibrium, say <math>x=0</math>, from every initial state in some ___domain <math>D\subset\mathbb{R}^n</math>.

'''Definition.''' A control-Lyapunov function is a function <math>V:D\rightarrow\mathbb{R}</math> that is continuously differentiable, positive-definite (that is, <math>V(x)</math> is positive except at <math>x=0</math>, where it is zero), and such that for all <math>x \neq 0</math> there exists <math>u</math> with
:<math>
\langle \nabla V(x), f(x,u)\rangle < 0,
</math>
where <math>\langle u, v\rangle</math> denotes the [[inner product]] of <math>u, v \in\mathbb{R}^n</math>.

The last condition is the key condition; in words it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop. This is made rigorous by [[Artstein's theorem]].

Some results apply only to control-affine systems—i.e., control systems in the following form:
{{NumBlk|:|<math>\dot x = f(x) + \sum_{i=1}^m g_i(x)u_i</math>|{{EquationRef|2}}}}
where <math>f : \mathbb{R}^n \to \mathbb{R}^n</math> and <math>g_i : \mathbb{R}^n \to \mathbb{R}^{n}</math> for <math>i = 1, \dots, m</math>.

==Theorems==
[[Eduardo D. Sontag|E. D. Sontag]] showed that the existence of a continuous CLF is equivalent to asymptotic stabilizability.<ref>{{cite journal |first=E.D. |last=Sontag |title=A Lyapunov-like characterization of asymptotic controllability |journal=SIAM J. Control Optim. |volume=21 |issue=3 |year=1983 |pages=462–471}}</ref> It was later shown that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.<ref>{{cite journal |first=F.H. |last=Clarke |first2=Y.S. |last2=Ledyaev |first3=E.D. |last3=Sontag |first4=A.I. |last4=Subbotin |title=Asymptotic controllability implies feedback stabilization |journal=IEEE Trans. Autom. Control |volume=42 |issue=10 |year=1997 |pages=1394–1407}}</ref> One may also ask when there is a ''continuous'' feedback stabilizer. For systems affine in the control, with differentiable CLFs, the answer is given by Artstein's theorem.

'''Artstein's theorem.''' The dynamical system ({{EquationNote|2}}) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
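For the control-affine system ({{EquationNote|2}}), the decrease condition in the definition of a CLF can be made more explicit. Since the input enters the derivative of <math>V</math> affinely,
:<math>
\langle \nabla V(x), \dot x\rangle = L_f V(x) + \sum_{i=1}^m L_{g_i} V(x)\, u_i,
</math>
if the controls are unconstrained one can always make this negative whenever some <math>L_{g_i} V(x) \neq 0</math>, so the CLF condition reduces to requiring
:<math>
L_f V(x) < 0 \quad \text{whenever } x \neq 0 \text{ and } L_{g_i} V(x) = 0 \text{ for all } i.
</math>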
=== Constructing the Stabilizing Input ===
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system ({{EquationNote|2}}), ''Sontag's formula'' (or ''Sontag's universal formula'') gives the feedback law <math>k : \mathbb{R}^n \to \mathbb{R}^m</math> directly in terms of the derivatives of the CLF.<ref>Sontag (1998). ''Mathematical Control Theory'', Equation 5.56</ref> In the special case of a single-input system <math>(m=1)</math>, Sontag's formula is written as
:<math>k(x) = \begin{cases} \displaystyle -\frac{L_{f} V(x)+\sqrt{\left[L_{f} V(x)\right]^{2}+\left[L_{g} V(x)\right]^{4}}}{L_{g} V(x)} & \text { if } L_{g} V(x) \neq 0 \\
0 & \text { if } L_{g} V(x)=0 \end{cases} </math>
where <math>L_f V(x) := \langle \nabla V(x), f(x)\rangle</math> and <math>L_g V(x) := \langle \nabla V(x), g(x)\rangle</math> are the [[Lie derivative|Lie derivatives]] of <math>V</math> along <math>f</math> and <math>g</math>, respectively.
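As a concrete illustration, the following sketch applies Sontag's formula to a hypothetical scalar example (not from the article): <math>\dot x = x + u</math> with candidate CLF <math>V(x) = x^2/2</math>, so that <math>\nabla V(x) = x</math>.

```python
import math

# Illustrative scalar example (m = 1), chosen for this sketch:
#   x' = f(x) + g(x) u  with  f(x) = x,  g(x) = 1,
# and candidate CLF V(x) = x^2 / 2, so grad V(x) = x.

def f(x):
    return x

def g(x):
    return 1.0

def grad_V(x):
    return x

def sontag_feedback(x):
    """Sontag's universal formula for the single-input case."""
    LfV = grad_V(x) * f(x)  # Lie derivative of V along f
    LgV = grad_V(x) * g(x)  # Lie derivative of V along g
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Here the formula reduces to k(x) = -(1 + sqrt(2)) x, and along the
# closed loop dV/dt = x * (f(x) + g(x) k(x)) = -sqrt(2) x^2 < 0 for x != 0.
for x in (-2.0, -0.5, 0.3, 1.7):
    Vdot = grad_V(x) * (f(x) + g(x) * sontag_feedback(x))
    assert Vdot < 0.0
```

In this example the closed loop becomes <math>\dot x = -\sqrt{2}\,x</math>, so <math>V</math> decreases along every nonzero trajectory, as the assertions check at a few sample states.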
For the general nonlinear system ({{EquationNote|1}}), the input <math>u</math> can be found by solving a static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \underset{u}{\operatorname{arg\,min}} \nabla V(x) \cdot f(x,u)
</math>
for each state ''x''.
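A minimal numerical sketch of this pointwise minimization is shown below, using a hypothetical system with a bounded input set (the bare argmin can be unbounded when the dynamics are affine in ''u'') and brute-force grid search in place of a real nonlinear programming solver.

```python
# Illustrative sketch: minimize grad V(x) . f(x, u) over a bounded input set
# by grid search; a practical implementation would call an NLP solver.
# The system, CLF, and input bound are hypothetical choices for this example:
#   x' = f(x, u) = x + u,   u in [-3, 3],   V(x) = x^2 / 2.

def f(x, u):
    return x + u

def grad_V(x):
    return x

def argmin_input(x, u_min=-3.0, u_max=3.0, steps=601):
    """Pick the grid point u minimizing grad V(x) * f(x, u)."""
    grid = [u_min + (u_max - u_min) * i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda u: grad_V(x) * f(x, u))

# For this system the minimizer saturates at the bound opposite in sign to x,
# and V decreases along the closed loop whenever 0 < |x| < 3.
u_star = argmin_input(1.0)
assert grad_V(1.0) * f(1.0, u_star) < 0.0
```

Note that for the minimizing input to stabilize the system, the resulting minimum must actually be negative at each nonzero state in <math>D</math>; here that holds on the ___domain <math>|x| < 3</math>.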
==Example==