Control-Lyapunov function
In [[control theory]], a '''control-Lyapunov function''' <math>V(x,u)</math> is a generalization of the notion of [[Lyapunov function]] <math>V(x)</math> used in [[Lyapunov stability|stability]] analysis. The ordinary Lyapunov function is used to test whether a [[dynamical system]] is ''stable'', that is, whether the system started in a state <math>x \ne 0</math> will eventually return to <math>x = 0</math>. The control-Lyapunov function is used to test whether a system is ''feedback stabilizable'', that is, whether for any state ''x'' there exists a control <math>u(x,t)</math> such that the system can be brought to the zero state by applying the control ''u''.
 
More formally, suppose we are given a dynamical system
:<math>
\dot{x}(t)=f(x(t))+g(x(t))\, u(t),
</math>
where the state ''x''(''t'') and the control ''u''(''t'') are vectors.
 
'''Definition.''' A control-Lyapunov function is a function <math>V(x,u)</math> that is continuous, positive-definite (that is, <math>V(x,u)</math> is positive except at <math>x=0</math>, where it is zero), proper (that is, <math>V(x)\to \infty</math> as <math>|x|\to \infty</math>), and such that
:<math>
\forall x \ne 0, \exists u \qquad \dot{V}(x,u) < 0.
</math>
The last condition is the key one; in words, it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy to zero, that is, to bring the system to a stop. This is made rigorous by the following result:
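As a concrete illustration (this example is not part of the original text), consider the hypothetical scalar system <math>\dot{x} = x + u</math>, i.e. <math>f(x)=x</math> and <math>g(x)=1</math>, with candidate function <math>V(x) = x^2/2</math>. Choosing <math>u = -2x</math> gives <math>\dot{V} = x(x+u) = -x^2 < 0</math> for all <math>x \ne 0</math>, so the key condition holds. The sketch below checks this numerically; all function names are illustrative.

```python
# Illustrative sketch: a scalar control-affine system
#   xdot = f(x) + g(x) u   with   f(x) = x,  g(x) = 1,
# and candidate control-Lyapunov function V(x) = x**2 / 2.

def f(x):
    return x            # drift term of the example system

def g(x):
    return 1.0          # input gain of the example system

def V(x):
    return 0.5 * x**2   # candidate control-Lyapunov function

def Vdot(x, u):
    # dV/dt = (dV/dx) * (f(x) + g(x) u) = x * (f(x) + g(x) u)
    return x * (f(x) + g(x) * u)

# Check the key condition at a few nonzero states using u = -2x:
# for each such x, this control makes V strictly decrease.
for x in (-2.0, -0.5, 0.7, 3.0):
    u = -2.0 * x
    assert Vdot(x, u) < 0
```

Here the control <math>u=-2x</math> is just one admissible choice; the definition only requires that ''some'' energy-reducing control exists at each state.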
 
 
'''Artstein's theorem.''' The dynamical system has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
 
It may not be easy to find a control-Lyapunov function for a given system, but if one can be found, thanks to some ingenuity and luck, then the feedback stabilization problem simplifies considerably; in fact, it reduces to solving the static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \arg\min_u \nabla V(x,u) \cdot \left( f(x) + g(x)\, u \right)
</math>
for each state ''x''.
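Because the dynamics are affine in ''u'', the quantity being minimized is linear in ''u'', so over a bounded control set <math>|u| \le u_{\max}</math> the minimizer lies on the boundary: <math>u^*(x) = -u_{\max}\,\operatorname{sign}(\nabla V(x) \cdot g(x))</math> (scalar case). The sketch below illustrates this; the function and bound <math>u_{\max}</math> are assumptions introduced for the example, not part of the original text.

```python
import math

def clf_feedback(x, grad_V, g, u_max):
    """Minimize grad_V(x) * (f(x) + g(x) u) over |u| <= u_max (scalar case).

    The f(x) term does not depend on u, so the minimizer is determined
    entirely by the sign of grad_V(x) * g(x): push u to whichever
    boundary of the control set makes the product most negative.
    """
    s = grad_V(x) * g(x)
    if s == 0:
        return 0.0  # the objective is constant in u; any control works
    return -u_max * math.copysign(1.0, s)

# Example: V(x) = x**2/2 so grad_V(x) = x, with g(x) = 1 and u_max = 1.
u_star = clf_feedback(2.0, lambda x: x, lambda x: 1.0, 1.0)
```

For <math>x = 2</math> this yields <math>u^* = -1</math>, the control that decreases the energy as fast as the bound allows. For unbounded controls the same linearity means the infimum is not attained, which is one reason practical formulations constrain ''u'' or use a particular stabilizing formula.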
The theory and application of control-Lyapunov functions were developed by Z. Artstein and [[Eduardo D. Sontag|E. D. Sontag]] in the 1980s and 1990s.
 
[[Category:Control theory]]