Control-Lyapunov function

In [[control theory]], a '''control-Lyapunov function''' <math>V(x)</math> <ref>Freeman (46)</ref> is a generalization of the notion of a [[Lyapunov function]] <math>V(x)</math> for a system with control inputs. The ordinary Lyapunov function is used to test whether a [[dynamical system]] is ''stable'' (more restrictively, ''asymptotically stable''). That is, whether the system starting in a state <math>x \ne 0</math> in some ___domain ''D'' will remain in ''D'', or for ''asymptotic stability'' will eventually return to <math>x = 0</math>. The control-Lyapunov function is used to test whether a system is ''feedback stabilizable'', that is, whether for any state ''x'' there exists a control <math> u(x,t)</math> such that the system can be brought to the zero state by applying the control ''u''.
 
More formally, suppose we are given an autonomous dynamical system with inputs
:<math>
\dot{x}(t)=f(x(t))+g(x(t))\, u(t),
</math>
where <math>x\in\mathbf{R}^n</math> is the state vector and <math>u\in\mathbf{R}^m</math> is the control vector, and we want to feedback stabilize it to <math>x=0</math> in some ___domain <math>D\subset\mathbf{R}^n</math>.
 
'''Definition.''' A control-Lyapunov function is a function <math>V:D\rightarrow\mathbf{R}</math> that is continuously differentiable, positive-definite (that is, <math>V(x)</math> is positive except at <math>x=0</math> where it is zero), proper (that is, <math>V(x)\to \infty</math> as <math>|x|\to \infty</math>), and such that
:<math>
\forall x \ne 0, \exists u \qquad \dot{V}(x,u)=\nabla V(x) \cdot \left(f(x)+g(x)\,u\right) < 0.
</math>
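As a minimal sketch of this definition, consider the (hypothetical, chosen for illustration) scalar system <math>\dot{x}=x+u</math>, i.e. <math>f(x)=x</math>, <math>g(x)=1</math>, with candidate CLF <math>V(x)=x^2/2</math>. For every <math>x\ne 0</math> the control <math>u=-2x</math> gives <math>\dot{V}=x(x-2x)=-x^2<0</math>, so the decrease condition can be checked numerically at sample points:

```python
import numpy as np

# Hypothetical scalar system dx/dt = f(x) + g(x) u with f(x) = x, g(x) = 1
# (an illustrative choice, not from the article).
f = lambda x: x
g = lambda x: 1.0
# Candidate CLF V(x) = x**2 / 2, so grad V(x) = x.
grad_V = lambda x: x

def clf_condition_holds(x, u):
    """Check the decrease condition grad V(x) . (f(x) + g(x) u) < 0."""
    return grad_V(x) * (f(x) + g(x) * u) < 0

# The feedback u = -2x gives Vdot = x(x - 2x) = -x**2 < 0 for all x != 0.
for x in np.linspace(-5, 5, 101):
    if x != 0:
        assert clf_condition_holds(x, -2.0 * x)
print("CLF decrease condition verified at sample points")
```

The existential quantifier in the definition only asks that ''some'' admissible <math>u</math> makes <math>\dot{V}</math> negative at each nonzero state; here one explicit feedback witnesses it everywhere.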
 
It may not be easy to find a control-Lyapunov function for a given system, but if we can find one thanks to some ingenuity and luck, then the feedback stabilization problem simplifies considerably; in fact, it reduces to solving the static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \arg\min_u \nabla V(x) \cdot \left(f(x)+g(x)\,u\right)
</math>
for each state ''x''.
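A hedged sketch of this pointwise minimization, for the same illustrative scalar system as above (<math>f(x)=x</math>, <math>g(x)=1</math>, <math>V(x)=x^2/2</math>): since <math>\dot{V}</math> is affine in <math>u</math>, the minimum is only well defined over a bounded control set, so the sketch assumes <math>|u|\le u_{\max}</math> (an assumption, not part of the article) and minimizes over a grid:

```python
import numpy as np

# Illustrative scalar system (assumed, not from the article):
f = lambda x: x           # drift term f(x)
g = lambda x: 1.0         # input gain g(x)
grad_V = lambda x: x      # gradient of the candidate CLF V(x) = x**2 / 2
u_max = 3.0               # assumed control bound; Vdot is affine in u, so
                          # the unconstrained minimum would be unbounded

def u_star(x, n_grid=601):
    """Pick u minimizing grad V(x) . (f(x) + g(x) u) over a bounded grid."""
    us = np.linspace(-u_max, u_max, n_grid)
    vdots = grad_V(x) * (f(x) + g(x) * us)
    return us[np.argmin(vdots)]

x = 1.5
u = u_star(x)
vdot = grad_V(x) * (f(x) + g(x) * u)
print(u, vdot)  # for x > 0 the minimizer saturates at u = -u_max
```

Because the objective is linear in <math>u</math>, the minimizer always lies on the boundary of the control set; smoother explicit feedbacks (e.g. Sontag's universal formula) are typically used in practice instead of this pointwise grid search.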