Control-Lyapunov function: Difference between revisions

:<math>
\dot{x}=f(x,u)
</math>
where <math>x\in\mathbb{R}^n</math> is the state vector and <math>u\in\mathbb{R}^m</math> is the control vector, and we want to feedback-stabilize the system to <math>x=0</math> in some ___domain <math>D\subset\mathbb{R}^n</math>.
 
'''Definition.''' A control-Lyapunov function is a function <math>V:D\rightarrow\mathbb{R}</math> that is continuously differentiable, positive-definite (that is, <math>V(x)</math> is positive except at <math>x=0</math>, where it is zero), and such that
:<math>
\forall x \ne 0, \exists u \qquad \dot{V}(x,u)=\nabla V(x) \cdot f(x,u) < 0.
</math>
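The definition can be checked concretely on a hypothetical example (not from the article): the scalar system <math>\dot{x}=x^3+u</math> with candidate <math>V(x)=x^2/2</math>. A minimal numerical sketch of the condition:

```python
import numpy as np

# Hypothetical example system: x' = x^3 + u, candidate CLF V(x) = x^2 / 2.
# For every x != 0, the choice u = -x^3 - x gives
#   Vdot(x, u) = grad V(x) * f(x, u) = x * (x^3 + u) = -x^2 < 0,
# so V satisfies the control-Lyapunov condition for this system.

def f(x, u):
    return x**3 + u

def grad_V(x):
    return x  # V(x) = x^2/2, so dV/dx = x

def Vdot(x, u):
    return grad_V(x) * f(x, u)

# Verify the condition at sample states away from the origin.
for x in np.linspace(-2.0, 2.0, 41):
    if x != 0.0:
        u = -x**3 - x          # one admissible stabilizing choice
        assert Vdot(x, u) < 0
```

The point is only that for each nonzero state *some* input makes <math>\dot{V}</math> negative; the particular choice of <math>u</math> above is one of many.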
It may not be easy to find a control-Lyapunov function for a given system, but if we can find one, thanks to some ingenuity and luck, then the feedback stabilization problem simplifies considerably; in fact, it reduces to solving the static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \underset{u}{\operatorname{arg\,min}} \; \nabla V(x) \cdot f(x,u)
</math>
for each state ''x''.
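This pointwise minimization can be sketched by brute force. The snippet below (continuing the same hypothetical scalar system <math>\dot{x}=x^3+u</math> with <math>V(x)=x^2/2</math>, which is not from the article) minimizes <math>\nabla V(x)\cdot f(x,u)</math> over a grid of candidate inputs; since <math>\dot{V}</math> here is affine in <math>u</math>, the minimizer sits on the boundary of the admissible input set, which is why the input must be bounded for the arg min to be well defined.

```python
import numpy as np

# Hypothetical system x' = x^3 + u with V(x) = x^2/2, as before.
def f(x, u):
    return x**3 + u

def grad_V(x):
    return x

def u_star(x, u_grid):
    # Evaluate Vdot = grad V(x) * f(x, u) for each candidate input
    # and return the minimizer over the (bounded) grid.
    vdots = [grad_V(x) * f(x, u) for u in u_grid]
    return u_grid[int(np.argmin(vdots))]

# Admissible inputs restricted to [-10, 10].
u_grid = np.linspace(-10.0, 10.0, 2001)

u = u_star(1.5, u_grid)
# grad_V(1.5) > 0, so Vdot is increasing in u and the minimizer
# is the lower bound of the admissible set, u = -10.
```

A grid search is only an illustration; in practice one exploits the structure of <math>\nabla V(x)\cdot f(x,u)</math> (e.g. affineness in <math>u</math>) to solve the minimization in closed form.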