Control-Lyapunov function

In control theory, a control-Lyapunov function (CLF) extends the concept of a Lyapunov function to systems with control inputs, testing whether a system is asymptotically stabilizable. While a Lyapunov function determines if a dynamical system is stable or asymptotically stable—meaning states remain in a domain or converge to zero, respectively—a CLF assesses if there exists a control input u(x,t) to bring any state x to zero asymptotically. This theory was advanced by Zvi Artstein and Eduardo D. Sontag in the late 20th century, providing fundamental tools for stabilizing controlled systems.


Definition

Consider an autonomous dynamical system with inputs

$$\dot{x} = f(x, u) \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state vector and $u \in \mathbb{R}^m$ is the control vector. Suppose our goal is to drive the system to an equilibrium $x_* \in \mathbb{R}^n$ from every initial state in some domain $D \subset \mathbb{R}^n$. Without loss of generality, suppose the equilibrium is at $x_* = 0$ (an equilibrium $x_* \neq 0$ can be translated to the origin by a change of variables).

Definition. A control-Lyapunov function (CLF) is a continuously differentiable, positive-definite function $V : D \to \mathbb{R}$ (that is, $V(x) > 0$ for all $x \in D$ except at $x = 0$, where it is zero) such that for every $x \in D$ with $x \neq 0$ there exists $u \in \mathbb{R}^m$ such that

$$\dot{V}(x, u) := \langle \nabla V(x), f(x, u) \rangle < 0,$$

where $\langle u, v \rangle$ denotes the inner product of $u, v \in \mathbb{R}^n$.

The last condition is the key one: it says that for each state $x$ we can find a control $u$ that reduces the "energy" $V$. Intuitively, if from every state we can always find a way to reduce the energy, we should eventually be able to bring the energy to zero asymptotically, that is, to bring the system to rest at the origin. This is made rigorous by Artstein's theorem.
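The condition can be checked concretely on a toy system. The following sketch (in Python; the scalar system $\dot{x} = x + u$, the candidate $V(x) = \tfrac{1}{2}x^2$, and the chosen feedback are illustrative assumptions, not from the text) verifies that a simple control makes $\dot{V}$ negative at every sampled nonzero state:

```python
# Toy system x' = x + u with CLF candidate V(x) = x**2 / 2.
# For the control u = -2x we get V'(x, u) = x*(x + u) = -x**2 < 0 for x != 0,
# so V satisfies the CLF condition along this feedback.

def V(x):
    return 0.5 * x**2

def Vdot(x, u):
    # V'(x, u) = <grad V(x), f(x, u)> = x * (x + u)
    return x * (x + u)

def feedback(x):
    return -2.0 * x  # one choice of u that decreases V

# The CLF condition holds at every sampled nonzero state:
states = [-2.0, -0.5, 0.1, 1.0, 3.0]
assert all(Vdot(x, feedback(x)) < 0 for x in states)
```

The particular feedback is not unique; any $u$ with $x(x+u) < 0$ works at a given $x$, which is exactly the existential quantifier in the definition.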

Some results apply only to control-affine systems, i.e., control systems of the following form:

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x) u_i \qquad (2)$$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g_i : \mathbb{R}^n \to \mathbb{R}^n$ for $i = 1, \dots, m$.

Theorems

Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.[5] It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A.I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.[6] Artstein proved that the control-affine system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x).

Constructing the Stabilizing Input

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system (2), Sontag's formula (or Sontag's universal formula) gives a feedback law $k : \mathbb{R}^n \to \mathbb{R}^m$ directly in terms of the derivatives of the CLF.[7, Eq. 5.56] In the special case of a single-input system ($m = 1$), Sontag's formula is written as

$$k(x) = \begin{cases} \displaystyle -\frac{L_f V(x) + \sqrt{\left[L_f V(x)\right]^2 + \left[L_g V(x)\right]^4}}{L_g V(x)} & \text{if } L_g V(x) \neq 0 \\[2ex] 0 & \text{if } L_g V(x) = 0 \end{cases}$$

where $L_f V(x) := \langle \nabla V(x), f(x) \rangle$ and $L_g V(x) := \langle \nabla V(x), g(x) \rangle$ are the Lie derivatives of $V$ along $f$ and $g$, respectively.
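Sontag's formula for $m = 1$ is direct enough to transcribe into code. The sketch below (Python; the scalar test system $f(x) = x$, $g(x) = 1$ with CLF $V(x) = \tfrac{1}{2}x^2$ is an illustrative assumption, not from the text) evaluates the formula from the two Lie derivatives and checks that the resulting feedback makes $\dot{V}$ strictly negative away from the origin:

```python
import math

def sontag(LfV, LgV):
    """Sontag's universal formula for a single-input control-affine system,
    given the Lie derivatives L_f V and L_g V evaluated at a state."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Scalar example: f(x) = x, g(x) = 1, V(x) = x**2 / 2, so grad V(x) = x,
# L_f V(x) = x**2 and L_g V(x) = x.
def closed_loop_Vdot(x):
    u = sontag(x**2, x)
    return x * (x + u)   # V'(x, u) = <grad V, f + g*u>

# The feedback makes V' strictly negative at every sampled nonzero state:
for x in (-1.5, -0.2, 0.3, 2.0):
    assert closed_loop_Vdot(x) < 0
```

For this example the formula simplifies to $k(x) = -(1 + \sqrt{2})\,x$, so $\dot{V} = -\sqrt{2}\,x^2 < 0$ for $x \neq 0$.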

For the general nonlinear system (1), the input $u$ can be found by solving a static nonlinear programming problem

$$u^*(x) = \operatorname*{arg\,min}_u \, \langle \nabla V(x), f(x, u) \rangle$$

for each state $x$.
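This pointwise minimization can be approximated numerically. In the sketch below (Python), the example dynamics $f(x, u) = x^3 + u$, the candidate $V(x) = \tfrac{1}{2}x^2$, and the coarse grid search standing in for a real nonlinear-programming solver are all illustrative assumptions:

```python
def f(x, u):
    return x**3 + u     # example dynamics, not from the article

def grad_V(x):
    return x            # gradient of V(x) = x**2 / 2

def u_star(x, u_grid):
    # pick the candidate input that minimizes <grad V(x), f(x, u)>
    return min(u_grid, key=lambda u: grad_V(x) * f(x, u))

u_grid = [k / 10.0 for k in range(-50, 51)]   # candidate inputs in [-5, 5]
x = 1.2
u = u_star(x, u_grid)
# the minimizing input makes V' negative at this state
assert grad_V(x) * f(x, u) < 0
```

In practice one would replace the grid by a proper solver and bound $u$ by the actuator limits; the grid merely makes the argmin concrete.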

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the nonlinear system, a mass-spring-damper with spring hardening and position-dependent mass, described by

$$m(1 + q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u$$

Now, given the desired state $q_d$ and actual state $q$, with error $e = q_d - q$, define a function $r$ as

$$r = \dot{e} + \alpha e$$

A control-Lyapunov candidate is then

$$r \mapsto V(r) := \frac{1}{2} r^2$$

which is positive for all $r \neq 0$.

Now taking the time derivative of $V$:

$$\dot{V} = r\dot{r} = (\dot{e} + \alpha e)(\ddot{e} + \alpha \dot{e})$$

The goal is to get the time derivative to be

$$\dot{V} = -\kappa V$$

which is globally exponentially stable if $V$ is globally positive definite (which it is).

Hence we want the rightmost bracket of $\dot{V}$,

$$(\ddot{e} + \alpha \dot{e}) = (\ddot{q}_d - \ddot{q} + \alpha \dot{e})$$

to fulfill the requirement

$$(\ddot{q}_d - \ddot{q} + \alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$$

which, upon substitution of the dynamics $\ddot{q}$, gives

$$\left(\ddot{q}_d - \frac{u - K_0 q - K_1 q^3 - b\dot{q}}{m(1 + q^2)} + \alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$$

Solving for $u$ yields the control law

$$u = m(1 + q^2)\left(\ddot{q}_d + \alpha\dot{e} + \frac{\kappa}{2} r\right) + K_0 q + K_1 q^3 + b\dot{q}$$

with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,

$$\dot{V} = -\kappa V$$

which is a linear first-order differential equation with solution

$$V = V(0)\exp(-\kappa t)$$

Hence, recalling that $V = \tfrac{1}{2}(\dot{e} + \alpha e)^2$, the error and error rate decay exponentially to zero.
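As a numerical sanity check, the closed loop can be simulated directly. In the sketch below (Python), the plant parameters, the gains, the forward-Euler integrator, and the choice of a constant target $q_d = 0$ are all illustrative assumptions; it applies the control law above and confirms that $r$ decays at roughly the predicted rate $\exp(-\kappa t / 2)$:

```python
# Plant parameters and gains (assumed values for illustration).
m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0
alpha, kappa = 1.0, 4.0               # tunable parameters, both > 0

def control(q, qdot):
    # Regulation to q_d = 0, so e = -q, edot = -qdot, and qddot_d = 0.
    e, edot = -q, -qdot
    r = edot + alpha * e
    return m*(1 + q**2)*(alpha*edot + 0.5*kappa*r) + K0*q + K1*q**3 + b*qdot

q, qdot, dt = 1.0, 0.0, 1e-4
r0 = -(qdot + alpha * q)              # initial value of r
for _ in range(20000):                # simulate 2 seconds with forward Euler
    u = control(q, qdot)
    qddot = (u - b*qdot - K0*q - K1*q**3) / (m * (1 + q**2))
    q, qdot = q + dt*qdot, qdot + dt*qddot

r = -(qdot + alpha * q)
# r should have decayed roughly like exp(-kappa*t/2) = exp(-4), i.e. ~2%
assert abs(r) < 0.05 * abs(r0)
```

Because the control law cancels the nonlinear terms exactly, the closed-loop error dynamics here reduce to $\dot{r} = -(\kappa/2)r$, which is what the simulation reproduces up to integration error.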

If you wish to tune a particular response, substitute back into the solution derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:

$$r\dot{r} = -\frac{\kappa}{2} r^2$$

$$\dot{r} = -\frac{\kappa}{2} r$$

$$r = r(0)\exp\left(-\frac{\kappa}{2} t\right)$$

$$\dot{e} + \alpha e = (\dot{e}(0) + \alpha e(0))\exp\left(-\frac{\kappa}{2} t\right)$$

which can then be solved by standard linear differential-equation methods.


References

  1. Isidori, A. (1995). Nonlinear Control Systems. Springer. ISBN 978-3-540-19916-8.

  2. Freeman, Randy A.; Kokotović, Petar V. (2008). "Robust Control Lyapunov Functions". Robust Nonlinear Control Design (illustrated, reprint ed.). Birkhäuser. pp. 33–63. doi:10.1007/978-0-8176-4759-9_3. ISBN 978-0-8176-4758-2.

  3. Khalil, Hassan (2015). Nonlinear Control. Pearson. ISBN 978-0-13-349926-1.

  4. Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). Springer. ISBN 978-0-387-98489-6.

  5. Sontag, E.D. (1983). "A Lyapunov-like characterization of asymptotic controllability". SIAM J. Control Optim. 21 (3): 462–471. doi:10.1137/0321028.

  6. Clarke, F.H.; Ledyaev, Y.S.; Sontag, E.D.; Subbotin, A.I. (1997). "Asymptotic controllability implies feedback stabilization". IEEE Trans. Autom. Control. 42 (10): 1394–1407. doi:10.1109/9.633828.

  7. Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). Springer. ISBN 978-0-387-98489-6.