Inverse optimality in robust stabilization

R. A. Freeman*, P. V. Kokotovic

*Corresponding author for this work

Research output: Contribution to journal › Article

276 Scopus citations

Abstract

The concept of a robust control Lyapunov function (rclf) is introduced, and it is shown that the existence of an rclf for a control-affine system is equivalent to robust stabilizability via continuous state feedback. This extends Artstein's theorem on nonlinear stabilizability to systems with disturbances. It is then shown that every rclf satisfies the steady-state Hamilton-Jacobi-Isaacs (HJI) equation associated with a meaningful game and that every member of a class of pointwise min-norm control laws is optimal for such a game. These control laws have desirable properties of optimality and can be computed directly from the rclf without solving the HJI equation for the upper value function.
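The pointwise min-norm idea in the abstract can be illustrated in the simpler disturbance-free CLF setting: given a control-affine system ẋ = f(x) + g(x)u and a CLF V, one picks the smallest-norm u satisfying the decrease condition L_fV + L_gV·u ≤ -σ(x). The sketch below is a hypothetical illustration of that closed-form projection, not the paper's rclf construction (which handles worst-case disturbances); the function name, the example system ẋ = x + u, and the choice σ(x) = x² are all assumptions for demonstration.

```python
import numpy as np

def min_norm_control(LfV, LgV, sigma):
    """Smallest-norm u satisfying LfV + LgV @ u <= -sigma
    (a standard CLF decrease condition).  If u = 0 already
    satisfies the constraint, use it; otherwise project onto
    the constraint boundary."""
    LgV = np.atleast_1d(np.asarray(LgV, dtype=float))
    slack = LfV + sigma
    if slack <= 0.0:
        return np.zeros_like(LgV)  # no control effort needed
    return -slack * LgV / np.dot(LgV, LgV)

# Example (assumed): scalar system x' = x + u with V(x) = x^2/2,
# so LfV = x^2 and LgV = x; require the decrease rate sigma(x) = x^2.
x = 1.0
u = min_norm_control(LfV=x**2, LgV=x, sigma=x**2)
# Check the decrease condition: V' = x*(x + u) <= -sigma(x)
```

The closed form shows why such laws are computable directly from the Lyapunov function: no Hamilton-Jacobi equation is solved, only a pointwise projection at each state.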

Original language: English (US)
Pages (from-to): 1365-1391
Number of pages: 27
Journal: SIAM Journal on Control and Optimization
Volume: 34
Issue number: 4
DOIs
State: Published - Jan 1 1996

Keywords

  • Control Lyapunov functions
  • Differential games
  • Input-to-state stability
  • Nonlinear systems
  • Robust stabilization

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics
