Active Learning of Dynamics for Data-Driven Control Using Koopman Operators

Ian Abraham, Todd D. Murphey

Research output: Contribution to journal › Article


Abstract

This paper presents an active learning strategy for robotic systems that accounts for task information, enables fast learning, and allows control to be readily synthesized by taking advantage of the Koopman operator representation. We first motivate representing nonlinear systems as linear Koopman operator systems by illustrating the improved model-based control performance on an actuated Van der Pol system. Information-theoretic methods are then applied to the Koopman operator formulation of dynamical systems, from which we derive a controller for active learning of robot dynamics. The active learning controller is shown to increase the rate of information gain about the Koopman operator. In addition, the controller can readily incorporate policies built on the Koopman dynamics, combining the benefits of fast active learning and improved control. Results with a quadcopter illustrate single-execution active learning and stabilization during free fall. We further extend the active learning results to automate the choice of Koopman observables and implement our method on real robotic systems.
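Since the abstract centers on representing a nonlinear system (the Van der Pol oscillator) as a linear Koopman operator system, the following minimal sketch illustrates that lifting step via extended dynamic mode decomposition (EDMD). The polynomial observable dictionary, simulation settings, and function names are illustrative assumptions, not the paper's implementation, and the control and active-learning layers described in the paper are omitted.

    import numpy as np

    def vdp(x, mu=1.0):
        # Van der Pol vector field dx/dt = f(x) (unforced, kept simple here).
        return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])

    def step(x, dt=0.01, mu=1.0):
        # One forward-Euler step; a crude integrator chosen to keep the sketch short.
        return x + dt * vdp(x, mu)

    def lift(x):
        # Lift the state into an assumed dictionary of polynomial observables.
        x1, x2 = x
        return np.array([x1, x2, x1 ** 2, x1 * x2, x2 ** 2, 1.0])

    # Collect snapshot pairs (x_k, x_{k+1}) from short random trajectories.
    rng = np.random.default_rng(0)
    X, Y = [], []
    for _ in range(200):
        x = rng.uniform(-2.0, 2.0, size=2)
        for _ in range(50):
            x_next = step(x)
            X.append(lift(x))
            Y.append(lift(x_next))
            x = x_next
    X, Y = np.array(X).T, np.array(Y).T  # columns are lifted snapshots

    # Least-squares fit of the finite-dimensional Koopman matrix K with Y ≈ K X.
    K = Y @ np.linalg.pinv(X)

    # One-step prediction check from a new initial condition.
    x0 = np.array([1.0, 0.0])
    print("Euler step:        ", step(x0))
    print("Koopman prediction:", (K @ lift(x0))[:2])

Once such a lifted linear model is available, standard linear model-based controllers can be synthesized in the lifted space; the paper builds its information-theoretic active learning controller on top of this kind of representation.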

Original language: English (US)
Article number: 8759089
Pages (from-to): 1071-1083
Number of pages: 13
Journal: IEEE Transactions on Robotics
Volume: 35
Issue number: 5
DOIs
State: Published - Oct 2019


Keywords

  • Active learning
  • Koopman operators
  • information theoretic control
  • single execution learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
