Experimental applications of the Koopman operator in active learning for control

Thomas A. Berrueta*, Ian Abraham, Todd Murphey

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

5 Scopus citations

Abstract

Experimentation as a setting for learning demands adaptability on the part of the decision-making system. It is typically infeasible for agents to have complete a priori knowledge of an environment, their own dynamics, or the behavior of other agents. To achieve autonomy in robotic applications, learning must occur incrementally, and ideally as a function of decision-making, by exploiting the underlying control system. Most artificial intelligence techniques are ill-suited to experimental settings because they either lack the ability to learn incrementally or do not have information measures with which to guide their learning. This chapter examines the Koopman operator, its application in active learning, and its relationship to alternative learning techniques such as Gaussian processes and kernel ridge regression. Additionally, examples are provided from a variety of experimental applications of the Koopman operator in active learning settings.
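
As a rough illustration of the kind of data-driven Koopman approximation the abstract refers to, the Python sketch below fits a finite-dimensional Koopman matrix by least squares on lifted state snapshots, in the spirit of extended dynamic mode decomposition. This is not the chapter's implementation: the toy dynamics, the choice of basis functions in lift(), and the trajectory length are all illustrative assumptions.

# Minimal sketch (not the chapter's method): least-squares Koopman
# approximation from trajectory data using a hand-picked lifting basis.
import numpy as np

def lift(x):
    """Hypothetical lifting basis: the state, products, and a constant."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1**2, x2**2, 1.0])

def step(x, dt=0.01):
    """Toy nonlinear dynamics (damped Duffing-like oscillator), assumed here."""
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-0.5 * x2 - x1 - x1**3)])

# Collect a trajectory of state snapshots.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=2)
X = [x]
for _ in range(500):
    x = step(x)
    X.append(x)

# Stack lifted snapshot pairs so that Psi_plus ≈ K @ Psi.
Psi = np.column_stack([lift(xi) for xi in X[:-1]])
Psi_plus = np.column_stack([lift(xi) for xi in X[1:]])

# Least-squares estimate of the finite-dimensional Koopman approximation.
K = Psi_plus @ np.linalg.pinv(Psi)

# One-step prediction in the lifted space, projected back to the state
# (the first two basis functions are the state coordinates themselves).
x0 = np.array([0.3, -0.2])
x_pred = (K @ lift(x0))[:2]
print("true next state:        ", step(x0))
print("Koopman one-step predict:", x_pred)

Because the lifted dynamics are linear in K, such models can be updated incrementally as new data arrive, which is what makes this family of techniques attractive for the experimental, learn-while-acting settings the chapter considers.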

Original language: English (US)
Title of host publication: Lecture Notes in Control and Information Sciences
Publisher: Springer
Pages: 421-450
Number of pages: 30
DOIs
State: Published - 2020

Publication series

Name: Lecture Notes in Control and Information Sciences
Volume: 484
ISSN (Print): 0170-8643
ISSN (Electronic): 1610-7411

ASJC Scopus subject areas

  • Library and Information Sciences
