Learning Memory-Efficient Stable Linear Dynamical Systems for Prediction and Control

Giorgos Mamakoukas, Orest Xherija, Todd Murphey

Research output: Contribution to journal › Article › peer-review

Abstract

Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach, in contrast to current methods for learning stable LDSs, updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves orders-of-magnitude lower reconstruction error and superior control performance. In addition, it is provably more memory-efficient, with O(n²) space complexity compared to the O(n⁴) of competing alternatives, and thus scales to higher-dimensional systems where the other methods fail.
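For context, the sketch below (Python/NumPy, not taken from the paper) sets up the problem the abstract describes: fitting the state matrix A and control matrix B of an LDS x_{t+1} ≈ A x_t + B u_t by unconstrained least squares, then checking discrete-time stability via the spectral radius. The unconstrained fit minimizes reconstruction error but offers no stability guarantee, which is the gap the proposed algorithm addresses; the function names and random data are illustrative assumptions, and the paper's stability-constrained optimization itself is not reproduced here.

```python
import numpy as np

def fit_lds_least_squares(X, U):
    """Unconstrained least-squares fit of x_{t+1} ~= A x_t + B u_t.

    X : (n, T+1) array of state snapshots
    U : (m, T)   array of control inputs
    Returns the state matrix A (n x n) and control matrix B (n x m).
    """
    X_now, X_next = X[:, :-1], X[:, 1:]
    Z = np.vstack([X_now, U])          # stacked regressors, shape (n+m, T)
    AB = X_next @ np.linalg.pinv(Z)    # solves for [A B] via pseudo-inverse
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]

def is_stable(A, tol=1e-10):
    """Discrete-time stability: spectral radius of A strictly below 1."""
    return np.max(np.abs(np.linalg.eigvals(A))) < 1 - tol

# Toy usage with random data (illustration only): the unconstrained fit
# may or may not return a stable A, motivating stability-constrained learning.
rng = np.random.default_rng(0)
n, m, T = 4, 2, 200
X = rng.standard_normal((n, T + 1))
U = rng.standard_normal((m, T))
A, B = fit_lds_least_squares(X, U)
print("least-squares A stable?", is_stable(A))
```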

Original language: English (US)
Journal: Unknown Journal
State: Published - Jun 6 2020

ASJC Scopus subject areas

  • General

