Memory-efficient learning of stable linear dynamical systems for prediction and control

Giorgos Mamakoukas, Orest Xherija, Todd Murphey

Research output: Contribution to journal › Conference article › peer-review

18 Scopus citations

Abstract

Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach—in contrast to current methods for learning stable LDSs—updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-of-magnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory efficient, with an O(n²) space complexity compared to O(n⁴) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https://github.com/giorgosmamakoukas/MemoryEfficientStableLDS.
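To make the setup concrete, the sketch below is not the paper's algorithm; it only illustrates the underlying problem the abstract describes: fitting a discrete-time LDS x(t+1) = A x(t) + B u(t) from state/input data by ordinary least squares, then checking stability of the learned state matrix. A discrete-time LDS is stable when the spectral radius of A is at most 1 — the property the paper's method enforces at every optimization step. The system sizes and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 4, 2, 200  # state dim, input dim, number of samples (hypothetical)

# Ground-truth stable system used only to generate training data:
# 0.9 times an orthogonal matrix has spectral radius 0.9 < 1.
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]
B_true = rng.standard_normal((n, m))

# Simulate trajectories driven by random inputs.
U = rng.standard_normal((m, T))
X = np.zeros((n, T + 1))
for t in range(T):
    X[:, t + 1] = A_true @ X[:, t] + B_true @ U[:, t]

# Least-squares fit of [A B]: solve  X_next ≈ [A B] @ [X; U].
Z = np.vstack([X[:, :T], U])
AB = X[:, 1:] @ np.linalg.pinv(Z)
A_hat, B_hat = AB[:, :n], AB[:, n:]

# Stability check: spectral radius of the learned state matrix.
spectral_radius = max(abs(np.linalg.eigvals(A_hat)))
print(f"spectral radius of learned A: {spectral_radius:.3f}")
```

With noiseless data the least-squares solution recovers the true matrices, so the learned model happens to be stable here; with noisy or limited data, plain least squares can return an unstable A even when the true system is stable, which is the failure mode that constrained methods like the one in this paper address.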

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6, 2020 to Dec 12, 2020

Funding

First and foremost, we thank Nicolas Gillis for the communication and useful discussions about the fast gradient method. We also thank Ian Abraham for his help with the experimental testing on the Franka Emika Panda robot and Wenbing Huang for very kindly providing us with the datasets and results used previously to test the WLS algorithm. We also thank the anonymous reviewers for their invaluable comments that helped improve the quality of this manuscript. Last, we gratefully acknowledge the Research Computing Center (RCC) of the University of Chicago for providing the computing resources to execute our experiments and simulations. This work is supported by the National Science Foundation (IIS-1717951). Any opinions, findings, and conclusions or recommendations expressed in this material are solely those of the author(s) and do not necessarily reflect the views of any of the funding agencies or organizations.

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
