Autonomous visual rendering using physical motion

Ahalya Prabhakar, Anastasia Mavrommati, Jarvis Schultz, Todd D. Murphey

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses the problem of enabling a robot to represent and recreate visual information through physical motion, focusing on drawing using pens, brushes, or other tools. This work uses ergodicity as a control objective that translates planar visual input to physical motion without preprocessing (e.g., image processing, motion primitives). We achieve comparable results to existing drawing methods, while reducing the algorithmic complexity of the software. We demonstrate that optimal ergodic control algorithms with different time-horizon characteristics (infinitesimal, finite, and receding horizon) can generate qualitatively and stylistically different motions that render a wide range of visual information (e.g., letters, portraits, landscapes). In addition, we show that ergodic control enables the same software design to apply to multiple robotic systems by incorporating their particular dynamics, thereby reducing the dependence on task-specific robots. Finally, we demonstrate physical drawings with the Baxter robot.
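For context, ergodic control drives a trajectory so that its time-averaged statistics match a target spatial distribution, here derived from the input image. The sketch below is a minimal illustration of that metric, not the paper's implementation: the image-to-distribution mapping, the number of cosine basis functions K, and the frequency-weighting exponent are all assumptions made for the example.

```python
import numpy as np

def basis_norm(k1, k2):
    """L2 norm h_k of cos(pi*k1*x)*cos(pi*k2*y) on the unit square."""
    a = 1.0 if k1 == 0 else 0.5
    b = 1.0 if k2 == 0 else 0.5
    return np.sqrt(a * b)

def distribution_coefficients(image, K=10):
    """Cosine coefficients phi_k of a target distribution built from a grayscale
    image (dark pixels = regions the drawing trajectory should cover)."""
    phi = 1.0 - image / image.max()          # ink -> high density (assumed mapping)
    phi /= phi.sum()                         # normalize to a probability mass grid
    ny, nx = phi.shape
    xs = (np.arange(nx) + 0.5) / nx
    ys = (np.arange(ny) + 0.5) / ny
    X, Y = np.meshgrid(xs, ys)
    coeffs = np.zeros((K, K))
    for k1 in range(K):
        for k2 in range(K):
            f_k = np.cos(np.pi * k1 * X) * np.cos(np.pi * k2 * Y) / basis_norm(k1, k2)
            coeffs[k1, k2] = (phi * f_k).sum()   # expectation of f_k under phi
    return coeffs

def trajectory_coefficients(traj, K=10):
    """Time-averaged coefficients c_k of a trajectory (N x 2 points in [0,1]^2)."""
    coeffs = np.zeros((K, K))
    for k1 in range(K):
        for k2 in range(K):
            f_k = np.cos(np.pi * k1 * traj[:, 0]) * np.cos(np.pi * k2 * traj[:, 1]) / basis_norm(k1, k2)
            coeffs[k1, k2] = f_k.mean()          # time average along the trajectory
    return coeffs

def ergodic_metric(phi_k, c_k):
    """Weighted distance between trajectory statistics and the target distribution."""
    K = phi_k.shape[0]
    k1, k2 = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
    Lambda = (1.0 + k1 ** 2 + k2 ** 2) ** -1.5   # de-emphasize high frequencies (assumed weighting)
    return float((Lambda * (c_k - phi_k) ** 2).sum())
```

An ergodic controller (infinitesimal, finite, or receding horizon, as in the abstract) would then choose motions that decrease this metric over time; the details of those controllers are given in the paper itself.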

Original language: English (US)
Journal: Unknown Journal
State: Published - Sep 8 2017

Keywords

  • Automation
  • Motion control
  • Robot art

ASJC Scopus subject areas

  • General
