TY - CHAP
T1 - Autonomous Visual Rendering using Physical Motion
AU - Prabhakar, Ahalya
AU - Mavrommati, Anastasia
AU - Schultz, Jarvis
AU - Murphey, Todd D.
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under grants CMMI 1334609 and IIS 1426961. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - This paper addresses the problem of enabling a robot to represent and recreate visual information through physical motion, focusing on drawing using pens, brushes, or other tools. This work uses ergodicity as a control objective that translates planar visual input to physical motion without preprocessing (e.g., image processing, motion primitives). We achieve comparable results to existing drawing methods, while reducing the algorithmic complexity of the software. We demonstrate that optimal ergodic control algorithms with different time-horizon characteristics (infinitesimal, finite, and receding horizon) can generate qualitatively and stylistically different motions that render a wide range of visual information (e.g., letters, portraits, landscapes). In addition, we show that ergodic control enables the same software design to apply to multiple robotic systems by incorporating their particular dynamics, thereby reducing the dependence on task-specific robots. Finally, we demonstrate physical drawings with the Baxter robot.
KW - Automation
KW - Motion control
KW - Robot art
UR - http://www.scopus.com/inward/record.url?scp=85107060612&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107060612&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-43089-4_6
DO - 10.1007/978-3-030-43089-4_6
M3 - Chapter
AN - SCOPUS:85107060612
T3 - Springer Proceedings in Advanced Robotics
SP - 80
EP - 95
BT - Springer Proceedings in Advanced Robotics
PB - Springer Science and Business Media B.V.
ER -