TY - GEN
T1 - Stabilizing environments to facilitate planning and activity
T2 - 9th National Conference on Artificial Intelligence, AAAI 1991
AU - Hammond, Kristian J.
AU - Converse, Timothy M.
N1 - Funding Information:
This work was supported in part by the Defense Advanced Research Projects Agency, monitored by the Air Force Office of Scientific Research under contract F49620-88-C-0058, the Office of Naval Research under contracts N0014-85-K-010 and N00014-91-J-1185, and the Air Force Office of Scientific Research under contract 91-0112.
Publisher Copyright:
© 1991, AAAI (www.aaai.org). All rights reserved.
PY - 1991
Y1 - 1991
N2 - An underlying assumption of research on learning from planning and activity is that agents can exploit regularities they find in the world. For agents that interact with a world over an extended period of time, there is another possibility: the exploited regularities can be created and maintained, rather than discovered. We explore the ways in which agents can actively stabilize the world to increase the predictability and tractability of acting within it.
AB - An underlying assumption of research on learning from planning and activity is that agents can exploit regularities they find in the world. For agents that interact with a world over an extended period of time, there is another possibility: the exploited regularities can be created and maintained, rather than discovered. We explore the ways in which agents can actively stabilize the world to increase the predictability and tractability of acting within it.
UR - http://www.scopus.com/inward/record.url?scp=3042926136&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=3042926136&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:3042926136
T3 - Proceedings of the 9th National Conference on Artificial Intelligence, AAAI 1991
SP - 787
EP - 793
BT - Proceedings of the 9th National Conference on Artificial Intelligence, AAAI 1991
PB - AAAI Press
Y2 - 14 July 1991 through 19 July 1991
ER -