TY - JOUR
T1 - Explaining and repairing plans that fail
AU - Hammond, Kristian J.
N1 - Funding Information:
This paper describes work done at the Yale Artificial Intelligence Lab under the direction of Roger Schank. This work was supported by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contracts N00014-82-K-0149, N00014-85-K-0108, and N00014-75-C-1111, NSF grant IST-8120451, and Air Force contract F49620-82-K-0010. It was also supported by the Defense Advanced Research Projects Agency, monitored by the Air Force Office of Scientific Research under contract F49620-88-C-0058, and the Office of Naval Research under contract N0014-85-K-010.
PY - 1990/9
Y1 - 1990/9
AB - A persistent problem in machine planning is that of repairing plans that fail. One approach to this problem that has been shown to be quite powerful is based on the idea that detailed descriptions of the causes of a failure can be used to decide between the different repairs that can be applied. This paper presents an approach to repair in which plan failures are described in terms of causal explanations of why they occurred. These domain-level explanations are used to access abstract repair strategies, which are then used to make specific changes to the faulty plans. The approach is demonstrated using examples from CHEF, a case-based planner that creates and debugs plans in the domain of Szechwan cooking. While the approach discussed here is examined in terms of actual plan failures, this technique can also be used in the repair of plans that are discovered to be faulty prior to their actual running.
UR - http://www.scopus.com/inward/record.url?scp=0025488956&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0025488956&partnerID=8YFLogxK
U2 - 10.1016/0004-3702(90)90040-7
DO - 10.1016/0004-3702(90)90040-7
M3 - Article
AN - SCOPUS:0025488956
SN - 0004-3702
VL - 45
SP - 173
EP - 228
JO - Artificial Intelligence
JF - Artificial Intelligence
IS - 1-2
ER -