Abstract
Transfer learning is the ability to apply previously learned knowledge to new problems or domains. In qualitative reasoning, model formulation is the process of moving from the unruly, broad set of concepts used in everyday life to a concise, formal vocabulary of abstractions, assumptions, causal relationships, and models that support problem-solving. Approaching transfer learning from a model formulation perspective, we found that analogy with examples can be used to learn how to solve AP Physics style problems. We call this process analogical model formulation and implement it in the Companion cognitive architecture. A Companion begins with some basic mathematical skills, a broad common sense ontology, and some qualitative mechanics, but no equations. The Companion uses worked solutions, explanations of example problems at the level of detail appearing in textbooks, to learn which equations are relevant, how to use them, and what assumptions are necessary to solve physics problems. We present an experiment, conducted by the Educational Testing Service, demonstrating that analogical model formulation enables a Companion to learn to solve AP Physics style problems. Across six different variations of relationships between base and target problems, or transfer levels, a Companion exhibited a 63% improvement in initial performance. Although this is already a significant result, we present an in-depth analysis of the experiment to pinpoint the causes of failures. Interestingly, the failures were primarily due to errors in the externally generated problem and worked-solution representations, as well as to some domain-specific problem-solving strategies, not to analogical model formulation itself. To verify this, we describe a second experiment conducted after these problems were fixed. In this second experiment, a Companion achieved a 95.8% improvement in initial performance due to transfer, a nearly perfect result. We know of no other problem-solving experiments that demonstrate analogical learning over systematic variations of relationships between problems at this scale.
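To make the abstract's description concrete, the following is a minimal, hypothetical Python sketch of the idea behind analogical model formulation: given a new problem, retrieve the most similar worked solution from a library and transfer its equations and modeling assumptions as candidates for solving the new problem. The class and function names (`WorkedSolution`, `retrieve_analogue`, `formulate_model`) are illustrative inventions, not the Companion architecture's API; the actual system performs analogical matching over rich relational representations rather than the simple fact-overlap scoring used here.

```python
# Toy sketch of analogical model formulation (illustrative only; not the
# Companion architecture's implementation or representations).
from dataclasses import dataclass


@dataclass
class WorkedSolution:
    name: str
    facts: set          # facts describing the example problem
    equations: list     # equations used in its worked solution
    assumptions: list   # modeling assumptions made in the solution


@dataclass
class Problem:
    name: str
    facts: set


def retrieve_analogue(problem, library):
    """Pick the worked solution whose facts overlap most with the new
    problem (a crude stand-in for analogical retrieval)."""
    return max(library, key=lambda ws: len(ws.facts & problem.facts))


def formulate_model(problem, library):
    """Transfer the analogue's equations and assumptions as candidate
    model fragments for the new problem."""
    analogue = retrieve_analogue(problem, library)
    return {
        "analogue": analogue.name,
        "candidate_equations": list(analogue.equations),
        "candidate_assumptions": list(analogue.assumptions),
    }


if __name__ == "__main__":
    library = [
        WorkedSolution(
            name="dropped-ball",
            facts={"free-fall", "constant-acceleration", "starts-at-rest"},
            equations=["d = 0.5 * g * t**2", "v = g * t"],
            assumptions=["neglect air resistance"],
        ),
        WorkedSolution(
            name="block-on-incline",
            facts={"inclined-plane", "constant-acceleration", "friction"},
            equations=["F = m * a", "f = mu * N"],
            assumptions=["rigid body"],
        ),
    ]
    new_problem = Problem(
        name="dropped-keys",
        facts={"free-fall", "constant-acceleration", "starts-at-rest"},
    )
    print(formulate_model(new_problem, library))
```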
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1615-1638 |
| Number of pages | 24 |
| Journal | Artificial Intelligence |
| Volume | 173 |
| Issue number | 18 |
| DOIs | |
| State | Published - Dec 2009 |
Funding
This research was supported by DARPA under the Transfer Learning program. We thank Patrick Kyllonen, Catherine Trapani, and Vincent Weng at the Educational Testing Service for help in experimental design, generating the testing materials, and administering the evaluation. We thank Cynthia Matuszek, Blake Shepard, and Casey McGinnis at Cycorp for helping work out the representation conventions for problems and worked solutions. We also thank Thomas Hinrichs, Jeff Usher, Praveen Paritosh, and Emmett Tomai for many interesting ideas and their work with us on building the Companion cognitive architecture.
Keywords
- Analogical reasoning
- Case-based reasoning
- Model formulation
- Transfer learning
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language
- Artificial Intelligence