Abstract
One of the central issues in cognitive science is the nature of human representations. We argue that symbolic representations are essential for capturing human cognitive capabilities. We start by examining some common misconceptions found in discussions of representations and models. Next we examine evidence for this claim, drawing on the analogy literature. Then we examine fundamental limitations of feature vectors and other distributed representations that, despite their recent successes on various practical problems, suggest that they are insufficient to capture many aspects of human cognition. After that, we describe the implications for cognitive architecture of our view that analogy is central, and we speculate on roles for hybrid approaches. We close with an analogy that might help bridge the gap.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 694-718 |
| Number of pages | 25 |
| Journal | Topics in Cognitive Science |
| Volume | 9 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 2017 |
Funding
This work was supported by the Office of Naval Research, through the Intelligent and Autonomous Systems Program and Socio-Cognitive Architectures Program, as well as by the NSF-Funded Spatial Intelligence and Learning Center (SBE-1041707) and the Air Force Office of Scientific Research (FA2386-10-1-4128). We thank Dedre Gentner, Bryan Pardo, Doug Downey, Michael Witbrock, Peter Norvig, Praveen Paritosh, and Johan de Kleer for useful discussions.
Keywords
- Analogy
- Computational modeling
- Learning
- Machine learning
- Relational representations
- Representation
- Symbolic modeling
ASJC Scopus subject areas
- Experimental and Cognitive Psychology
- Linguistics and Language
- Human-Computer Interaction
- Cognitive Neuroscience
- Artificial Intelligence