Analogical Word Sense Disambiguation

D. Barbella, K. Forbus

Research output: Contribution to journal › Article › peer-review

Abstract

Word sense disambiguation is an important problem in learning by reading. This paper introduces analogical word sense disambiguation, which uses human-like analogical processing over structured, relational representations to perform word sense disambiguation. Cases are automatically constructed using representations produced via natural language analysis of sentences, and include both conceptual and linguistic information. Learning occurs via processing cases with the SAGE model of analogical generalization, which constructs probabilistic relational representations from cases that are sufficiently similar, but also stores outliers. Disambiguation is performed by using analogical retrieval over generalizations and stored examples to provide evidence for new word occurrences based on prior experience. We present experiments demonstrating that analogical word sense disambiguation, using representations that are suitable for learning by reading, yields accuracies comparable to traditional algorithms operating over feature-based representations.
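The learning-and-retrieval loop the abstract describes can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: sets of relational facts stand in for SAGE's structured cases, Jaccard overlap stands in for structure-mapping similarity, and merging by intersection stands in for probabilistic generalization. The class name, threshold, and example facts are all illustrative assumptions.

```python
# Hypothetical sketch of SAGE-style analogical word sense disambiguation.
# Cases are sets of relational facts (strings here); sufficiently similar
# same-sense cases are merged into a generalization, dissimilar cases are
# stored as outliers, and a new occurrence is labeled by retrieving the
# most similar stored item.

def jaccard(a, b):
    """Set-overlap similarity; a stand-in for structural similarity."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

class SenseLibrary:
    def __init__(self, assimilation_threshold=0.3):
        # Threshold is illustrative; SAGE's assimilation criterion differs.
        self.threshold = assimilation_threshold
        self.items = []  # list of (facts, sense): generalizations or outliers

    def learn(self, facts, sense):
        facts = frozenset(facts)
        best_i, best_sim = None, 0.0
        for i, (stored, s) in enumerate(self.items):
            if s == sense and jaccard(facts, stored) > best_sim:
                best_i, best_sim = i, jaccard(facts, stored)
        if best_i is not None and best_sim >= self.threshold:
            stored, s = self.items[best_i]
            # Keep only the shared structure (crude proxy for a
            # probabilistic generalization over aligned facts).
            self.items[best_i] = (stored & facts, s)
        else:
            self.items.append((facts, sense))  # store as an outlier

    def disambiguate(self, facts):
        """Retrieve the most similar generalization/outlier as evidence."""
        facts = frozenset(facts)
        return max(self.items, key=lambda item: jaccard(facts, item[0]))[1]

lib = SenseLibrary()
lib.learn({"isa(bank, Institution)", "objectOf(deposit, bank)"}, "bank/finance")
lib.learn({"isa(bank, Institution)", "objectOf(rob, bank)"}, "bank/finance")
lib.learn({"isa(bank, Terrain)", "near(river, bank)"}, "bank/river")
```

After learning, a new occurrence such as `{"isa(bank, Institution)"}` retrieves the financial generalization, while `{"near(river, bank)"}` retrieves the riverbank outlier.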
Original language: English
Pages (from-to): 297-315
Journal: Advances in Cognitive Systems
Volume: 2
State: Published - 2013
