Abstract
Distributed representations of words have been shown to capture lexical semantics, as demonstrated by their effectiveness in word similarity and analogical relation tasks. However, these tasks only evaluate lexical semantics indirectly. In this paper, we study whether it is possible to utilize distributed representations to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings' semantics. We introduce definition modeling, the task of generating a definition for a given word and its embedding. We present several definition model architectures based on recurrent neural networks, and experiment with the models on multiple data sets. Our results show that a model that controls dependencies between the word being defined and the definition words performs significantly better, and that a character-level convolution layer designed to leverage morphology can complement word-level embeddings. Finally, an error analysis suggests that the errors made by a definition model may provide insight into the shortcomings of word embeddings.
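To make the described architecture concrete, below is a minimal sketch of a definition model, assuming PyTorch and toy hyperparameters. All names (`DefinitionModel`, `char_cnn`, the dimension sizes) are illustrative assumptions, not the authors' code: an RNN decoder generates the definition token by token while being conditioned at every step on the defined word's embedding and on a character-level convolution over its spelling, mirroring the abstract's two key ingredients.

```python
# Illustrative sketch only; hyperparameters and module names are assumptions.
import torch
import torch.nn as nn

class DefinitionModel(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, embed_dim=64,
                 char_dim=16, hidden_dim=128):
        super().__init__()
        # Embedding of the word being defined (a pretrained word
        # embedding in the paper's setting).
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Character-level convolution over the defined word's spelling,
        # intended to capture morphological cues.
        self.char_embed = nn.Embedding(char_vocab_size, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, embed_dim, kernel_size=3, padding=1)
        # Decoder RNN that generates the definition word by word.
        self.def_embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim * 3, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, char_ids, def_prefix):
        # word_ids: (B,), char_ids: (B, L_chars), def_prefix: (B, T)
        w = self.word_embed(word_ids)                        # (B, E)
        c = self.char_embed(char_ids).transpose(1, 2)        # (B, C, L)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values   # (B, E)
        tok = self.def_embed(def_prefix)                     # (B, T, E)
        # Concatenate the defined word's vectors to every decoder step,
        # so each definition word depends on the word being defined.
        cond = torch.cat([w, c], dim=-1).unsqueeze(1).expand(-1, tok.size(1), -1)
        h, _ = self.rnn(torch.cat([tok, cond], dim=-1))
        return self.out(h)                                   # (B, T, V)

# Example forward pass on dummy inputs.
model = DefinitionModel(vocab_size=1000, char_vocab_size=50)
logits = model(torch.tensor([3]),
               torch.randint(0, 50, (1, 7)),
               torch.randint(0, 1000, (1, 5)))
```

Training would minimize cross-entropy between `logits` and the shifted definition tokens; the paper's stronger variants differ mainly in how the conditioning on the defined word is injected into the decoder.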
| Original language | English (US) |
| --- | --- |
| Pages | 3259-3266 |
| Number of pages | 8 |
| State | Published - 2017 |
| Event | 31st AAAI Conference on Artificial Intelligence, AAAI 2017 - San Francisco, United States. Duration: Feb 4 2017 → Feb 10 2017 |
Other

| Other | 31st AAAI Conference on Artificial Intelligence, AAAI 2017 |
| --- | --- |
| Country/Territory | United States |
| City | San Francisco |
| Period | 2/4/17 → 2/10/17 |
ASJC Scopus subject areas
- Artificial Intelligence