Some effects of a reduced relational vocabulary on the Whodunit problem

Daniel T. Halstead, Kenneth D. Forbus

Research output: Contribution to journal › Conference article › peer-review

2 Scopus citations


A key issue in artificial intelligence is determining how much input detail is needed for successful learning. Too much detail causes overhead and makes learning prone to over-fitting; too little, and it may not be possible to learn anything at all. The issue is particularly relevant when the inputs are relational case descriptions, since a very expressive vocabulary may also lead to inconsistent representations. For example, in the Whodunit Problem, the task is to form hypotheses about the identity of the perpetrator of an event described using relational propositions. The training data consists of arbitrary relational descriptions of many other similar cases. In this paper, we examine the possibility of translating the case descriptions into an alternative vocabulary that has fewer predicates and therefore produces more consistent case descriptions. We compare how the reduced vocabulary affects three different learning algorithms: exemplar-based analogy, prototype-based analogy, and association rule learning. We find that it has a positive effect on some algorithms and a negative effect on others, which gives us insight into all three algorithms and indicates when reduced vocabularies might be appropriate.
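The core idea of the abstract can be sketched as a translation step that maps many specific predicates onto a smaller, more general set, so that similar cases end up with more uniform descriptions. The predicate names, the mapping, and the tuple-based case representation below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of reduced-vocabulary translation.
# Predicate names and the mapping are assumptions for illustration only.

# Many expressive predicates collapse onto a few general ones.
REDUCED_VOCAB = {
    "stabbed": "attacked",
    "shot": "attacked",
    "poisoned": "attacked",
    "droveTo": "movedTo",
    "walkedTo": "movedTo",
}

def reduce_case(case):
    """Rewrite each relational proposition (pred, *args) using the
    reduced predicate vocabulary; unknown predicates pass through."""
    return [(REDUCED_VOCAB.get(pred, pred), *args) for (pred, *args) in case]

case = [("stabbed", "suspect1", "victim"),
        ("droveTo", "suspect1", "scene")]
print(reduce_case(case))
# → [('attacked', 'suspect1', 'victim'), ('movedTo', 'suspect1', 'scene')]
```

Under this kind of mapping, two cases that originally used `stabbed` and `shot` become structurally identical, which is the consistency the abstract argues can help some learners (e.g. prototype-based analogy) while discarding distinctions that others may need.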

Original language: English (US)
Pages (from-to): 411-416
Number of pages: 6
Journal: IJCAI International Joint Conference on Artificial Intelligence
State: Published - Dec 1 2007
Event: 20th International Joint Conference on Artificial Intelligence, IJCAI 2007 - Hyderabad, India
Duration: Jan 6 2007 – Jan 12 2007

ASJC Scopus subject areas

  • Artificial Intelligence
