What's there to talk about? A multi-modal model of referring behavior in the presence of shared visual information

Darren Gergle*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper describes the development of a rule-based computational model of how a feature-based representation of shared visual information combines with linguistic cues to enable effective reference resolution. The work explores a language-only model, a visual-only model, and an integrated model of reference resolution, and applies them to a corpus of transcribed task-oriented spoken dialogues. Preliminary results from a corpus-based analysis suggest that integrating information from a shared visual environment can improve the performance and quality of existing discourse-based models of reference resolution.
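The abstract does not specify how the integrated model fuses its two information sources. Purely as an illustrative sketch of the general idea, not the paper's actual model, one simple way to combine a language-only and a visual-only signal is a weighted score over candidate referents; every name, feature, and weight below is an assumption:

```python
# Illustrative sketch only: combine a linguistic salience score with a
# visual salience score for each candidate referent, then choose the
# best-scoring candidate. The features and weights are hypothetical and
# are NOT taken from the paper.

def resolve_reference(candidates, w_lang=0.5, w_vis=0.5):
    """Return the candidate with the highest combined salience score.

    Each candidate is a dict with precomputed scores:
      'lang_score' -- linguistic salience (e.g., recency of mention)
      'vis_score'  -- visual salience (e.g., visibility in the shared view)
    """
    def combined(c):
        return w_lang * c["lang_score"] + w_vis * c["vis_score"]
    return max(candidates, key=combined)

candidates = [
    {"id": "red block",  "lang_score": 0.9, "vis_score": 0.1},  # recently mentioned, not visible
    {"id": "blue block", "lang_score": 0.3, "vis_score": 0.8},  # visible in the shared workspace
]
best = resolve_reference(candidates)  # integrated model favors the visible referent
```

Setting `w_vis=0` reduces this sketch to a language-only model, which is one way to see why adding the visual channel can change, and potentially improve, which referent is selected.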

Original language: English (US)
Title of host publication: EACL 2006 - 11th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference
Pages: 7-14
Number of pages: 8
State: Published - 2006
Event: 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2006 - Trento, Italy
Duration: Apr 3, 2006 - Apr 7, 2006

Publication series

Name: EACL 2006 - 11th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference

Other

Other: 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2006
Country/Territory: Italy
City: Trento
Period: 4/3/06 - 4/7/06

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
