Spoken language acquisition via human-robot interaction

Qiong Liu*, Thomas Huang, Ying Wu, Stephen Levinson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a subproject of a larger project that explores teaching a computer human intelligence. In the subproject, a multisensory mobile robot serves as the interface for human-computer interaction, and spoken language is taught to the computer through natural human-robot interaction. Unlike state-of-the-art speech recognizers, our approach associates speech patterns directly with the robot's sensory inputs. This allows the system to learn multilingual speech patterns online. Further work on this project will include human-computer interaction involving more modalities, and applications of the proposed idea to training home appliances.
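The abstract does not detail the association mechanism, so the following is only a rough, hypothetical sketch of the general idea it describes: storing spoken-word feature vectors together with the sensory percepts they co-occur with, and interpreting new utterances by nearest-neighbor lookup. The class name, feature vectors, and 1-NN matching are illustrative assumptions, not the authors' method.

```python
import numpy as np

class SpeechGrounder:
    """Hypothetical sketch: ground speech patterns in co-occurring sensory percepts."""

    def __init__(self):
        self.patterns = []   # stored acoustic feature vectors
        self.percepts = []   # sensory percepts observed alongside each pattern

    def observe(self, speech_features, percept):
        """Store a speech pattern together with the percept it co-occurred with (online learning)."""
        self.patterns.append(np.asarray(speech_features, dtype=float))
        self.percepts.append(percept)

    def interpret(self, speech_features):
        """Return the percept whose stored speech pattern is closest (1-nearest-neighbor)."""
        if not self.patterns:
            return None
        query = np.asarray(speech_features, dtype=float)
        dists = [np.linalg.norm(query - p) for p in self.patterns]
        return self.percepts[int(np.argmin(dists))]

# New word-percept pairs can be added at any time, in any language.
grounder = SpeechGrounder()
grounder.observe([0.9, 0.1, 0.3], "ball")    # e.g., "ball" heard while the robot sees a ball
grounder.observe([0.2, 0.8, 0.5], "pelota")  # the same object named in another language
print(grounder.interpret([0.85, 0.15, 0.32]))  # -> "ball"
```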

Original language: English (US)
Title of host publication: Proceedings - IEEE International Conference on Multimedia and Expo
Publisher: IEEE Computer Society
Pages: 913-916
Number of pages: 4
ISBN (Electronic): 0769511988
DOIs
State: Published - 2001
Event: 2001 IEEE International Conference on Multimedia and Expo, ICME 2001 - Tokyo, Japan
Duration: Aug 22 2001 - Aug 25 2001

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Other

Other: 2001 IEEE International Conference on Multimedia and Expo, ICME 2001
Country/Territory: Japan
City: Tokyo
Period: 8/22/01 - 8/25/01

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
