Learning to gesture

Applying appropriate animations to spoken text

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a machine learning system that learns to choose human gestures to accompany novel text. The system is trained on scripts comprising speech and animations that were hand-coded by professional animators and shipped in video games. We treat this as a text-classification problem, classifying speech as corresponding to specific classes of gestures. We have built and tested two separate classifiers. The first is trained simply on the frequencies of different animations in the corpus. The second extracts text features from each script and maps these features to the gestures that accompany the script. We have experimented with a number of text features, including n-grams, the emotional valence of the text, and parts of speech. Using a naïve Bayes classifier, the system learns to associate these features with appropriate classes of gestures. Once trained, the system can be given novel text for which it will attempt to assign appropriate gestures. We examine the performance of the two classifiers using n-fold cross-validation over our training data, as well as two user studies in which participants subjectively evaluated the results. Although there are many possible applications of automated gesture assignment, we hope to apply this technique to a system that produces an automated news show.
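To make the classification setup concrete, the sketch below illustrates both classifiers under stated assumptions: it uses Python with scikit-learn, the utterances and gesture classes are invented placeholders rather than the paper's (non-public) corpus of game scripts, and the valence and part-of-speech features are omitted for brevity.

# A minimal sketch of the two classifiers described above, assuming
# scikit-learn. All data here is an invented placeholder.
from sklearn.dummy import DummyClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each utterance is paired with the class of gesture an animator might
# choose to accompany it (two examples per class so that the 2-fold
# cross-validation below is valid).
utterances = [
    "I can't believe you did that!",
    "That is absolutely outrageous!",
    "Welcome back, it's good to see you.",
    "Hello there, old friend.",
    "Look over there, behind the ridge.",
    "The exit is that way, past the gate.",
    "I'm not sure what happened next.",
    "Who knows? It could be anything.",
]
gestures = [
    "emphatic", "emphatic",
    "greeting", "greeting",
    "deictic", "deictic",
    "shrug", "shrug",
]

# First classifier: gestures drawn from the corpus frequency distribution
# alone, approximated here with a stratified dummy classifier.
baseline = DummyClassifier(strategy="stratified").fit(utterances, gestures)

# Second classifier: text features mapped to gesture classes with naive
# Bayes. Unigram and bigram counts stand in for the paper's n-gram
# features; it also used emotional valence and part-of-speech tags.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(utterances, gestures)

# Once trained, the system assigns a gesture class to novel text.
print(model.predict(["It's wonderful to see you again."]))

# n-fold cross-validation over the training data, as in the evaluation
# (n=2 only because this toy corpus is tiny).
print(cross_val_score(model, utterances, gestures, cv=2))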

Original language: English (US)
Title of host publication: Proceedings of the Fifteenth ACM International Conference on Multimedia, MM'07
Pages: 827-830
Number of pages: 4
ISBN (Print): 9781595937025
DOI: https://doi.org/10.1145/1291233.1291421
State: Published - Dec 1 2007
Event: 15th ACM International Conference on Multimedia, MM'07 - Augsburg, Bavaria, Germany
Duration: Sep 24 2007 - Sep 29 2007

Keywords

  • Animation
  • Gestures
  • Machine learning
  • Naïve Bayes

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Nichols, N., Liu, J., Pardo, B. A., Hammond, K. J., & Birnbaum, L. A. (2007). Learning to gesture: Applying appropriate animations to spoken text. In Proceedings of the Fifteenth ACM International Conference on Multimedia, MM'07 (pp. 827-830). https://doi.org/10.1145/1291233.1291421