Automatic facial expression recognition using facial animation parameters and multistream HMMs

P. S. Aleksic*, A. K. Katsaggelos

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

191 Scopus citations

Abstract

The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information utilizing multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system utilizes facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed employing single-stream HMMs under several different scenarios, utilizing outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is proposed for introducing facial-expression- and FAP-group-dependent stream reliability weights. The stream weights are determined based on the facial expression recognition results obtained when the FAP streams are utilized individually. The proposed multistream HMM facial expression system, which utilizes stream reliability weights, achieves a relative reduction of the facial expression recognition error of 44% compared to the single-stream HMM system.
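The weighted multistream combination described in the abstract can be sketched as follows. This is an illustrative outline only: the per-stream log-likelihoods, the two-stream/three-expression setup, and the accuracy-proportional weighting heuristic are assumptions for demonstration, not values or the exact weight-selection procedure from the paper.

```python
import numpy as np

def multistream_score(loglikes, weights):
    """Combine per-stream HMM log-likelihoods with stream reliability weights.

    loglikes: (n_streams, n_classes) array, log P(O_s | expression HMM c)
    weights:  (n_streams,) array of reliability weights
    Returns the weighted log-likelihood for each expression class.
    """
    loglikes = np.asarray(loglikes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights @ loglikes  # shape: (n_classes,)

def weights_from_accuracy(accs):
    """Heuristic: weights proportional to single-stream recognition accuracy.
    (An illustrative assumption; the paper sets weights from the recognition
    results obtained when each FAP stream is used individually.)"""
    accs = np.asarray(accs, dtype=float)
    return accs / accs.sum()

# Hypothetical numbers: two streams (outer-lip FAPs, eyebrow FAPs),
# three candidate expression classes.
ll = np.array([[-120.0, -140.0, -150.0],   # outer-lip stream log-likelihoods
               [-135.0, -125.0, -160.0]])  # eyebrow stream log-likelihoods
w = weights_from_accuracy([0.8, 0.6])      # lip stream rated more reliable
scores = multistream_score(ll, w)
best = int(np.argmax(scores))              # index of recognized expression
```

In this toy setup the more reliable lip stream dominates, so the combined score picks the expression favored by the lip-stream likelihoods even though the eyebrow stream disagrees.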

Original language: English (US)
Pages (from-to): 3-11
Number of pages: 9
Journal: IEEE Transactions on Information Forensics and Security
Volume: 1
Issue number: 1
DOIs
State: Published - Mar 2006

Keywords

  • Automatic facial expression recognition
  • Facial animation parameters
  • Hidden Markov models
  • MPEG-4 standards
  • Multistream HMM

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications
