Tracking articulated body by dynamic Markov network

Ying Wu*, Gang Hua, Ting Yu

*Corresponding author for this work

Research output: Contribution to conference › Paper

99 Scopus citations

Abstract

A new method for visual tracking of articulated objects is presented. Analyzing articulated motion is challenging because the increased dimensionality demands a potentially tremendous increase in computation. To ease this problem, we propose an approach that analyzes subparts locally while reinforcing the structural constraints at the same time. The computational model of the proposed approach is based on a dynamic Markov network, a generative model which characterizes the dynamics and the image observations of each individual subpart as well as the motion constraints among different subparts. Probabilistic variational analysis of the model reveals a mean field approximation to the posterior densities of each subpart given visual evidence, and provides a computationally efficient way to perform this difficult Bayesian inference. In addition, we design mean field Monte Carlo (MFMC) algorithms, in which a set of low-dimensional particle filters interact with each other and solve the high-dimensional problem collaboratively. Extensive experiments on tracking human body parts demonstrate the effectiveness, significance and computational efficiency of the proposed method.
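The interaction described in the abstract — low-dimensional particle filters that exchange mean field messages encoding the articulation constraints — can be illustrated with a small sketch. This is a hypothetical 1-D toy, not the paper's implementation: two subparts, each tracked by its own particle filter, are coupled by a pairwise potential that prefers a fixed offset between them; each filter reweights its particles by its local likelihood times the expectation of that potential under the neighbor's particle set (the mean field message). The observation model, potential, and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200            # particles per subpart
OFFSET = 1.0       # preferred distance between the two subparts (assumed)
obs = [0.0, 1.0]   # noisy 1-D observations of each subpart (assumed given)

# Each subpart keeps its own low-dimensional particle set
particles = [rng.normal(o, 0.5, N) for o in obs]

def likelihood(x, z, sigma=0.2):
    # Local image-evidence stand-in: Gaussian around the observation
    return np.exp(-0.5 * ((x - z) / sigma) ** 2)

def constraint(x, y, sigma=0.3):
    # Pairwise articulation potential: favors |x - y| close to OFFSET
    return np.exp(-0.5 * ((np.abs(x - y) - OFFSET) / sigma) ** 2)

for _ in range(10):  # mean field iterations
    new = []
    for i in (0, 1):
        j = 1 - i
        x = particles[i]
        # Mean field message: expectation of the constraint potential
        # under the neighbor's current particle approximation
        msg = constraint(x[:, None], particles[j][None, :]).mean(axis=1)
        w = likelihood(x, obs[i]) * msg
        w /= w.sum()
        idx = rng.choice(N, N, p=w)                   # resample
        new.append(x[idx] + rng.normal(0, 0.05, N))   # small diffusion
    particles = new

est = [p.mean() for p in particles]
print(est)
```

Each filter stays one-dimensional throughout; the coupling enters only through the averaged potential, which is what lets the collection approximate inference over the joint articulated state without sampling it directly.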

Original language: English (US)
Pages: 1094-1101
Number of pages: 8
State: Published - Dec 2 2003
Event: NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION - Nice, France
Duration: Oct 13 2003 - Oct 16 2003

Other

Event: NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION
Country: France
City: Nice
Period: 10/13/03 - 10/16/03

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition


Cite this

    Wu, Y., Hua, G., & Yu, T. (2003). Tracking articulated body by dynamic Markov network. 1094-1101. Paper presented at NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, Nice, France.