Model-based synthetic view generation from a monocular video sequence

Chun Jen Tsai*, Peter Eisert, Bernd Girod, Aggelos K. Katsaggelos

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, a model-based multi-view image generation system for video conferencing is presented. The system assumes that a 3-D model of the person in front of the camera is available. It extracts texture from the monocular image sequence of the speaking person and maps it onto the static 3-D model during the video conferencing session. Since only incrementally updated texture information is transmitted during the session, the bandwidth requirement is very small. Based on the experimental results, one can conclude that the proposed system is very promising for practical applications.
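The incremental texture-update idea in the abstract can be illustrated with a short sketch. The Python code below is not from the paper: the block size, the change-detection threshold, and the precomputed `uv_to_pixel` lookup (mapping each texel of the model's texture map to a frame pixel under the current head pose) are all assumptions introduced for illustration. It shows the core loop: sample the current frame into the model's texture space, detect which texture blocks changed, and transmit only those blocks.

```python
import numpy as np

# Illustrative parameters, not taken from the paper.
BLOCK = 16          # texel block size used for change detection
THRESHOLD = 10.0    # mean absolute difference that triggers a block update

def extract_texture(frame, uv_to_pixel):
    """Sample the camera frame at the pixel each texel projects to.

    `uv_to_pixel` is a hypothetical (H, W, 2) integer array giving, for every
    texel of the texture map, its (x, y) location in the frame under the
    current head pose; computing it from the 3-D model is outside this sketch.
    """
    xs, ys = uv_to_pixel[..., 0], uv_to_pixel[..., 1]
    return frame[ys, xs]

def changed_blocks(prev_tex, new_tex):
    """Yield (row, col, block) for texture blocks that changed enough to resend."""
    h, w = prev_tex.shape[:2]
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            old = prev_tex[r:r + BLOCK, c:c + BLOCK].astype(np.float32)
            new = new_tex[r:r + BLOCK, c:c + BLOCK].astype(np.float32)
            if np.abs(old - new).mean() > THRESHOLD:
                yield r, c, new_tex[r:r + BLOCK, c:c + BLOCK].copy()

def apply_update(texture, updates):
    """Receiver side: patch only the transmitted blocks into the stored texture."""
    for r, c, block in updates:
        texture[r:r + block.shape[0], c:c + block.shape[1]] = block
    return texture

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
    # Toy mapping: the texture is simply a 128x128 crop of the frame.
    xs, ys = np.meshgrid(np.arange(128), np.arange(128))
    uv_map = np.stack([xs, ys], axis=-1)
    sender_tex = extract_texture(frame, uv_map)
    receiver_tex = sender_tex.copy()

    frame[:64, :64] = 255  # simulate an appearance change in part of the face
    updates = list(changed_blocks(sender_tex, extract_texture(frame, uv_map)))
    receiver_tex = apply_update(receiver_tex, updates)
    print(f"{len(updates)} of {(128 // BLOCK) ** 2} blocks retransmitted")
```

Because the receiver patches the same sparse block updates into its locally stored texture, both ends keep an identical texture map on the static 3-D model while only the changed blocks cross the network, which is what keeps the bandwidth requirement small.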

Original language: English (US)
Title of host publication: IEEE International Conference on Image Processing
Publisher: IEEE Comp Soc
Pages: 444-447
Number of pages: 4
Volume: 1
State: Published - Dec 1 1997
Event: Proceedings of the 1997 International Conference on Image Processing. Part 2 (of 3) - Santa Barbara, CA, USA
Duration: Oct 26 1997 - Oct 29 1997

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Electrical and Electronic Engineering
