Robust network-adaptive object-based video encoding

Haohong Wang*, Aggelos K Katsaggelos

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

6 Scopus citations

Abstract

Joint source-network encoding of object-based video is an important and challenging research topic that has not been adequately explored. In this paper, we propose a robust network-adaptive encoding approach for object-based video. The framework jointly considers source coding, packet loss during transmission, and error concealment at the decoder. The proposed method guarantees the minimum expected distortion for the decoded video by optimally allocating the shape and texture coding parameters at the encoder. The resulting optimization problem is solved by Lagrangian relaxation and dynamic programming. Experimental results demonstrate that the proposed method achieves significant gains over the non-network-adaptive method.
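
The abstract describes allocating coding parameters to minimize expected distortion under a rate constraint via Lagrangian relaxation. The following is a minimal, generic sketch of that optimization pattern, not the paper's exact formulation: the per-object candidate lists, the `lagrangian_allocate` helper, and the rate budget are illustrative assumptions, and the expected-distortion values would in practice already account for packet loss and error concealment.

```python
# Generic Lagrangian-relaxation sketch for rate-constrained parameter allocation.
# Assumption: each object has a list of candidate (expected_distortion, rate) options,
# e.g. different shape/texture coding parameter choices.
from typing import List, Tuple

def lagrangian_allocate(options_per_object: List[List[Tuple[float, float]]],
                        rate_budget: float,
                        iters: int = 50):
    """Pick one option per object so total rate stays within rate_budget,
    using bisection on the Lagrange multiplier lambda."""
    def solve(lam):
        picks, total_d, total_r = [], 0.0, 0.0
        for options in options_per_object:
            # Each object independently minimizes D + lambda * R.
            idx = min(range(len(options)),
                      key=lambda i: options[i][0] + lam * options[i][1])
            picks.append(idx)
            total_d += options[idx][0]
            total_r += options[idx][1]
        return picks, total_d, total_r

    lo, hi = 0.0, 1e6          # bracket for the multiplier
    best = solve(hi)           # large lambda favors low rate (likely feasible)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        picks, d, r = solve(lam)
        if r <= rate_budget:
            best, hi = (picks, d, r), lam   # feasible: try a smaller rate penalty
        else:
            lo = lam                        # infeasible: penalize rate more
    return best

if __name__ == "__main__":
    # Two objects, each with (expected_distortion, rate) for coarse/medium/fine coding.
    options = [[(40.0, 10.0), (25.0, 20.0), (15.0, 35.0)],
               [(30.0, 8.0),  (18.0, 18.0), (10.0, 30.0)]]
    picks, dist, rate = lagrangian_allocate(options, rate_budget=45.0)
    print(picks, dist, rate)
```

In the paper's setting the inner minimization is reportedly handled with dynamic programming over dependent parameter choices; the independent per-object minimum above is a simplification for illustration.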

Original language: English (US)
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 3
State: Published - Sep 28 2004
Event: Proceedings - IEEE International Conference on Acoustics, Speech, and Signal Processing - Montreal, Que., Canada
Duration: May 17 2004 - May 21 2004

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
