GAN-Based Video Super-Resolution with Direct Regularized Inversion of the Low-Resolution Formation Model

Santiago Lopez-Tapia, Alice Lucas, Rafael Molina, Aggelos K. Katsaggelos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

While high and ultra-high definition displays are becoming popular, most of the available content has been acquired at much lower resolutions. In this work we propose to pseudo-invert, with regularization, the image formation model using GANs and perceptual losses. Our model, which does not require motion compensation, explicitly utilizes the low-resolution image formation model and additionally introduces two feature losses that are used to obtain perceptually improved high-resolution images. The experimental validation shows that our approach outperforms current learning-based video super-resolution models.
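The abstract is terse about what regularized pseudo-inversion of the formation model means in practice. Below is a minimal PyTorch sketch of the idea: re-degrade the super-resolved estimate through an assumed blur-plus-decimation formation model and penalize the mismatch with the observed low-resolution frame, together with a regularizer. Everything here is an illustrative assumption, not the paper's implementation: the Gaussian kernel, the scale factor of 4, the bicubic stand-in for the generator, and the total-variation placeholder, which in the paper is replaced by GAN and perceptual feature losses.

import torch
import torch.nn.functional as F

def lr_formation(hr, blur_kernel, scale=4):
    """Assumed low-resolution formation model: blur the HR frame with a
    known 2D kernel, then decimate by the scale factor (y = DHx)."""
    c = hr.shape[1]
    # Depthwise convolution applies the same blur kernel to every channel.
    k = blur_kernel.expand(c, 1, *blur_kernel.shape)
    blurred = F.conv2d(hr, k, padding=blur_kernel.shape[-1] // 2, groups=c)
    return blurred[..., ::scale, ::scale]

def regularized_inversion_loss(sr, lr, blur_kernel, scale=4, lam=1e-2):
    """Data fidelity (re-degrade the HR estimate and compare it with the
    observed LR frame) plus a placeholder total-variation regularizer;
    the paper instead couples fidelity with GAN and feature losses."""
    fidelity = F.mse_loss(lr_formation(sr, blur_kernel, scale), lr)
    tv = (sr[..., 1:, :] - sr[..., :-1, :]).abs().mean() \
       + (sr[..., :, 1:] - sr[..., :, :-1]).abs().mean()
    return fidelity + lam * tv

# Toy usage with a 9x9 Gaussian blur and a bicubic "generator" stand-in.
ks, sigma = 9, 1.6
ax = torch.arange(ks, dtype=torch.float32) - ks // 2
g = torch.exp(-ax**2 / (2 * sigma**2))
kernel = torch.outer(g, g)
kernel = kernel / kernel.sum()

lr = torch.rand(1, 3, 32, 32)                            # observed LR frame
sr = F.interpolate(lr, scale_factor=4, mode="bicubic")   # HR estimate
loss = regularized_inversion_loss(sr, lr, kernel, scale=4)

In this reading, the generator network plays the role of a learned, regularized pseudo-inverse of the formation operator: the fidelity term ties its output to the observation, while the (here, placeholder) regularizer selects among the many HR frames consistent with the same LR input.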

Original language: English (US)
Title of host publication: 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings
Publisher: IEEE Computer Society
Pages: 2886-2890
Number of pages: 5
ISBN (Electronic): 9781538662496
DOIs
State: Published - Sep 2019
Event: 26th IEEE International Conference on Image Processing, ICIP 2019 - Taipei, Taiwan, Province of China
Duration: Sep 22, 2019 – Sep 25, 2019

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2019-September
ISSN (Print): 1522-4880

Conference

Conference: 26th IEEE International Conference on Image Processing, ICIP 2019
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 9/22/19 – 9/25/19

Keywords

  • Convolutional Neural Networks
  • Generative Adversarial Networks
  • Perceptual Loss Functions
  • Super-resolution
  • Video

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
