Generative adversarial networks and perceptual losses for video super-resolution

Alice Lucas, Santiago Lopez-Tapia, Rafael Molina, Aggelos K. Katsaggelos

Research output: Contribution to journal › Article › peer-review

Abstract

Video super-resolution (VSR) has become one of the most critical problems in video processing. In the deep learning literature, recent works have shown the benefits of using adversarial and perceptual losses to improve performance on various image restoration tasks; however, these losses have yet to be applied to video super-resolution. In this work, we propose a Generative Adversarial Network (GAN)-based formulation for VSR. We introduce a new generator network optimized for the VSR problem, named VSRResNet, along with a new discriminator architecture to properly guide VSRResNet during GAN training. We further enhance our VSR GAN formulation with two regularizers, a distance loss in feature-space and in pixel-space, to obtain our final VSRResFeatGAN model. We show that pre-training our generator with only the Mean-Squared-Error loss already surpasses the current state-of-the-art VSR models quantitatively. We then employ the PercepDist metric ([2]) to compare state-of-the-art VSR models, and show that it evaluates the perceptual quality of SR solutions obtained from neural networks more accurately than the commonly used PSNR/SSIM metrics. Finally, we show that our proposed model, the VSRResFeatGAN model, outperforms current state-of-the-art SR models, both quantitatively and qualitatively.
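The abstract describes a generator objective that combines an adversarial term with two regularizers: a distance in feature-space and a distance in pixel-space. The following is a minimal NumPy sketch of how such a combined loss could be assembled; the function name, the specific weights, and the use of a non-saturating log-sigmoid adversarial term are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def combined_generator_loss(sr, hr, feat_sr, feat_hr, d_logits_sr,
                            w_pixel=1.0, w_feat=1.0, w_adv=1e-3):
    """Sketch of a GAN generator loss with pixel- and feature-space regularizers.

    sr, hr           : super-resolved and ground-truth frames (same shape)
    feat_sr, feat_hr : feature maps of sr and hr (e.g. from a pretrained CNN;
                       the choice of feature extractor is an assumption here)
    d_logits_sr      : discriminator logits for the super-resolved frames
    """
    # Pixel-space distance: mean squared error between frames.
    pixel_loss = np.mean((sr - hr) ** 2)
    # Feature-space distance: mean squared error between feature maps.
    feature_loss = np.mean((feat_sr - feat_hr) ** 2)
    # Non-saturating adversarial term: -log(sigmoid(D(G(x)))),
    # written stably as log(1 + exp(-logits)).
    adv_loss = np.mean(np.log1p(np.exp(-d_logits_sr)))
    return w_pixel * pixel_loss + w_feat * feature_loss + w_adv * adv_loss
```

With a perfect reconstruction (sr equal to hr, matching features) and a confident discriminator, the loss approaches zero; the relative weights control the trade-off between pixel fidelity and perceptual quality.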

Original language: English (US)
Journal: Unknown Journal
State: Published - Jun 14 2018

ASJC Scopus subject areas

  • General

