TY - GEN
T1 - A composite discriminator for generative adversarial network based video super-resolution
AU - Wang, Xijun
AU - Lucas, Alice
AU - Lopez-Tapia, Santiago
AU - Wu, Xinyi
AU - Molina, Rafael
AU - Katsaggelos, Aggelos K.
N1 - Funding Information:
This work was supported in part by the Sony 2016 Research Award Program Research Project. The work of SLT and RM was supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869-C2-2-R and the Visiting Scholar program at the University of Granada. SLT received financial support through the Spanish FPU program.
Publisher Copyright:
© 2019, IEEE
PY - 2019/9
Y1 - 2019/9
N2 - Generative Adversarial Networks (GANs) have been used for solving the video super-resolution problem. So far, GAN-based video super-resolution methods use the traditional GAN framework, which consists of a single generator and a single discriminator that are trained against each other. In this work we propose a new framework which incorporates two collaborative discriminators whose aim is to jointly improve the quality of the reconstructed video sequence. While one discriminator concentrates on general properties of the images, the second one specializes in obtaining realistically reconstructed features, such as edges. Experimental results demonstrate that the learned model outperforms current state-of-the-art models and obtains super-resolved frames with fine details, sharp edges, and fewer artifacts.
AB - Generative Adversarial Networks (GANs) have been used for solving the video super-resolution problem. So far, GAN-based video super-resolution methods use the traditional GAN framework, which consists of a single generator and a single discriminator that are trained against each other. In this work we propose a new framework which incorporates two collaborative discriminators whose aim is to jointly improve the quality of the reconstructed video sequence. While one discriminator concentrates on general properties of the images, the second one specializes in obtaining realistically reconstructed features, such as edges. Experimental results demonstrate that the learned model outperforms current state-of-the-art models and obtains super-resolved frames with fine details, sharp edges, and fewer artifacts.
KW - Generative Adversarial Networks
KW - Spatially Adaptive
KW - The Composite Discriminator
KW - Video Super-Resolution
UR - http://www.scopus.com/inward/record.url?scp=85075610294&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075610294&partnerID=8YFLogxK
U2 - 10.23919/EUSIPCO.2019.8903073
DO - 10.23919/EUSIPCO.2019.8903073
M3 - Conference contribution
AN - SCOPUS:85075610294
T3 - European Signal Processing Conference
BT - EUSIPCO 2019 - 27th European Signal Processing Conference
PB - European Signal Processing Conference, EUSIPCO
T2 - 27th European Signal Processing Conference, EUSIPCO 2019
Y2 - 2 September 2019 through 6 September 2019
ER -