TY - GEN
T1 - Gated recurrent networks for video super resolution
AU - López-Tapia, Santiago
AU - Lucas, Alice
AU - Molina, Rafael
AU - Katsaggelos, Aggelos K.
N1 - Funding Information:
This work was supported in part by the Sony 2016 Research Award Program Research Project. The work of SLT and RM was supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869-C2-2-R and the Visiting Scholar program at the University of Granada. SLT received financial support through the Spanish FPU program.
Publisher Copyright:
© 2021 European Signal Processing Conference, EUSIPCO. All rights reserved.
PY - 2021/1/24
Y1 - 2021/1/24
N2 - Despite the success of Recurrent Neural Networks in tasks involving temporal video processing, few works in Video Super-Resolution (VSR) have employed them. In this work, we propose a new Gated Recurrent Convolutional Neural Network for VSR that adapts some of the key components of a Gated Recurrent Unit. Our model employs a deformable attention module to align the features calculated at the previous time step with those in the current step, and then uses a gated operation to combine them. This allows our model to effectively reuse previously calculated features and exploit longer temporal relationships between frames without the need for explicit motion compensation. Experimental validation shows that our approach outperforms current learning-based VSR models in terms of perceptual quality and temporal consistency.
AB - Despite the success of Recurrent Neural Networks in tasks involving temporal video processing, few works in Video Super-Resolution (VSR) have employed them. In this work, we propose a new Gated Recurrent Convolutional Neural Network for VSR that adapts some of the key components of a Gated Recurrent Unit. Our model employs a deformable attention module to align the features calculated at the previous time step with those in the current step, and then uses a gated operation to combine them. This allows our model to effectively reuse previously calculated features and exploit longer temporal relationships between frames without the need for explicit motion compensation. Experimental validation shows that our approach outperforms current learning-based VSR models in terms of perceptual quality and temporal consistency.
KW - Convolutional Neural Networks
KW - Recurrent Neural Networks
KW - Super-resolution
KW - Video
UR - http://www.scopus.com/inward/record.url?scp=85099314643&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099314643&partnerID=8YFLogxK
U2 - 10.23919/Eusipco47968.2020.9287713
DO - 10.23919/Eusipco47968.2020.9287713
M3 - Conference contribution
AN - SCOPUS:85099314643
T3 - European Signal Processing Conference
SP - 700
EP - 704
BT - 28th European Signal Processing Conference, EUSIPCO 2020 - Proceedings
PB - European Signal Processing Conference, EUSIPCO
T2 - 28th European Signal Processing Conference, EUSIPCO 2020
Y2 - 24 August 2020 through 28 August 2020
ER -