TY - GEN
T1 - Event-driven video frame synthesis
AU - Wang, Zihao W.
AU - Jiang, Weixin
AU - He, Kuan
AU - Shi, Boxin
AU - Katsaggelos, Aggelos
AU - Cossairt, Oliver
N1 - Funding Information:
This work was supported in part by a DARPA Contract No. HR0011-17-2-0044.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Temporal Video Frame Synthesis (TVFS) aims at synthesizing novel frames at timestamps different from those of existing frames, which has wide applications in video coding, editing, and analysis. In this paper, we propose a high frame-rate TVFS framework which takes hybrid input data from a low-speed frame-based sensor and a high-speed event-based sensor. Compared to frame-based sensors, event-based sensors report brightness changes at very high speed, which may well provide useful spatio-temporal information for high frame-rate TVFS. Therefore, we first introduce a differentiable fusion model to approximate the dual-modal physical sensing process, unifying a variety of TVFS scenarios, e.g., interpolation, prediction, and motion deblur. Our differentiable model enables iterative optimization of the latent video tensor via autodifferentiation, which propagates the gradients of a loss function defined on the measured data. Our differentiable model-based reconstruction does not involve training, yet is parallelizable and can be implemented on machine learning platforms (such as TensorFlow). Second, we develop a deep learning strategy to enhance the results from the first step, which we refer to as a residual 'denoising' process. Our trained 'denoiser' goes beyond Gaussian denoising and exhibits properties such as contrast enhancement and motion awareness. We show that our framework is capable of handling challenging scenes including both fast motion and strong occlusions.
KW - Event-based vision
KW - Motion deblur
KW - Multi-modal sensor fusion
KW - Video frame interpolation
KW - Video frame prediction
UR - http://www.scopus.com/inward/record.url?scp=85082496976&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082496976&partnerID=8YFLogxK
U2 - 10.1109/ICCVW.2019.00532
DO - 10.1109/ICCVW.2019.00532
M3 - Conference contribution
AN - SCOPUS:85082496976
T3 - Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
SP - 4320
EP - 4329
BT - Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
Y2 - 27 October 2019 through 28 October 2019
ER -