TY - JOUR
T1 - Joint filtering of intensity images and neuromorphic events for high-resolution noise-robust imaging
AU - Wang, Zihao W.
AU - Duan, Peiqi
AU - Cossairt, Oliver
AU - Katsaggelos, Aggelos
AU - Huang, Tiejun
AU - Shi, Boxin
N1 - Funding Information:
This work is in part supported by National Natural Science Foundation of China under Grant No. 61872012, National Key R&D Program of China (2019YFF0302902), Beijing Academy of Artificial Intelligence (BAAI), DARPA Contract No. HR0011-17-2-0044, and NSF CAREER IIS-1453192.
Publisher Copyright:
© 2020 IEEE
PY - 2020
Y1 - 2020
N2 - We present a novel computational imaging system with high resolution and low noise. Our system consists of a traditional video camera, which captures high-resolution intensity images, and an event camera, which encodes high-speed motion as a stream of asynchronous binary events. To process the hybrid input, we propose a unifying framework that first bridges the two sensing modalities via a noise-robust motion compensation model and then performs joint image filtering. The filtered output represents the temporal gradient of the captured space-time volume and can be viewed as motion-compensated event frames with high resolution and low noise. The output can therefore be applied to many existing event-based algorithms that depend heavily on spatial resolution and noise robustness. In experiments on both publicly available datasets and our new RGB-DAVIS dataset, we show systematic performance improvements in applications such as high-frame-rate video synthesis, feature/corner detection and tracking, and high dynamic range image reconstruction.
AB - We present a novel computational imaging system with high resolution and low noise. Our system consists of a traditional video camera, which captures high-resolution intensity images, and an event camera, which encodes high-speed motion as a stream of asynchronous binary events. To process the hybrid input, we propose a unifying framework that first bridges the two sensing modalities via a noise-robust motion compensation model and then performs joint image filtering. The filtered output represents the temporal gradient of the captured space-time volume and can be viewed as motion-compensated event frames with high resolution and low noise. The output can therefore be applied to many existing event-based algorithms that depend heavily on spatial resolution and noise robustness. In experiments on both publicly available datasets and our new RGB-DAVIS dataset, we show systematic performance improvements in applications such as high-frame-rate video synthesis, feature/corner detection and tracking, and high dynamic range image reconstruction.
UR - http://www.scopus.com/inward/record.url?scp=85094314368&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094314368&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.00168
DO - 10.1109/CVPR42600.2020.00168
M3 - Conference contribution
AN - SCOPUS:85094314368
SN - 1063-6919
SP - 1606
EP - 1616
JO - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
M1 - 9156457
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Y2 - 14 June 2020 through 19 June 2020
ER -