Deep fully-connected networks for video compressive sensing

Michael Iliadis*, Leonidas Spinoulas, Aggelos K. Katsaggelos

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

71 Scopus citations


In this work we present a deep learning framework for video compressive sensing. The proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches. Our investigation starts by learning a linear mapping between video sequences and corresponding measured frames which turns out to provide promising results. We then extend the linear formulation to deep fully-connected networks and explore the performance gains using deeper architectures. Our analysis is always driven by the applicability of the proposed framework on existing compressive video architectures. Extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively. Finally, our analysis offers insights into understanding how dataset sizes and number of layers affect reconstruction performance while raising a few points for future investigation.
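As a hypothetical illustration of the abstract's starting point (not the authors' released code), the sketch below learns a linear mapping from compressive measurements back to vectorized video patches via least squares. The sensing matrix `Phi`, the synthetic low-dimensional patch model, and all dimensions are assumptions chosen so that a purely linear decoder can succeed; the paper extends this idea to deep fully-connected networks.

```python
import numpy as np

# Hypothetical sketch: learn a linear reconstruction W so that X ≈ Y @ W,
# where Y = X @ Phi.T are compressive measurements of video patches X.
rng = np.random.default_rng(0)

n = 64           # dimension of a vectorized video patch (assumed)
m = 16           # number of compressive measurements, m << n (assumed)
num_train = 2000

# Random Gaussian sensing matrix: y = Phi @ x
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Synthetic training patches drawn from an 8-dim subspace, so a linear
# decoder can in principle recover them from m = 16 measurements.
basis = rng.standard_normal((n, 8))
X = rng.standard_normal((num_train, 8)) @ basis.T   # (num_train, n)
Y = X @ Phi.T                                       # (num_train, m)

# Learn the linear mapping by least squares: minimize ||Y @ W - X||_F
W, *_ = np.linalg.lstsq(Y, X, rcond=None)           # W has shape (m, n)

# Reconstruct held-out patches and measure relative error
X_test = rng.standard_normal((100, 8)) @ basis.T
Y_test = X_test @ Phi.T
X_hat = Y_test @ W
rel_err = np.linalg.norm(X_hat - X_test) / np.linalg.norm(X_test)
```

Because the synthetic patches lie in a subspace of dimension 8 < m, the learned linear map recovers them almost exactly; on real video data this linear stage is only a baseline, which is why the paper replaces it with deeper fully-connected architectures.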

Original language: English (US)
Pages (from-to): 9-18
Number of pages: 10
Journal: Digital Signal Processing: A Review Journal
State: Published - Jan 2018


Keywords

  • Deep neural networks
  • Fully-connected networks
  • Video compressive sensing

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Statistics, Probability and Uncertainty
  • Computational Theory and Mathematics
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Applied Mathematics


