Flexible Asynchronous Space-Time (FAST) Imaging

Project: Research project

Project Details


In this proposal, we focus on the problem of optimal information extraction in wide-area surveillance imaging applications using high-resolution sensors. The goal is to analyze motion (e.g., detect/track moving objects) within the scene over the entire field of view (FoV). The primary challenge is that the data bandwidth of the readout integrated circuit (ROIC) limits the maximum number of bits/sec that can be delivered from the sensor to the host device connected to the focal plane array (FPA).

Today’s commercial focal plane arrays (FPAs) offer a variety of controls over the spatio-temporal sampling properties of the sensor.

We propose a methodology based on adaptive learning for guiding a sensor, through real-time adaptation of its control parameters, to collect the data with the highest content of useful information. The result is an autonomous system, as opposed to a user-defined one, that configures the sensor to collect the most relevant data based on scene content, thus enabling complex measurements with high-resolution imagery. The methodology follows a computational imaging/compressed sensing approach: we treat the FPA and ROIC themselves as a means of information encoding. The key novelty of our approach is that we jointly determine an optimal space-time sampling pattern and optimally reconstruct the space-time volume (e.g., a video reconstruction) from the measurements acquired under that pattern. Here we propose to generalize the optimization of space-time sampling patterns to include not just deterministic models but predictive ones as well (i.e., sampling patterns produced on-the-fly based on previous measurements).
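The idea of predictive, on-the-fly sampling patterns can be illustrated with a toy sketch: spend the next frame's pixel-sampling budget on the locations that changed most between the two most recent frames. This is only an illustrative stand-in for the proposal's learned predictive models; the function name, shapes, and the change-ranking heuristic are all assumptions, not the FAST design.

```python
import numpy as np

def predictive_mask(prev_frame, curr_frame, budget):
    """Toy predictive sampling: rank pixels by recent temporal change and
    spend the next frame's sampling budget on the most-changed locations.
    Illustrative only; not the proposal's actual predictive model."""
    change = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    order = np.argsort(change, axis=None)[::-1]   # most-changed pixels first
    mask = np.zeros(curr_frame.shape, dtype=bool)
    mask.flat[order[:budget]] = True              # allocate the pixel budget
    return mask

# Usage: a static background with one 10x10 "moving" region.
rng = np.random.default_rng(0)
f0 = rng.normal(size=(64, 64))
f1 = f0.copy()
f1[10:20, 10:20] += 5.0                           # simulated motion
m = predictive_mask(f0, f1, budget=100)
```

With exactly 100 changed pixels and a budget of 100, the mask concentrates all samples inside the moving region, which is the qualitative behavior a content-adaptive sampler aims for.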
We propose the Flexible Asynchronous Space-Time (FAST) image sensing architecture. The FAST architecture will allow dynamic, reconfigurable, and content-adaptive sensing of spatio-temporal information with optimal bandwidth utilization. We pose the optimization as a resource allocation problem: given a constraint on the desired/allowable data bandwidth, we estimate the best possible tessellation of the 3D spatio-temporal volume. The success metric for the proposed approach is the quality of the reconstructed video at a given data bandwidth. Phases 1 and 2 will focus primarily on the Capability Advancement criteria. At the end of Phase 1, we will demonstrate streaming operation of the algorithm at a 1000x slowdown relative to real time; by the end of Phase 2, this slowdown factor will be reduced to 10x. Phase 3 will focus on the implementability of the approach. At the end of Phase 3, we will demonstrate operation with a real-time camera model that responds to feedback from our algorithms and delivers the full Capability Advancement demonstrated in Phases 1-2.
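The resource-allocation view can be sketched minimally: given per-tile activity scores, split a fixed readout budget (bits/sec) across spatial tiles in proportion to activity, so busier tiles get finer sampling. This proportional rule and all names here are assumptions for illustration, not the FAST optimizer, which tessellates the full 3D spatio-temporal volume.

```python
import numpy as np

def allocate_rates(activity, total_bps):
    """Toy bandwidth allocation: divide a fixed data-rate budget across
    spatial tiles in proportion to their activity scores. Illustrative
    only; the proposal optimizes a full 3D space-time tessellation."""
    a = np.asarray(activity, dtype=float)
    weights = a / a.sum()                 # normalized activity shares
    return weights * total_bps            # per-tile bits/sec

# Usage: three tiles with activity 1:3:6 sharing a 1000 bits/sec budget.
rates = allocate_rates([1.0, 3.0, 6.0], total_bps=1000.0)
# The allocation sums to the budget; the busiest tile gets 600 bits/sec.
```

A real system would add a minimum per-tile rate (so quiet regions are still revisited) and re-run the allocation as activity estimates are updated from new measurements.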
Effective start/end date: 4/25/17 to 9/24/20


  • Defense Advanced Research Projects Agency (DARPA) (HR0011-17-2-0044)

