Event-based Synthetic Aperture Imaging with a Hybrid Network

@inproceedings{Zhang2021EventbasedSA,
  title={Event-based Synthetic Aperture Imaging with a Hybrid Network},
  author={Xiang Zhang and Wei Liao and Lei Yu and Wen Yang and Guisong Xia},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={14230-14239}
}
  • Published 3 March 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Synthetic aperture imaging (SAI) achieves a see-through effect by blurring out off-focus foreground occlusions and reconstructing the in-focus occluded targets from multi-view images. However, very dense occlusions and extreme lighting conditions can significantly disturb SAI based on conventional frame-based cameras, leading to performance degradation. To address these problems, we propose a novel SAI system based on the event camera, which can produce… 
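The refocusing idea described in the abstract can be illustrated by the classical shift-and-average formulation of synthetic aperture imaging: aligning multi-view images on the target plane keeps the occluded target sharp while foreground occluders are spread out. The sketch below is a minimal illustration of that principle (not the paper's hybrid network); the function name and the assumption of known integer per-view shifts are hypothetical.

```python
import numpy as np

def synthetic_aperture_refocus(views, shifts):
    """Shift-and-average refocusing of a multi-view image stack.

    views:  list of H x W grayscale images from different viewpoints.
    shifts: list of (dy, dx) integer pixel shifts that align the
            occluded target plane across views (determined by the
            camera baseline and the target depth).

    Averaging the aligned views keeps the in-focus target sharp,
    while off-focus foreground occluders are spread out (blurred).
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dy, dx) in zip(views, shifts):
        acc += np.roll(img.astype(np.float64), shift=(dy, dx), axis=(0, 1))
    return acc / len(views)
```

Each occluder pixel survives in only a few of the aligned views, so its contribution to the average is attenuated by roughly the number of views, which is why denser camera arrays (or, in this paper, an event camera moving along the aperture) see through better.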


Learning to See Through with Events

This paper presents an Event-based SAI (E-SAI) method that relies on asynchronous events, with extremely low latency and high dynamic range, acquired by an event camera to produce high-quality images from pure events.
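Producing images "from pure events," as the blurb above describes, starts from some frame-like representation of the asynchronous stream. A common baseline representation is polarity accumulation into a 2D grid; the sketch below shows that standard preprocessing step only (not the E-SAI network itself), and the function name and event-tuple layout are assumptions.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate an asynchronous event stream into a 2D frame.

    events: iterable of (x, y, t, p) tuples, where p is +1 for a
            brightness increase and -1 for a decrease. Timestamps t
            are ignored here; real pipelines typically bin events
            into short time slices instead of one global frame.
    """
    frame = np.zeros((height, width), dtype=np.float64)
    for x, y, t, p in events:
        frame[y, x] += p
    return frame
```

Such accumulated (or time-binned) event frames are what learned reconstruction networks usually take as input.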

Synthetic Aperture Imaging with Events and Frames

This paper leverages the merits of both events and frames, leading to a fusion-based SAI that performs consistently under different densities of occlusions and achieves performance superior to state-of-the-art SAI methods.

Real-Time Hetero-Stereo Matching for Event and Frame Camera With Aligned Events Using Maximum Shift Distance

This work proposes an accurate, intuitive, and efficient way to align events with 6-DOF camera motion via the suggested maximum shift distance method; it can estimate the poses of an event camera and the depth of events within a few frames, which speeds up the initialization of the event camera system.

SCSE-E2VID: Improved event-based video reconstruction with an event camera

This paper proposes an end-to-end U-Net-style network, SCSE-E2VID, to synthesize grayscale images from asynchronous events, and designs an event fusion block that feeds more related events to the encoder, allowing the network to extract more valuable features.

Boosting Event Stream Super-Resolution with a Recurrent Neural Network

A recurrent neural network for event SR without frames is proposed, which builds a temporal propagation net to incorporate neighboring and long-range event-aware contexts that facilitate event SR, and a spatiotemporal fusion net to reliably aggregate the spatiotemporal clues of the event stream.

Image De-occlusion via Event-enhanced Multi-modal Fusion Hybrid Network

This paper proposes an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information, achieving state-of-the-art performance.

AEGNN: Asynchronous Event-based Graph Neural Networks

This work introduces Asynchronous, Event-based Graph Neural Networks (AEGNNs), a novel event-processing paradigm that generalizes standard GNNs to process events as “evolving” spatio-temporal graphs, thereby significantly reducing both computation and latency for event-by-event processing.

Are High-Resolution Event Cameras Really Needed?

It is reported that, in low-illumination conditions and at high speeds, low-resolution cameras can outperform high-resolution ones, while requiring a significantly lower bandwidth.

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

This paper proposes a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN) that utilizes Leaky Integrate-and-Fire (LIF) and Membrane Potential (MP) neurons, finding that spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
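The LIF neuron mentioned above has simple discrete-time dynamics: the membrane potential leaks toward zero, integrates input, and emits a spike with a reset when it crosses a threshold. The sketch below is the textbook update rule, not EVSNN's exact formulation; the function name and the default leak factor are illustrative assumptions.

```python
def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete-time step of a Leaky Integrate-and-Fire neuron.

    v: membrane potential carried over from the previous step.
    beta: leak factor in (0, 1); the potential decays toward zero.
    Returns (new_v, spike): the neuron fires when the potential
    reaches the threshold, and is then reset to zero.
    """
    v = beta * v + input_current   # leaky integration
    if v >= threshold:
        return 0.0, 1              # fire and reset
    return v, 0
```

Because the potential persists between steps, the neuron acts as a small memory, which is the property the paper exploits for time-dependent reconstruction.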

Deep Learning for HDR Imaging: State-of-the-Art and Future Trends

  • Lin Wang, Kuk-Jin Yoon
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2022
This study hierarchically and structurally groups existing deep HDR imaging methods into five categories based on the number/domain of input exposures, the number of learning tasks, novel sensor data, novel learning strategies, and applications, and provides a constructive discussion of the potential and challenges of each category.

References

Showing 1-10 of 32 references

Event Enhanced High-Quality Image Recovery

An explainable network, the event-enhanced sparse learning network (eSL-Net), is proposed to recover high-quality images from event cameras, improving on the state of the art by 7-12 dB.

DeOccNet: Learning to See Through Foreground Occlusions in Light Fields

This paper handles the LF de-occlusion (LF-DeOcc) problem using a deep encoder-decoder network (DeOccNet), the first deep-learning-based LF-DeOcc method.

Synthetic aperture imaging using pixel labeling via energy minimization

Continuously tracking and see-through occlusion based on a new hybrid synthetic aperture imaging model

This algorithm is the first to solve the occluded-people imaging and tracking problem in a joint multiple-camera synthetic aperture imaging domain, and can reliably locate and see people in challenging scenes.

Seeing Beyond Foreground Occlusion: A Joint Framework for SAP-Based Scene Depth and Appearance Reconstruction

This paper first characterizes the differences between multi-view reconstruction with and without foreground occlusion, and then proposes an iterative reconstruction approach in a global optimization framework, in which the reconstruction results are refined via a coarse-to-fine strategy.

Event-Based Vision: A Survey

This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.

Events-To-Video: Bringing Modern Computer Vision to Event Cameras

This work proposes a novel recurrent neural network to reconstruct videos from a stream of events and trains it on a large amount of simulated event data; it surpasses state-of-the-art reconstruction methods by a large margin and opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks.

Retina-Like Visual Image Reconstruction via Spiking Neural Model

The proposed architecture consists of a motion local excitation layer, a spike refining layer, and a visual reconstruction layer, motivated by bio-realistic leaky integrate-and-fire neurons and synapse connections with spike-timing-dependent plasticity (STDP) rules, and is flexible in reconstructing the full texture of natural scenes from the entirely new spike data.

Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios

The first state estimation pipeline that leverages the complementary advantages of events, standard frames, and inertial measurements by fusing them in a tightly coupled manner is presented, leading to an accuracy improvement of 130% over event-only pipelines and 85% over standard-frames-only visual-inertial systems, while remaining computationally tractable.