Corpus ID: 233388124

Skip-Convolutions for Efficient Video Processing

@article{Habibian2021SkipConvolutionsFE,
  title={Skip-Convolutions for Efficient Video Processing},
  author={A. Habibian and Davide Abati and T. Cohen and B. E. Bejnordi},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.11487}
}
We propose Skip-Convolutions to leverage the large amount of redundancy in video streams and save computation. Each video is represented as a series of changes across frames and network activations, denoted as residuals. We reformulate standard convolution to be efficiently computed on residual frames: each layer is coupled with a binary gate deciding whether a residual is important to the model prediction, e.g. foreground regions, or can be safely skipped, e.g. background regions. These…
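As a rough illustration of the norm-based gating variant described in the abstract, the sketch below caches the previous frame's input and output, convolves only the gated residual, and adds the result back, relying on the linearity of convolution. The module name `SkipConv2d` and the `threshold` parameter are illustrative assumptions, not the paper's implementation; the paper also describes learned gates and block-wise structured sparsity.

```python
import torch
import torch.nn.functional as F


class SkipConv2d(torch.nn.Module):
    """Minimal sketch of a residual-gated ("skip") convolution with a norm gate."""

    def __init__(self, in_ch, out_ch, kernel_size=3, threshold=0.1):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size,
                                    padding=kernel_size // 2)
        self.threshold = threshold  # gate: skip residuals below this magnitude (assumed value)
        self.prev_in = None         # cached input of the previous frame
        self.prev_out = None        # cached output of the previous frame

    def forward(self, x):
        if self.prev_in is None:
            # First frame: run a dense convolution and cache input and output.
            self.prev_in, self.prev_out = x, self.conv(x)
            return self.prev_out

        residual = x - self.prev_in
        # Binary gate per spatial position: keep only "important" residuals
        # (e.g. moving foreground), zero out the rest (e.g. static background).
        gate = (residual.abs().amax(dim=1, keepdim=True) > self.threshold).float()
        # Convolution is linear, so conv(prev + residual) = conv(prev) + conv(residual);
        # the bias is already contained in the cached output, hence bias=None here.
        delta = F.conv2d(residual * gate, self.conv.weight, None,
                         padding=self.conv.padding)
        out = self.prev_out + delta
        self.prev_in, self.prev_out = x, out
        return out
```

Note that this dense sketch only mirrors the arithmetic: zeroing gated residuals does not by itself reduce FLOPs. Realizing the claimed savings requires a sparse or block-sparse convolution kernel that actually skips the gated positions.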
