Dynamic Neural Networks: A Survey

@article{Han2021DynamicNN,
  title={Dynamic Neural Networks: A Survey},
  author={Yizeng Han and Gao Huang and Shiji Song and Le Yang and Honghui Wang and Yulin Wang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={PP}
}
  • Published 9 February 2021
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
Dynamic neural networks are an emerging research topic in deep learning. Compared with static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, yielding notable advantages in accuracy, computational efficiency, and adaptiveness. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) sample-wise… 
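The sample-wise adaptivity the abstract describes can be sketched in a few lines of NumPy. Everything below is a hypothetical toy (random linear "stages" and internal classifiers, an arbitrary confidence threshold), not the survey's method: easy inputs exit at an early internal classifier once its confidence crosses the threshold, while hard inputs traverse the full depth.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_inference(x, stages, classifiers, threshold=0.9):
    """Run features through stages; exit as soon as an intermediate
    classifier is confident enough (illustrative sketch)."""
    h = x
    for i, (stage, clf) in enumerate(zip(stages, classifiers)):
        h = stage(h)
        probs = softmax(clf(h))
        if probs.max() >= threshold:
            return int(probs.argmax()), i   # early exit at stage i
    return int(probs.argmax()), len(stages) - 1  # used the full network

# toy example: three tanh stages and three linear classifiers
rng = np.random.default_rng(0)
stages = [lambda h, W=rng.standard_normal((4, 4)): np.tanh(W @ h)
          for _ in range(3)]
classifiers = [lambda h, W=rng.standard_normal((2, 4)): W @ h
               for _ in range(3)]
label, exit_stage = early_exit_inference(rng.standard_normal(4),
                                         stages, classifiers)
```

The computational graph now depends on the input: the number of stages actually executed is `exit_stage + 1`, which is exactly the "dynamic depth" flavor of sample-wise adaptation.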

Citations

SCAI: A Spectral data Classification framework with Adaptive Inference for the IoT platform
TLDR
This paper is the first attempt at performance and efficiency optimization via an adaptive-inference computing architecture for the spectral detection problem on the IoT platform; experiments show that the proposed method not only achieves adaptive inference under different computational budgets and samples but also attains higher performance with fewer computational resources than existing methods.
A Multimodal Dynamic Neural Network for Call for Help Recognition in Elevators
TLDR
This work collects and constructs an audiovisual dataset dedicated to the proposed task of identifying real and fake calls for help in elevator scenes, and presents a novel instance-modality-wise dynamic framework that efficiently uses the information from each modality to make inferences.
AdaFocus V2: End-to-End Training of Spatial Dynamic Networks for Video Recognition
TLDR
This work reformulates the training of AdaFocus as a simple one-stage algorithm by introducing a differentiable interpolation-based patch selection operation, enabling efficient end-to-end optimization, and presents an improved training scheme to address the issues introduced by the one-stage formulation, including the lack of supervision, input diversity, and training stability.
MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking
TLDR
A new dynamic modality-aware model generation module (named MFGNet) is proposed to boost the message communication between visible and thermal data by adaptively adjusting the convolutional kernels for various input images in practical tracking.
DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers
TLDR
A hardware-efficient dynamic inference regime, named dynamic weight slicing, is proposed that adaptively slices a part of the network parameters for inference while keeping them stored statically and contiguously in hardware, avoiding the extra burden of sparse computation.
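The weight-slicing idea above can be illustrated with a hypothetical dense layer (a sketch of the general technique, not the DS-Net++ implementation): the slimmed sub-network reads a contiguous prefix of the stored weight tensor, so no sparse gather is ever performed.

```python
import numpy as np

def sliced_linear(x, W, b, ratio):
    """Use only the first `ratio` fraction of output units; W stays
    stored densely and contiguously, and the slice is a cheap view."""
    k = max(1, int(round(ratio * W.shape[0])))
    return W[:k] @ x + b[:k]   # contiguous slice, no sparse indexing

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 5))
b = np.zeros(8)
x = rng.standard_normal(5)

full = sliced_linear(x, W, b, 1.0)   # full width
half = sliced_linear(x, W, b, 0.5)   # half the units -> ~half the FLOPs
```

Because the retained units are a prefix of the full layer, `half` is exactly the first half of `full`: the small and large sub-networks share weights by construction.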
Siamese Labels Auxiliary Learning
TLDR
This paper proposes a novel auxiliary training method, Siamese Labels Auxiliary Learning (SiLa), which improves the performance of common models without increasing test parameters and proves that SiLa can improve the generalization of the model.
Spatial–Spectral Cross-Correlation Embedded Dual-Transfer Network for Object Tracking Using Hyperspectral Videos
TLDR
This paper proposes a spatial–spectral cross-correlation embedded dual-transfer network (SSDT-Net), using transfer learning to carry the knowledge of traditional color videos over to the HS tracking task, together with a dual-transfer strategy to gauge the similarity between the source and target domains.
Efficient High-Resolution Deep Learning: A Survey
TLDR
This survey describes efficient high-resolution deep learning methods, summarizes real-world applications of high-resolution deep learning, and provides comprehensive information about available high-resolution datasets.
Robust Knowledge Adaptation for Dynamic Graph Neural Networks
TLDR
The proposed AdaNet is the first attempt to explore robust knowledge adaptation via reinforcement learning for dynamic graph neural networks and can adaptively propagate knowledge to other nodes for learning robust node embedding representations.
NSNet: Non-saliency Suppression Sampler for Efficient Video Recognition
TLDR
A novel Non-saliency Suppression Network (NSNet) is proposed, which effectively suppresses the responses of non-salient frames; it not only achieves a state-of-the-art accuracy–efficiency trade-off but also delivers significantly faster practical inference than state-of-the-art methods.
...

References

SHOWING 1-10 OF 309 REFERENCES
Channel Selection Using Gumbel Softmax
TLDR
This work uses a combination of batch activation loss and classification loss, and Gumbel reparameterization to learn network structure, and proposes a single end-to-end framework that can improve inference efficiency in both settings.
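The Gumbel-softmax reparameterization that this channel-selection work relies on can be shown in NumPy. The keep/drop gate below is a hypothetical sketch of the general trick, not the paper's architecture: a categorical sample that stays differentiable with respect to its logits.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Soft, differentiable sample from a categorical distribution:
    add Gumbel noise to the logits, then apply a temperature softmax."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

# one keep/drop gate for a single channel: 2 logits (keep, drop)
rng = np.random.default_rng(0)
keep_logits = np.array([2.0, -2.0])          # strongly favors "keep"
soft_gate = gumbel_softmax(keep_logits, tau=0.5, rng=rng)
hard_gate = np.eye(2)[soft_gate.argmax()]    # straight-through hard sample
```

In training one would backpropagate through `soft_gate` (or use the straight-through estimator with `hard_gate` in the forward pass), so the network structure itself is learned jointly with the task loss.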
BERT Loses Patience: Fast and Robust Inference with Early Exit
TLDR
The proposed patience-based early-exit method couples an internal classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for a pre-defined number of steps, improving inference efficiency as well as accuracy and robustness.
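The patience criterion itself is simple enough to sketch in pure Python (predictions and patience value below are hypothetical): stop at the first layer whose argmax prediction has repeated for `patience` consecutive internal classifiers.

```python
def patience_early_exit(intermediate_preds, patience=2):
    """Exit once successive internal classifiers agree `patience`
    times in a row; otherwise fall through to the last layer."""
    streak, last = 0, None
    for layer, pred in enumerate(intermediate_preds):
        streak = streak + 1 if pred == last else 1
        last = pred
        if streak >= patience:
            return pred, layer          # early exit at this layer
    return last, len(intermediate_preds) - 1

# predictions from six hypothetical internal classifiers
pred, layer = patience_early_exit([3, 1, 1, 1, 2, 2], patience=2)
# -> pred == 1, layer == 2: label 1 has repeated twice by layer index 2
```

Because the criterion looks at prediction stability rather than raw confidence, it is less sensitive to miscalibrated softmax scores than a fixed confidence threshold.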
CBAM: Convolutional Block Attention Module
TLDR
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs.
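A loose NumPy sketch of CBAM's channel-then-spatial attention on a `(C, H, W)` feature map. The MLP weights are random placeholders, and the spatial branch's 7×7 convolution is replaced by a plain average of the pooled channel maps for brevity, so this illustrates the structure rather than reproducing the module exactly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(feat, W1, W2):
    """Sequential channel and spatial attention over (C, H, W)."""
    # channel attention: shared MLP over avg- and max-pooled descriptors
    avg_c = feat.mean(axis=(1, 2))
    max_c = feat.max(axis=(1, 2))
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)   # reduce then expand
    ch_att = sigmoid(mlp(avg_c) + mlp(max_c))      # (C,)
    feat = feat * ch_att[:, None, None]
    # spatial attention: pool across channels (conv replaced by average)
    sp_att = sigmoid((feat.mean(axis=0) + feat.max(axis=0)) / 2.0)  # (H, W)
    return feat * sp_att[None]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 3))
W1 = rng.standard_normal((2, 4))   # reduction MLP (ratio 2)
W2 = rng.standard_normal((4, 2))
out = cbam(x, W1, W2)
```

The output keeps the input's shape, which is why the module can be dropped into any CNN stage with negligible overhead.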
Universal Transformers
TLDR
The Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses issues of parallelizability and global receptive field, is proposed.
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
TLDR
This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
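The latency/accuracy trade-off of the width multiplier can be made concrete with a small FLOPs calculation (the layer shape below is an arbitrary example): the multiply-add cost of a depthwise-separable layer shrinks roughly with the square of the multiplier, since it scales both the input and output channel counts.

```python
def depthwise_separable_madds(h, w, c_in, c_out, k=3, alpha=1.0):
    """Multiply-adds of one depthwise-separable conv layer under a
    width multiplier alpha applied to both channel counts."""
    c_in, c_out = int(alpha * c_in), int(alpha * c_out)
    depthwise = h * w * c_in * k * k     # per-channel k x k filtering
    pointwise = h * w * c_in * c_out     # 1 x 1 channel mixing
    return depthwise + pointwise

full = depthwise_separable_madds(14, 14, 256, 256)             # 13_296_640
slim = depthwise_separable_madds(14, 14, 256, 256, alpha=0.5)  # 3_437_056
```

Here `alpha=0.5` cuts the cost to about 26% of the full layer, close to the `alpha**2 = 0.25` the paper's analysis predicts (the pointwise term dominates and scales exactly quadratically).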
BranchyNet: Fast inference via early exiting from deep neural networks
TLDR
The BranchyNet architecture is presented, a novel deep network architecture that is augmented with additional side branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
End-to-end learning of decision trees and forests
  • 2019
Dynamic Network Quantization for Efficient Video Inference
TLDR
A dynamic network quantization framework, that selects optimal precision for each frame conditioned on the input for efficient video recognition is proposed, that provides significant savings in computation and memory usage while outperforming the existing state-of-the-art methods.
BlockCopy: High-Resolution Video Processing with Block-Sparse Feature Propagation and Online Policies
TLDR
A scheme that accelerates pretrained frame-based CNNs to process video more efficiently, compared to standard frame-by-frame processing, and achieves significant FLOPS savings and inference speedup with minimal impact on accuracy.
IA-RED2: Interpretability-Aware Redundancy Reduction for Vision Transformers
TLDR
It is demonstrated that the interpretability that naturally emerges in the IA-RED2 framework can outperform the raw attention learned by the original vision transformer, as well as attention maps generated by off-the-shelf interpretation methods, with both qualitative and quantitative results.
...