
Early-exit deep neural networks for distorted images: providing an efficient edge offloading

@article{Pacheco2021EarlyexitDN,
  title={Early-exit deep neural networks for distorted images: providing an efficient edge offloading},
  author={Roberto Gonçalves Pacheco and Fernanda D.V.R. Oliveira and Rodrigo De Souza Couto},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.09343}
}
Edge offloading for deep neural networks (DNNs) can be adaptive to the input's complexity by using early-exit DNNs. These DNNs have side branches throughout their architecture, allowing the inference to end earlier, at the edge. The branches estimate the accuracy for a given input. If this estimated accuracy reaches a threshold, the inference ends at the edge. Otherwise, the edge offloads the inference to the cloud to process the remaining DNN layers. However, DNNs for image classification deal…
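The decision rule described in the abstract is simple enough to sketch. The snippet below is a minimal, illustrative PyTorch version, not the paper's implementation: edge_layers, side_branch, and the 0.8 threshold are hypothetical placeholders, and the maximum softmax probability stands in for the branch's estimated accuracy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def edge_inference(x, edge_layers: nn.Module, side_branch: nn.Module,
                   threshold: float = 0.8):
    """Run the edge-side part of an early-exit DNN on a single-image batch x.

    Returns (prediction, payload): if the side branch is confident enough,
    inference ends on the edge and payload is None; otherwise prediction is
    None and payload is the intermediate tensor to offload to the cloud.
    edge_layers and side_branch are hypothetical module names.
    """
    h = edge_layers(x)                       # layers deployed on the edge
    probs = F.softmax(side_branch(h), dim=-1)
    confidence, prediction = probs.max(dim=-1)
    if confidence.item() >= threshold:       # estimated accuracy reaches the threshold
        return prediction.item(), None       # inference ends on the edge
    return None, h                           # offload intermediate features to the cloud
```

The cloud side would resume from the returned tensor and run the remaining layers.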

Citations

Towards Edge Computing Using Early-Exit Convolutional Neural Networks
TLDR
The experiments show that early classification in CNNs with early exits can reduce the data load and the inference time without degrading application performance.

References

Showing 1-10 of 20 references
Dynamic Adaptive DNN Surgery for Inference Acceleration on the Edge
TLDR
This work designs DNN surgery, which allows a partitioned DNN to be processed at both the edge and the cloud while limiting data transmission, and proposes a Dynamic Adaptive DNN Surgery (DADS) scheme that optimally partitions the DNN under different network conditions.
Quality Robust Mixtures of Deep Neural Networks
TLDR
This work proposes a mixture-of-experts-based ensemble method, MixQualNet, that is robust to multiple different types of distortions, and introduces weight sharing into MixQualNet, utilizing the TreeNet weight-sharing architecture and introducing the Inverted TreeNet architecture.
Inference Time Optimization Using BranchyNet Partitioning
TLDR
This work addresses the problem of partitioning a BranchyNet, a DNN type in which inference can stop at intermediate layers, and shows that this partitioning can be treated as a shortest-path problem and thus solved in polynomial time.
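As a concrete illustration of why this is polynomial, consider the special case of a chain-structured DNN with no side branches: the shortest path collapses to a single choice of split point. The sketch below uses an illustrative cost model (per-layer latencies plus transmission time), not the paper's exact formulation.

```python
def best_split(edge_time, cloud_time, tx_time):
    """Pick the split point k minimizing end-to-end latency (toy cost model).

    edge_time[i]  : latency of layer i on the edge device
    cloud_time[i] : latency of layer i in the cloud
    tx_time[k]    : time to transmit the data crossing split k
                    (k = 0 sends the raw input, k = n only the final result)
    """
    n = len(edge_time)
    assert len(cloud_time) == n and len(tx_time) == n + 1
    cost = lambda k: sum(edge_time[:k]) + tx_time[k] + sum(cloud_time[k:])
    return min(range(n + 1), key=cost)  # one candidate per split point
```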
Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices
TLDR
Compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.
DeepCorrect: Correcting DNN Models Against Image Distortions
TLDR
This work proposes a metric to identify the convolutional filters most susceptible to noise and rank them by the gain in classification accuracy obtained upon correction; the resulting approach significantly improves the robustness of DNNs against distorted images and outperforms alternative approaches.
BranchyNet: Fast inference via early exiting from deep neural networks
TLDR
The BranchyNet architecture is presented: a novel deep network augmented with additional side-branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
SPINN: synergistic progressive inference of neural networks over device and cloud
TLDR
SPINN is proposed, a distributed inference system that employs synergistic device-cloud computation together with a progressive inference method to deliver fast CNN inference across diverse settings, providing robust operation under uncertain connectivity conditions and significant energy savings compared to cloud-centric execution.
Understanding how image quality affects deep neural networks
Samuel F. Dodge, Lina Karam · 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), 2016
TLDR
An evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions shows that the existing networks are susceptible to these quality distortions, particularly to blur and noise.
On classification of distorted images with deep convolutional neural networks
TLDR
The results suggest that, under certain conditions, fine-tuning with noisy images can alleviate much of the effect of distorted inputs and is more practical than re-training.
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
TLDR
This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
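The two hyper-parameters are MobileNets' width multiplier (usually written α) and resolution multiplier (ρ). As a rough illustration, the toy function below counts multiply-accumulates for one depthwise-separable layer under both multipliers, following the cost formula in the MobileNets paper; the layer dimensions in the example calls are made up.

```python
def sep_conv_macs(dk, m, n, df, alpha=1.0, rho=1.0):
    """MACs of one depthwise-separable conv layer (MobileNets cost model).

    dk: kernel size; m, n: input/output channels; df: feature-map size.
    alpha scales channel widths; rho scales the input resolution.
    """
    m, n, df = int(alpha * m), int(alpha * n), int(rho * df)
    depthwise = dk * dk * m * df * df   # per-channel spatial filtering
    pointwise = m * n * df * df         # 1x1 convolution mixing channels
    return depthwise + pointwise

# Example (made-up layer): halving the width multiplier cuts cost roughly 4x.
print(sep_conv_macs(3, 64, 128, 56))              # full-size layer
print(sep_conv_macs(3, 64, 128, 56, alpha=0.5))   # alpha = 0.5
```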