Corpus ID: 215548680

Prune2Edge: A Multi-Phase Pruning Pipelines to Deep Ensemble Learning in IIoT

@article{Alhalabi2020Prune2EdgeAM,
  title={Prune2Edge: A Multi-Phase Pruning Pipelines to Deep Ensemble Learning in IIoT},
  author={Besher Alhalabi and Mohamed Medhat Gaber and Shadi Saleh Basurra},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.04710}
}
Most recently, with the proliferation of IoT devices, computational nodes in IIoT (Industrial Internet of Things) manufacturing systems, and the launch of 5G networks, there will be millions of connected devices generating a massive amount of data. In such an environment, controlling systems need to be intelligent enough to deal with this vast amount of data and detect defects in real time. Driven by such a need, artificial intelligence models such as deep learning have to be deployed… 
1 Citation

Defending Against Localized Adversarial Attacks on Edge-Deployed Monocular Depth Estimators

TLDR
This work proposes the first defense mechanism against adversarial patches for a regression network, in the context of monocular depth estimation on an edge device, maintaining performance on clean images while also achieving near-clean levels of performance on adversarial inputs.

References

Showing 1-10 of 39 references

Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing

TLDR
A comprehensive survey of the recent research efforts on EI is conducted, which provides an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model toward training/inference at the network edge.

EnSyth: A Pruning Approach to Synthesis of Deep Learning Ensembles

TLDR
EnSyth, a deep learning ensemble approach that enhances the predictability of compact neural network models, is described: a base model is pruned under different hyperparameter settings for the pruning method, and ensemble learning then synthesises the outputs of the compressed models into a new pool of classifiers.
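
A minimal sketch of the ensemble-synthesis step as summarised above, assuming a pool of already-compressed classifiers whose class predictions are combined by majority vote; the toy model pool and the voting rule are illustrative, not the authors' implementation.

```python
import numpy as np

def ensemble_predict(models, x):
    """Majority vote over the class ids predicted by a pool of pruned models."""
    votes = np.array([m(x) for m in models])  # one class id per model
    return int(np.argmax(np.bincount(votes)))

# Three toy "pruned models" that return fixed class ids for the demo.
models = [lambda x: 1, lambda x: 1, lambda x: 0]
print(ensemble_predict(models, x=None))  # -> 1
```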

Communication-Efficient Learning of Deep Networks from Decentralized Data

TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
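
A minimal sketch of iterative model averaging in the federated setting, assuming each client returns its locally trained per-layer weights as numpy arrays and the server averages them weighted by local data size; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-layer weights across clients, weighted by local data size."""
    total = sum(client_sizes)
    averaged = []
    for layer in range(len(client_weights[0])):
        acc = np.zeros_like(client_weights[0][layer])
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Two toy clients, each with a single 2x2 layer.
w_a = [np.array([[1.0, 2.0], [3.0, 4.0]])]
w_b = [np.array([[3.0, 4.0], [5.0, 6.0]])]
print(federated_average([w_a, w_b], client_sizes=[100, 300]))
```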

To prune, or not to prune: exploring the efficacy of pruning for model compression

TLDR
Across a broad range of neural network architectures, large-sparse models are found to consistently outperform small-dense models, achieving up to a 10x reduction in the number of non-zero parameters with minimal loss in accuracy.
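
A minimal sketch of magnitude pruning, the mechanism behind the large-sparse models above: the smallest-magnitude weights are zeroed until a target sparsity is reached. This one-shot version omits the gradual pruning schedule studied in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero (at least) the fraction `sparsity` of weights with smallest |value|."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

w = np.random.randn(4, 4)
pruned = magnitude_prune(w, sparsity=0.75)
print(f"non-zeros: {np.count_nonzero(pruned)} / {w.size}")
```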

Federated Learning: Strategies for Improving Communication Efficiency

TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
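
A minimal sketch of the quantization ingredient of a sketched update, assuming each coordinate of the model update is reduced to its sign plus one shared scale before uplink; the paper combines this with random rotations and subsampling, which are omitted here.

```python
import numpy as np

def quantize_update(update):
    """Compress an update to per-coordinate signs and one shared scale."""
    scale = np.abs(update).mean()
    signs = np.sign(update).astype(np.int8)  # ~1 bit per coordinate in principle
    return signs, scale

def dequantize_update(signs, scale):
    """Reconstruct an approximate update on the server side."""
    return signs.astype(np.float64) * scale

u = np.random.randn(8)
print(dequantize_update(*quantize_update(u)))
```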

PipeDream: Fast and Efficient Pipeline Parallel DNN Training

TLDR
Experiments with five different DNNs on two different clusters show that PipeDream is up to 5x faster in time-to-accuracy compared to data-parallel training.

Data-driven Task Allocation for Multi-task Transfer Learning on the Edge

TLDR
A novel task allocation scheme is proposed that assigns more important tasks to more powerful edge devices to maximize overall decision performance, realised through a Data-driven Cooperative Task Allocation (DCTA) approach.
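
A minimal sketch of importance-driven allocation as summarised above, assuming tasks carry an importance score and devices a capability score, matched greedily by rank; the scores, names, and greedy rule are hypothetical stand-ins for the paper's data-driven DCTA procedure.

```python
def allocate(tasks, devices):
    """Pair the most important tasks with the most capable devices."""
    by_importance = sorted(tasks, key=lambda t: t[1], reverse=True)
    by_capability = sorted(devices, key=lambda d: d[1], reverse=True)
    return [(task, device) for (task, _), (device, _) in zip(by_importance, by_capability)]

tasks = [("detect_defect", 0.9), ("log_telemetry", 0.2), ("predict_wear", 0.6)]
devices = [("gateway", 8.0), ("sensor_node", 1.0), ("edge_server", 32.0)]
print(allocate(tasks, devices))
# -> [('detect_defect', 'edge_server'), ('predict_wear', 'gateway'), ('log_telemetry', 'sensor_node')]
```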

FitNets: Hints for Thin Deep Nets

TLDR
This paper extends the idea of a student network that could imitate the soft output of a larger teacher network or ensemble of networks, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student.
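
A minimal sketch of the hint loss at the core of FitNets-style training, assuming the thin student's intermediate activation is mapped to the wider teacher's hint layer by a learned regressor and penalised by L2 distance; shapes and names are illustrative.

```python
import numpy as np

def hint_loss(student_feat, teacher_feat, regressor):
    """L2 distance between the regressed student feature and the teacher hint."""
    projected = student_feat @ regressor  # map student dims to teacher dims
    return 0.5 * np.sum((projected - teacher_feat) ** 2)

student = np.random.randn(16)              # thin student's intermediate activation
teacher = np.random.randn(32)              # wider teacher's hint activation
regressor = np.random.randn(16, 32) * 0.1  # trained jointly with the student
print(hint_loss(student, teacher, regressor))
```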

Private and Scalable Personal Data Analytics Using Hybrid Edge-to-Cloud Deep Learning

TLDR
The authors present a hybrid framework where user-centered edge devices and resources can complement the cloud for providing privacy-aware, accurate, and efficient analytics.

Compressing Deep Convolutional Networks using Vector Quantization

TLDR
Using a state-of-the-art CNN, this paper achieves 16-24x compression of the network with only a 1% loss of classification accuracy, and finds that for compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods.
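
A minimal sketch of scalar k-means quantization of a dense layer, the simplest of the quantization schemes compared in the paper: weights become indices into a small codebook, which is where the compression comes from. scikit-learn's KMeans stands in for the clustering step; the paper's exact variants (e.g. product quantization of weight sub-vectors) differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_layer(weights, n_codes=16):
    """Cluster scalar weights into n_codes centroids; store indices + codebook."""
    km = KMeans(n_clusters=n_codes, n_init=10).fit(weights.reshape(-1, 1))
    return km.labels_.astype(np.uint8), km.cluster_centers_.ravel()  # 4 bits/weight

def dequantize_layer(indices, codebook, shape):
    """Rebuild an approximate weight matrix from codebook lookups."""
    return codebook[indices].reshape(shape)

w = np.random.randn(8, 8).astype(np.float32)
idx, book = quantize_layer(w)
print("max reconstruction error:", np.abs(w - dequantize_layer(idx, book, w.shape)).max())
```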