Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices

@article{Teerapittayanon2017DistributedDN,
  title={Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices},
  author={Surat Teerapittayanon and Bradley McDanel and H. T. Kung},
  journal={2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)},
  year={2017},
  pages={328-339}
}
We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize the usefulness of the extracted features that are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance…
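
To make the exit mechanism concrete, below is a minimal sketch of how a hierarchical inference could decide between answering locally and forwarding features to the cloud, using a normalized-entropy confidence test in the spirit of the paper. The threshold value and the callables edge_model and cloud_model are assumptions of this sketch, not the authors' implementation.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def normalized_entropy(probs):
    # Entropy of a softmax output, normalized to [0, 1] by log(num_classes).
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum() / np.log(len(probs)))

def ddnn_inference(x, edge_model, cloud_model, threshold=0.3):
    # The shallow section runs on the end device / edge; it produces both a
    # local classification and a compact feature to forward if needed.
    local_logits, features = edge_model(x)
    local_probs = softmax(local_logits)
    if normalized_entropy(local_probs) < threshold:
        # Confident enough: exit locally, no communication with the cloud.
        return int(np.argmax(local_probs)), "edge"
    # Otherwise forward the intermediate features to the cloud section.
    cloud_logits = cloud_model(features)
    return int(np.argmax(softmax(cloud_logits))), "cloud"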

Conditionally Deep Hybrid Neural Networks Across Edge and Cloud

An extensive design-space exploration is performed with the goal of minimizing energy consumption at the edge while achieving state-of-the-art accuracy on image classification tasks; it yields insights on designing efficient hybrid networks that achieve significantly higher energy efficiency than full-precision networks in edge-cloud distributed intelligence systems.
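
As a sketch of what such a design-space exploration looks like in code, the loop below sweeps hypothetical configurations (edge-section bit width, conditional-exit depth) and keeps the lowest-energy one that meets an accuracy floor. The Candidate fields and the two estimator callbacks are assumptions standing in for profiling or simulation; they are not the paper's actual methodology.

from dataclasses import dataclass

@dataclass
class Candidate:
    # One point in a hypothetical design space: precision of the edge
    # section and the layer at which the conditional exit is placed.
    edge_bits: int
    exit_layer: int

def explore(candidates, estimate_accuracy, estimate_edge_energy, accuracy_floor):
    # Generic sweep: keep the lowest-energy configuration that still meets
    # the accuracy constraint.
    best = None
    for c in candidates:
        if estimate_accuracy(c) < accuracy_floor:
            continue
        energy = estimate_edge_energy(c)
        if best is None or energy < best[1]:
            best = (c, energy)
    return best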

Towards Ubiquitous Intelligent Computing: Heterogeneous Distributed Deep Neural Networks

This work proposes the Heterogeneous Distributed Deep Neural Network (HDDNN) over the distributed hierarchy, targeting ubiquitous intelligent computing; it better exploits the hierarchical distributed system for DNN inference and tailors the DNN to the properties of real-world distributed systems.

FLEE: A Hierarchical Federated Learning Framework for Distributed Deep Neural Network over Cloud, Edge, and End Device

This article comprehensively considers various data distributions on end devices and edges, proposing a hierarchical federated learning framework, FLEE, which realizes dynamic model updates without redeployment and improves model performance under all kinds of data distributions.
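
To illustrate the hierarchy, here is a minimal sketch of two-level federated averaging: devices report to their edge, edges report to the cloud. The flat parameter vectors and sample-count weighting are assumptions of this sketch, not FLEE's actual update rule.

import numpy as np

def weighted_average(params, weights):
    # Average flat parameter vectors, weighted by local sample counts.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, params))

def hierarchical_round(device_updates_per_edge):
    # device_updates_per_edge: one list per edge, each holding
    # (param_vector, num_samples) tuples reported by that edge's devices.
    edge_models, edge_sizes = [], []
    for device_updates in device_updates_per_edge:
        params = [p for p, _ in device_updates]
        counts = [n for _, n in device_updates]
        edge_models.append(weighted_average(params, counts))  # edge aggregation
        edge_sizes.append(sum(counts))
    # The cloud aggregates the edge models into the global model.
    return weighted_average(edge_models, edge_sizes)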

Resource-Efficient Distributed Deep Neural Networks Empowered by Intelligent Software-Defined Networking

A novel Intelligent Software-Defined Networking (ISDN) scheme is proposed to manage bandwidth and computing resources across the network via the SDN paradigm, together with a Markov Decision Process (MDP) based dynamic task-offloading model that derives the optimal offloading policy for DNN tasks.
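
As background for the MDP-based offloading idea, the sketch below solves a generic finite MDP by value iteration and reads off a greedy offloading policy. The state space, the action encoding, and the reward model (e.g., negative latency or energy cost) are illustrative assumptions, not the paper's formulation.

import numpy as np

def value_iteration(P, R, gamma=0.9, eps=1e-6):
    # P[a][s, s'] : transition probabilities under offloading action a.
    # R[a][s]     : expected immediate reward for taking action a in state s.
    num_actions, num_states = len(P), P[0].shape[0]
    V = np.zeros(num_states)
    while True:
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(num_actions)])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < eps:
            break
        V = V_new
    # Greedy policy: one offloading decision per state
    # (e.g., 0 = run locally, 1 = offload to edge, 2 = offload to cloud).
    return Q.argmax(axis=0), V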

Guardians of the Deep Fog: Failure-Resilient DNN Inference from Edge to Cloud

Ashkan Yousefpour, Siddartha Devic, J. Jue. Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things, 2019.
This work introduces deepFogGuard, a DNN architecture augmentation scheme for making the distributed DNN inference task failure-resilient, and introduces skip hyperconnections in distributed DNNs, which are the basis of deepFogGuard's design to provide resiliency.
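
A minimal sketch of the bypass idea is shown below: when a physical node in the chain fails, its input is routed around it to the next node through a projection. The per-node projection matrices and the resilient_forward helper are assumptions of this sketch, loosely modeled on skip hyperconnections rather than reproducing deepFogGuard's architecture.

import numpy as np

def resilient_forward(x, stages, skip_projections, is_alive):
    # stages: per-node forward functions for consecutive DNN partitions.
    # skip_projections: per-node matrices that reshape a node's input so it
    # can be fed directly to the next node when that node has failed.
    h = x
    for i, stage in enumerate(stages):
        if is_alive[i]:
            h = stage(h)
        else:
            # Node i failed: bypass it, forwarding its input to the next node.
            h = skip_projections[i] @ h
    return h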

Fully Distributed Deep Learning Inference on Resource-Constrained Edge Devices

This paper jointly optimizes memory, computation, and communication demands for the distributed execution of complete neural networks, covering all layers, through techniques that combine feature and weight partitioning with a communication-aware layer-fusion approach to enable holistic optimization across layers.
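
To show what weight partitioning means in the simplest case, the sketch below splits the output neurons of a fully connected layer across devices and checks that concatenating the partial results reproduces the monolithic layer. The helper names are assumptions of this sketch; the paper's techniques additionally cover feature partitioning and layer fusion.

import numpy as np

def partition_linear_layer(W, b, num_devices):
    # Weight partitioning: each device stores only its slice of W and b.
    return list(zip(np.array_split(W, num_devices, axis=0),
                    np.array_split(b, num_devices)))

def distributed_linear(x, shards):
    # Each device computes its share of the output from the full input x.
    return np.concatenate([W_i @ x + b_i for W_i, b_i in shards])

# Sanity check: the partitioned execution matches the single-device layer.
rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(12, 8)), rng.normal(size=12), rng.normal(size=8)
assert np.allclose(distributed_linear(x, partition_linear_layer(W, b, 3)), W @ x + b)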

Towards Real-time Cooperative Deep Inference over the Cloud and Edge End Devices

This paper formulates optimal DNN partitioning as a min-cut problem in a directed acyclic graph (DAG) specially derived from the DNN and proposes a novel two-stage approach named quick deep model partition (QDMP), which enables real-time cooperative deep inference over the cloud and edge end devices.
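
The general shape of such an s-t min-cut formulation is sketched below with networkx: every layer is connected to an "edge" source and a "cloud" sink with the execution cost it would pay on the opposite side, and layer-to-layer edges carry the tensor-transfer cost. The cost models and graph construction details here are assumptions, not QDMP's exact derivation; layer_graph is assumed to be a networkx DiGraph of the DNN.

import networkx as nx

def partition_dnn(layer_graph, edge_cost, cloud_cost, transfer_cost):
    G = nx.DiGraph()
    for layer in layer_graph.nodes:
        # Cut (source, layer) if the layer runs in the cloud; cut
        # (layer, sink) if it runs on the edge.
        G.add_edge("edge", layer, capacity=cloud_cost[layer])
        G.add_edge(layer, "cloud", capacity=edge_cost[layer])
    for u, v in layer_graph.edges:
        # Cut when u stays on the edge and v moves to the cloud:
        # pay the cost of shipping the intermediate tensor.
        G.add_edge(u, v, capacity=transfer_cost[(u, v)])
    cut_value, (edge_side, cloud_side) = nx.minimum_cut(G, "edge", "cloud")
    return cut_value, edge_side - {"edge"}, cloud_side - {"cloud"}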

Performance analysis of local exit for distributed deep neural networks over cloud and edge computing

This study analyzes the effect of models with single and multiple local exits on DNN inference in an edge-computing environment and shows that a single-exit model performs better than a multi-exit model at all exit points with respect to the number of locally exited samples, inference accuracy, and inference latency.

Toward Collaborative Inferencing of Deep Neural Networks on Internet-of-Things Devices

This article enables the utilization of the aggregated computing power of several IoT devices by creating a local collaborative network for a subset of DNNs (visual-based applications), and enhances the collaborative system by creating a balanced and distributed processing pipeline while adjusting the tasks in real time.
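
One simple way to think about a balanced pipeline is sketched below: a chain of layers is split into contiguous stages so that each device's stage time approaches the ideal balanced share. The greedy heuristic, the per-layer cost numbers, and the per-device speed numbers are assumptions of this sketch, not the article's scheduling algorithm.

def balance_pipeline(layer_costs, device_speeds):
    # Ideal makespan if work could be divided perfectly across devices.
    total_time = sum(layer_costs) / sum(device_speeds)
    stages, current = [], []
    speed_iter = iter(device_speeds)
    speed = next(speed_iter)
    for i, cost in enumerate(layer_costs):
        current.append(i)
        remaining_devices = len(device_speeds) - len(stages) - 1
        stage_time = sum(layer_costs[j] for j in current) / speed
        if stage_time >= total_time and remaining_devices > 0 and i < len(layer_costs) - 1:
            stages.append(current)       # close this device's stage
            current = []
            speed = next(speed_iter)
    stages.append(current)
    return stages  # list of layer-index lists, one per device in pipeline order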

Survey of deployment locations and underlying hardware architectures for contemporary deep neural networks

This article explores which type of underlying hardware and architectural approach is best used in various deployment locations when implementing deep neural networks and divides the existing solutions into 12 different categories, organized in two dimensions.
...

References


FireCaffe: Near-Linear Acceleration of Deep Neural Network Training on Compute Clusters

FireCaffe is presented, which successfully scales deep neural network training across a cluster of GPUs, and finds that reduction trees are more efficient and scalable than the traditional parameter server approach.
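
A minimal sketch of the reduction-tree idea is given below: workers combine gradients pairwise at each level, so aggregation takes O(log N) communication steps rather than funneling all N updates through one parameter server. The pairwise scheme and the toy gradient vectors are assumptions of this sketch, not FireCaffe's implementation.

import numpy as np

def reduce_tree(gradients):
    level = list(gradients)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level) - 1, 2):
            next_level.append(level[i] + level[i + 1])  # pairwise combine
        if len(level) % 2 == 1:
            next_level.append(level[-1])                # odd worker passes up
        level = next_level
    return level[0]

# e.g., the sum of per-worker gradient vectors
workers = [np.ones(4) * k for k in range(8)]
assert np.allclose(reduce_tree(workers), sum(workers))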

Large Scale Distributed Deep Networks

This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
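
The sketch below shows the basic shape of an asynchronous parameter server in the spirit of Downpour SGD: workers pull current parameters, compute gradients on their shard, and push updates without synchronizing with each other. The class and function names are assumptions of this sketch.

import numpy as np

class ParameterServer:
    def __init__(self, params, lr=0.01):
        self.params = params.copy()   # flat numpy parameter vector
        self.lr = lr

    def pull(self):
        return self.params.copy()

    def push(self, grad):
        # Updates apply immediately, so parameters may be slightly stale
        # from any individual worker's point of view.
        self.params -= self.lr * grad

def worker_step(server, compute_grad, batch):
    local_params = server.pull()
    server.push(compute_grad(local_params, batch))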

Embedded Binarized Neural Networks

eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network, leading to a 32x reduction in required temporary space.

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

BinaryConnect is introduced, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated, and near state-of-the-art results with BinaryConnect are obtained on the permutation-invariant MNIST, CIFAR-10 and SVHN.
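
A toy sketch of the core mechanic follows: the forward pass and gradient use sign-binarized weights, while the update accumulates in the stored real-valued weights (a straight-through estimator with clipping is assumed for the gradient through the binarization). The squared-error objective is only for illustration.

import numpy as np

def binarize(w):
    # Deterministic sign binarization used during forward/backward passes.
    return np.where(w >= 0, 1.0, -1.0)

def binaryconnect_step(w_real, x, y, lr=0.01):
    w_bin = binarize(w_real)
    pred = x @ w_bin
    error = pred - y                     # toy squared-error objective
    grad = x.T @ error / len(x)          # gradient w.r.t. the binary weights
    # Accumulate in full precision, clipped to the binarization range.
    return np.clip(w_real - lr * grad, -1.0, 1.0)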

Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing

The Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

The Binary-Weight-Network version of AlexNet is compared with recent network binarization methods, BinaryConnect and BinaryNets, and outperforms these methods by large margins on ImageNet, more than 16% in top-1 accuracy.

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Edge Computing: Vision and Challenges

The definition of edge computing is introduced, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge, to materialize the concept of edge computing.

BranchyNet: Fast inference via early exiting from deep neural networks

The BranchyNet architecture is presented, a novel deep network architecture that is augmented with additional side branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
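
A minimal sketch of joint training with side branches is given below: a tiny network with one early exit head and one final head, trained with a weighted sum of per-exit cross-entropy losses. The layer sizes, branch placement, and loss weights are assumptions of this sketch, not the BranchyNet architecture itself.

import torch
import torch.nn as nn

class TwoExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.branch1 = nn.Linear(64, num_classes)      # early exit head
        self.tail = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.branch2 = nn.Linear(64, num_classes)      # final exit head

    def forward(self, x):
        h = self.stem(x)
        return self.branch1(h), self.branch2(self.tail(h))

def joint_loss(logits_per_exit, target, weights=(0.3, 1.0)):
    # Weighted sum of per-exit losses, so every branch is trained jointly.
    ce = nn.CrossEntropyLoss()
    return sum(w * ce(logits, target) for w, logits in zip(weights, logits_per_exit))

model = TwoExitNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
joint_loss(model(x), y).backward()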

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
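
For reference, a minimal basic residual block is sketched below: the stacked convolutions learn a residual F(x) that is added back to the identity shortcut, so the block outputs F(x) + x. The channel count and layer configuration are illustrative assumptions rather than a specific ResNet variant.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)          # identity shortcut

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 8, 8)).shape)   # torch.Size([1, 16, 8, 8])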