Towards Enabling Dynamic Convolution Neural Network Inference for Edge Intelligence

@inproceedings{Adeyemo2022TowardsED,
  title={Towards Enabling Dynamic Convolution Neural Network Inference for Edge Intelligence},
  author={Adewale Adeyemo and Travis Sandefur and Tolulope A. Odetola and Syed Rafay Hasan},
  booktitle={2022 IEEE International Symposium on Circuits and Systems (ISCAS)},
  year={2022},
  pages={1833--1837}
}
Deep learning has achieved great success in numerous real-world applications. Deep learning models, especially Convolution Neural Networks (CNNs), are often prototyped on FPGAs because they offer high power efficiency and reconfigurability. The deployment of CNNs on FPGAs follows a design cycle that requires saving model parameters in on-chip memory during high-level synthesis (HLS). Recent advances in edge intelligence require CNN inference on the edge network to increase…
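The abstract's key idea is decoupling CNN parameters from the synthesized accelerator so they need not be fixed in on-chip memory at HLS time. A minimal Python sketch of that idea, with a hypothetical off-chip parameter store (`off_chip_params`) and a naive single-channel convolution standing in for the FPGA compute kernel; this is an illustration of the concept, not the paper's implementation:

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2-D convolution of a single-channel map x with kernel w."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Hypothetical off-chip parameter store: weights are fetched per layer at
# run time instead of being baked into on-chip memory at synthesis time.
off_chip_params = {
    "conv1": np.random.randn(3, 3),
    "conv2": np.random.randn(3, 3),
}

def dynamic_inference(x, layer_names, store):
    for name in layer_names:
        w = store[name]                   # load parameters on demand
        x = np.maximum(conv2d(x, w), 0)   # conv + ReLU
    return x

y = dynamic_inference(np.random.randn(8, 8), ["conv1", "conv2"], off_chip_params)
print(y.shape)  # (4, 4): two valid 3x3 convolutions shrink 8x8 to 4x4
```

Swapping entries in the store changes the network without re-synthesizing the compute kernel, which is the flexibility the paper targets.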


References

Showing 1-10 of 14 references

DeepSlicing: Collaborative and Adaptive CNN Inference With Low Latency

DeepSlicing is a collaborative and adaptive inference system that adapts to various CNNs and supports customized, flexible, fine-grained scheduling; its Proportional Synchronized Scheduler (PSS) achieves a trade-off between computation and synchronization.

Fused-layer CNN accelerators

This work identifies a previously unexplored dimension in the design space of CNN accelerators: the dataflow across convolutional layers. By modifying the order in which input data are brought on chip, it fuses the processing of multiple CNN layers, enabling caching of intermediate data between the evaluation of adjacent layers.
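The fusion idea above can be sketched in a few lines of Python: one pixel of the second layer's output depends only on a small receptive field of the input, so the layer-1 intermediate for that patch can be computed on the fly and never written back to off-chip memory. This is a conceptual sketch with single-channel valid convolutions, not the accelerator's actual dataflow:

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2-D convolution."""
    kh, kw = w.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def fused_output_pixel(x, w1, w2, i, j):
    """Compute one layer-2 output pixel directly from the input: only the
    (k1+k2-1)-sized input patch feeding that pixel is touched, and the
    layer-1 intermediate tile stays in local 'cache'."""
    k1, k2 = w1.shape[0], w2.shape[0]
    patch = x[i:i + k1 + k2 - 1, j:j + k1 + k2 - 1]  # receptive field of (i, j)
    tile = conv2d(patch, w1)                          # small k2 x k2 tile
    return np.sum(tile * w2)

x = np.random.randn(10, 10)
w1, w2 = np.random.randn(3, 3), np.random.randn(3, 3)

# The fused result matches the unfused layer-by-layer computation.
full = conv2d(conv2d(x, w1), w2)
assert np.isclose(fused_output_pixel(x, w1, w2, 2, 3), full[2, 3])
```

Iterating `fused_output_pixel` over output tiles rather than single pixels amortizes the redundant halo computation, which is the tiling trade-off such accelerators tune.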

Increasing Flexibility of FPGA-based CNN Accelerators with Dynamic Partial Reconfiguration

This work presents a dynamically reconfigurable CNN accelerator architecture that does not sacrifice throughput or classification accuracy, and devises a novel approach, to the best of the authors' knowledge, of hiding the computations of the pooling layers inside the convolutional layers, further improving throughput.

DeeperThings: Fully Distributed CNN Inference on Resource-Constrained Edge Devices

This article proposes DeeperThings, an approach that supports fully distributed CNN inference by partitioning fully-connected layers as well as both feature- and weight-intensive convolutional layers, jointly optimizing memory, computation, and communication demands.

Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference

A memory usage predictor coupled with a search algorithm provides optimized fusing and tiling configurations for an arbitrary set of convolutional layers; results show that this approach can run in less than half the memory, with a speedup of up to 2.78, under severe memory constraints.

FeSHI: Feature Map-Based Stealthy Hardware Intrinsic Attack

This paper exploits the feature maps exchanged between CNN layers as an attack surface, proposing a hardware Trojan (HT)-based attack called FeSHI that uses the statistical distribution of the layer-by-layer feature maps to design two triggers for a stealthy HT with a very low probability of triggering.

MoDNN: Local distributed mobile computing system for Deep Neural Network

MoDNN is a local distributed mobile computing system for DNN applications that partitions already-trained DNN models onto several mobile devices, accelerating DNN computation by alleviating device-level computing cost and memory usage.
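The partitioning idea behind MoDNN-style systems can be illustrated with a two-device split of a fully connected layer: each device holds one column slice of the weight matrix, computes a partial output independently, and the host concatenates the results. A minimal sketch under that assumption (the device count and layer choice are illustrative, not MoDNN's actual scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))   # full weight matrix of an FC layer
x = rng.standard_normal(256)         # input activation vector

# Output-neuron partitioning: device k owns a column slice of W, so each
# device's memory footprint and compute cost is halved.
W_dev = np.split(W, 2, axis=1)       # two 256x32 slices
partials = [x @ Wk for Wk in W_dev]  # computed independently, in parallel
y = np.concatenate(partials)         # host gathers partial outputs

assert np.allclose(y, x @ W)         # matches single-device inference
```

Because column slices share only the input vector, the devices need one broadcast of `x` and no inter-device synchronization until the final gather.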

Convergence of Edge Computing and Deep Learning: A Comprehensive Survey

By consolidating information scattered across the communication, networking, and DL areas, this survey helps readers understand the connections between enabling technologies while promoting further discussion of the fusion of edge intelligence and the intelligent edge, i.e., Edge DL.

Security Analysis of Capsule Network Inference using Horizontal Collaboration

This analysis perturbs the feature maps of different layers of four DNN models (CapsNet, mini-VGGNet, LeNet, and an in-house CNN with the same number of parameters as CapsNet) using two types of noise-based attacks, showing that, as with traditional CNNs, the classification accuracy of CapsNet drops significantly.

Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing

This article first introduces deep learning for IoT into the edge computing environment, then designs a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing.