Generative and Discriminative Deep Belief Network Classifiers: Comparisons Under an Approximate Computing Framework
@article{Ruan2021GenerativeAD,
  title   = {Generative and Discriminative Deep Belief Network Classifiers: Comparisons Under an Approximate Computing Framework},
  author  = {Siqiao Ruan and Ian Colbert and Kenneth Kreutz-Delgado and Srinjoy Das},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2102.00534}
}
The use of Deep Learning hardware algorithms for embedded applications is characterized by challenges such as constraints on device power consumption, availability of labeled data, and limited internet bandwidth for frequent training on cloud servers. To enable low power implementations, we consider efficient bitwidth reduction and pruning for the class of Deep Learning algorithms known as Discriminative Deep Belief Networks (DDBNs) for embedded-device classification tasks. We train DDBNs with…
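As a rough sketch of the bitwidth reduction and pruning the abstract refers to, the snippet below uniformly quantizes a weight matrix to a small number of bits after zeroing its smallest-magnitude entries. The function names, the uniform quantizer, and the 4-bit/80%-sparsity settings are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def quantize_uniform(w, n_bits):
    """Uniformly quantize weights to n_bits (a generic scheme chosen
    for illustration, not necessarily the one used in the paper)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.random.randn(512, 784)          # e.g., one DDBN weight matrix
w_low_power = quantize_uniform(prune_by_magnitude(w, 0.8), n_bits=4)
```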
References
Showing 1-10 of 19 references
AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks
- Computer Science, 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019
This paper proposes the AX-DBN methodology and presents experimental results across several network architectures showing significant power savings, under a user-specified accuracy loss constraint, with respect to ideal full-precision implementations.
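The summary above suggests searching for reduced-precision configurations subject to a user-specified accuracy loss constraint. A greedy sketch of that general idea follows; the evaluate_accuracy callback, the per-layer sweep, and the candidate bitwidths are hypothetical stand-ins rather than the AX-DBN algorithm itself.

```python
def lowest_safe_bitwidths(layers, evaluate_accuracy, max_loss, bitwidths=(8, 6, 4, 2)):
    """Greedily lower each layer's bitwidth while the accuracy drop stays
    within max_loss (an illustrative search, not the AX-DBN procedure)."""
    baseline = evaluate_accuracy({name: 32 for name in layers})
    config = {name: 32 for name in layers}
    for name in layers:
        for bits in bitwidths:               # try progressively fewer bits
            trial = {**config, name: bits}
            if baseline - evaluate_accuracy(trial) <= max_loss:
                config[name] = bits          # keep the lowest passing bitwidth
            else:
                break
    return config
```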
A Fast Learning Algorithm for Deep Belief Nets
- Computer Science, Neural Computation
- 2006
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
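The greedy procedure summarized above trains a stack of RBMs one layer at a time, each new layer fitting the previous layer's hidden activations. A compact numpy sketch of one-step contrastive divergence (CD-1) for a single binary RBM layer is below; the learning rate, epoch count, and omission of bias terms are simplifications for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(data, n_hidden, lr=0.05, epochs=10, seed=0):
    """One-step contrastive divergence (CD-1) for a binary RBM; biases omitted."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        # positive phase: sample hidden units given the data
        h_prob = sigmoid(data @ W)
        h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one reconstruction step
        v_prob = sigmoid(h_samp @ W.T)
        h_prob2 = sigmoid(v_prob @ W)
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
    return W

# stack greedily: layer k is trained on layer k-1's hidden probabilities
x = (np.random.rand(1000, 784) < 0.5).astype(float)
W1 = train_rbm_cd1(x, 256)
W2 = train_rbm_cd1(sigmoid(x @ W1), 64)
```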
Application of Deep Belief Networks for Natural Language Understanding
- Computer Science, IEEE/ACM Transactions on Audio, Speech, and Language Processing
- 2014
The plain DBN-based model gives call-routing classification accuracy equal to the best of the other models; however, using additional unlabeled data for DBN pre-training and combining DBN-learned features with the original features provides significant gains over SVMs, which in turn performed better than both MaxEnt and Boosting.
Learning Algorithms for the Classification Restricted Boltzmann Machine
- Computer Science, Journal of Machine Learning Research
- 2012
It is argued that RBMs can provide a self-contained framework for developing competitive classifiers, and it is shown that competitive classification performance can be reached by appropriately combining discriminative and generative training objectives.
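The combination of discriminative and generative objectives described above is often written as a weighted sum of the two log-likelihoods; the weight symbol alpha in the sketch below is notation assumed here rather than quoted from the paper.

```latex
% Hybrid training objective for a classification RBM on labeled pairs (x_i, y_i):
% a discriminative term plus a generative term weighted by alpha.
\mathcal{L}_{\mathrm{hybrid}}
  = -\sum_i \log p(y_i \mid x_i) \;-\; \alpha \sum_i \log p(x_i, y_i)
```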
Pruning Convolutional Neural Networks for Resource Efficient Inference
- Computer Science, ICLR
- 2017
It is shown that pruning can lead to a more than 10x theoretical (5x practical) reduction in adapted 3D convolutional filters, with a small drop in accuracy in a recurrent gesture classifier.
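The paper ranks convolutional filters by an importance criterion based on a first-order Taylor expansion of the loss and removes the least important ones. The sketch below scores filters by the averaged |activation x gradient| product; the tensor shapes and function names are assumptions for illustration.

```python
import numpy as np

def taylor_filter_saliency(activations, gradients):
    """Rank conv filters by a first-order Taylor criterion:
    saliency ~ |mean over batch and spatial dims of activation * gradient|.
    Shapes assumed to be (batch, channels, H, W)."""
    return np.abs((activations * gradients).mean(axis=(0, 2, 3)))

def filters_to_prune(activations, gradients, n_prune):
    saliency = taylor_filter_saliency(activations, gradients)
    return np.argsort(saliency)[:n_prune]   # indices of least important filters
```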
Representation Learning: A Review and New Perspectives
- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2013
Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
- Computer Science, ICLR
- 2016
This work introduces "deep compression", a three-stage pipeline (pruning, trained quantization, and Huffman coding) whose stages work together to reduce the storage requirements of neural networks by 35x to 49x without affecting their accuracy.
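A minimal sketch of the first two pipeline stages, magnitude pruning followed by k-means weight sharing, is below; the 90% sparsity, the 16-entry codebook, and the tiny k-means loop are illustrative choices, and the Huffman coding stage is omitted.

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means used to build a shared-weight codebook."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = values[assign == j].mean()
    return centroids, assign

w = np.random.randn(4096)
mask = np.abs(w) >= np.quantile(np.abs(w), 0.9)    # stage 1: prune 90% of weights
codebook, idx = kmeans_1d(w[mask], k=16)           # stage 2: 4-bit shared codebook
w_compressed = np.zeros_like(w)
w_compressed[mask] = codebook[idx]                 # stage 3 (Huffman coding) omitted
```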
Medical image analysis using wavelet transform and deep belief networks
- Computer Science, Expert Systems with Applications
- 2017
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
- Computer Science, ArXiv
- 2016
This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5 MB (510x smaller than AlexNet).
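Much of SqueezeNet's parameter saving comes from its Fire module: a 1x1 "squeeze" convolution feeding parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. A minimal PyTorch rendering is below; the channel counts in the usage line are example values, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet Fire module: squeeze with a 1x1 conv, then expand with
    parallel 1x1 and 3x3 convs and concatenate along the channel axis."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1(s)),
                          self.relu(self.expand3(s))], dim=1)

y = Fire(96, 16, 64)(torch.randn(1, 96, 55, 55))   # -> shape (1, 128, 55, 55)
```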
ApproxANN: An approximate computing framework for artificial neural network
- Computer Science, 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE)
- 2015
This work proposes ApproxANN, a novel approximate computing framework for artificial neural networks that characterizes the impact of each neuron on output quality in an effective and efficient manner and judiciously determines how to approximate the computation and memory accesses of less critical neurons to achieve the maximum energy-efficiency gain under a given quality constraint.
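The summary describes ranking neurons by their impact on output quality and approximating the less critical ones. The sketch below takes the crudest form of approximation, skipping low-criticality neurons outright; the criticality scores, keep fraction, and skip strategy are assumptions for illustration, not ApproxANN's actual mechanism.

```python
import numpy as np

def approximate_layer(x, W, criticality, keep_fraction=0.7):
    """Compute only the most critical neurons exactly; zero the rest.
    criticality: one score per output neuron (higher = more critical)."""
    n_keep = int(keep_fraction * W.shape[1])
    critical = np.argsort(criticality)[-n_keep:]
    y = np.zeros((x.shape[0], W.shape[1]))
    y[:, critical] = x @ W[:, critical]     # exact compute for critical neurons only
    return y
```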