Corpus ID: 231741187

Generative and Discriminative Deep Belief Network Classifiers: Comparisons Under an Approximate Computing Framework

@article{Ruan2021GenerativeAD,
  title={Generative and Discriminative Deep Belief Network Classifiers: Comparisons Under an Approximate Computing Framework},
  author={Siqiao Ruan and Ian Colbert and Kenneth Kreutz-Delgado and Srinjoy Das},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.00534}
}
The use of Deep Learning algorithms in embedded hardware applications is characterized by challenges such as constraints on device power consumption, the availability of labeled data, and limited internet bandwidth for frequent training on cloud servers. To enable low-power implementations, we consider efficient bitwidth reduction and pruning for the class of Deep Learning algorithms known as Discriminative Deep Belief Networks (DDBNs) for embedded-device classification tasks. We train DDBNs with…
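The two compression levers named in the abstract, bitwidth reduction and pruning, can be illustrated with a minimal NumPy sketch. The function names, the symmetric uniform quantizer, and the sparsity target below are illustrative assumptions, not the exact scheme used in the paper.

```python
import numpy as np

def quantize_uniform(w, n_bits):
    """Symmetric uniform quantization of weights to n_bits (illustrative)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

# Example: compress a random DDBN-style weight matrix (shapes are assumptions).
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 500))  # e.g., a visible-to-hidden layer
W_small = quantize_uniform(prune_by_magnitude(W, 0.9), n_bits=4)
```

Pruning first and then quantizing, as above, mirrors the usual ordering in low-power inference pipelines: sparsity reduces the number of stored weights, and the reduced bitwidth shrinks each surviving weight.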


References

Showing 1-10 of 19 references
AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks
TLDR: The paper proposes the AX-DBN methodology and presents experimental results across several network architectures showing significant power savings, under a user-specified accuracy-loss constraint, relative to ideal full-precision implementations.
A Fast Learning Algorithm for Deep Belief Nets
TLDR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
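The core update inside that greedy, layer-wise procedure is single-step contrastive divergence (CD-1), applied to one Restricted Boltzmann Machine at a time. A minimal NumPy sketch of CD-1 for a binary RBM follows; the learning rate and helper names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, rng, lr=0.01):
    """One CD-1 update for a binary RBM on a batch of visible vectors v0."""
    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one step of Gibbs sampling back through the model.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Approximate gradient: data statistics minus reconstruction statistics.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * np.mean(v0 - p_v1, axis=0)
    b_hid += lr * np.mean(p_h0 - p_h1, axis=0)
    return W, b_vis, b_hid
```

Once one RBM converges, its hidden activations become the "data" for the next layer, which is what makes the procedure greedy and layer-wise.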
Application of Deep Belief Networks for Natural Language Understanding
TLDR: The plain DBN-based model gives call-routing classification accuracy equal to the best of the other models; however, using additional unlabeled data for DBN pre-training and combining DBN-learned features with the original features provides significant gains over SVMs, which in turn performed better than both MaxEnt and Boosting.
Learning Algorithms for the Classification Restricted Boltzmann Machine
TLDR: It is argued that RBMs can provide a self-contained framework for developing competitive classifiers, and it is shown that competitive classification performance can be reached by appropriately combining discriminative and generative training objectives.
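One standard way to combine the two objectives for a classification RBM, reconstructed here from the classification-RBM literature rather than taken from this summary, is a hybrid loss with a generative weighting term $\alpha$:

$$\mathcal{L}_{\text{hybrid}} = -\sum_{i} \log p(y_i \mid \mathbf{x}_i) \;-\; \alpha \sum_{i} \log p(\mathbf{x}_i, y_i),$$

where the first (discriminative) term trains the classifier directly and the generative term acts as a data-dependent regularizer.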
Pruning Convolutional Neural Networks for Resource Efficient Inference
TLDR: It is shown that pruning can lead to a more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters, with a small drop in accuracy in a recurrent gesture classifier.
Representation Learning: A Review and New Perspectives
TLDR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
TLDR: This work introduces "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that works together to reduce the storage requirements of neural networks by 35x to 49x without affecting their accuracy.
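The trained-quantization stage of such a pipeline is often realized as k-means weight sharing: the surviving weights are clustered, and only small cluster indices plus a codebook are stored. Below is a minimal sketch under that assumption; the cluster count, linear initialization, and function name are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def weight_share(w, n_clusters=16, n_iters=20):
    """Cluster weights with k-means; return codebook and per-weight indices."""
    flat = w.ravel()
    # Linear initialization of centroids across the weight range.
    codebook = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        idx = np.argmin(np.abs(flat[:, None] - codebook[None, :]), axis=1)
        for k in range(n_clusters):
            members = flat[idx == k]
            if members.size:
                codebook[k] = members.mean()
    return codebook, idx.reshape(w.shape)

# With 16 clusters, each weight is stored as a 4-bit index plus a tiny codebook.
```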
Medical image analysis using wavelet transform and deep belief networks
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
TLDR: This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
ApproxANN: An approximate computing framework for artificial neural network
TLDR: This work proposes ApproxANN, a novel approximate computing framework for ANNs that characterizes the impact of neurons on output quality in an effective and efficient manner, and judiciously determines how to approximate the computation and memory accesses of certain less-critical neurons to achieve the maximum energy-efficiency gain under a given quality constraint.