# Large-scale deep unsupervised learning using graphics processors

```bibtex
@inproceedings{Raina2009LargescaleDU,
  title     = {Large-scale deep unsupervised learning using graphics processors},
  author    = {Rajat Raina and Anand Madhavan and A. Ng},
  booktitle = {ICML '09},
  year      = {2009}
}
```

The promise of unsupervised learning methods lies in their potential to use vast amounts of unlabeled data to learn complex, highly nonlinear models with millions of free parameters. We consider two well-known unsupervised learning models, deep belief networks (DBNs) and sparse coding, that have recently been applied to a flurry of machine learning applications (Hinton & Salakhutdinov, 2006; Raina et al., 2007). Unfortunately, current learning algorithms for both models are too slow for large…
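The paper's speedup argument rests on the fact that DBN training (via contrastive divergence) and sparse coding both reduce to large dense matrix operations, which map naturally onto GPU hardware. As a rough illustration of what gets accelerated, here is a minimal sketch of one CD-1 update for a binary RBM in plain Python (biases omitted; all names are illustrative, and in the paper these matrix products run as GPU kernels rather than Python loops):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # W is a list of rows; returns W @ v.
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in W]

def cd1_update(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) step; W[j][i] connects
    visible unit i to hidden unit j."""
    # Positive phase: hidden activations given the data.
    h0 = [sigmoid(a) for a in matvec(W, v0)]
    # Sample hidden states, then reconstruct the visible layer.
    h_sample = [1.0 if random.random() < p else 0.0 for p in h0]
    v1 = [sigmoid(sum(W[j][i] * h_sample[j] for j in range(len(W))))
          for i in range(len(v0))]
    # Negative phase: hidden activations given the reconstruction.
    h1 = [sigmoid(a) for a in matvec(W, v1)]
    # Gradient approximation: <v0 h0> - <v1 h1>.
    return [[W[j][i] + lr * (h0[j] * v0[i] - h1[j] * v1[i])
             for i in range(len(v0))] for j in range(len(W))]

# Tiny example: 4 visible units, 3 hidden units.
W = [[0.01 * (i + j) for i in range(4)] for j in range(3)]
W_new = cd1_update(W, [1.0, 0.0, 1.0, 1.0])
```

The two outer products in the weight update are exactly the dense computations that the paper offloads to the GPU.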

## 646 Citations

Partitioning Large Scale Deep Belief Networks Using Dropout

- Computer Science
- ArXiv
- 2015

This work considers a well-known machine learning model, deep belief networks (DBNs), and proposes an approach that can use the computing clusters in a distributed environment to train large models, while the dense matrix computations within a single machine are sped up using graphics processors (GPU).

Large Scale Distributed Deep Networks

- Computer Science
- NIPS
- 2012

This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
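The core idea of Downpour SGD is data parallelism: each worker computes gradients on its own shard and pushes updates to a shared parameter server without global synchronization. A serial simulation of that scheme, on a toy linear model (round-robin scheduling stands in for asynchrony; all names here are illustrative, not from the paper):

```python
def worker_gradient(shard, w):
    """Mean squared-error gradient for a linear model on one worker's shard."""
    g = [0.0] * len(w)
    for x, y in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += err * xi / len(shard)
    return g

def downpour_sgd(shards, w, steps=400, lr=0.05):
    """Serial simulation of Downpour SGD: at each step one worker fetches
    the current parameters, computes a gradient on its own shard, and
    pushes the update back to the shared parameter store."""
    for t in range(steps):
        shard = shards[t % len(shards)]  # round-robin stands in for
        g = worker_gradient(shard, w)    # asynchronous scheduling
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Data generated from y = 2x + 1, split across 4 simulated workers.
data = [([1.0, x], 2.0 * x + 1.0)
        for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]]
shards = [data[k::4] for k in range(4)]
w = downpour_sgd(shards, [0.0, 0.0])  # converges near [1.0, 2.0]
```

In the real system the workers run concurrently and their updates interleave on the parameter server; the simulation only shows the data-sharded gradient flow.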

Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists

- Computer Science
- Front. Psychol.
- 2013

It is shown how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python).

Deep learning systems as complex networks

- Computer Science
- Journal of Complex Networks
- 2019

This article proposes to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain some insights into the structural and functional properties of the computational graph resulting from the learning process.

Large-scale restricted Boltzmann machines on single GPU

- Computer Science
- 2013 IEEE International Conference on Big Data
- 2013

A novel memory efficient algorithm on single GPU is proposed that can train large-scale RBMs without size restriction and preserve the performance gain of GPU parallel computation.

Unsupervised learning of hierarchical representations with convolutional deep belief networks

- Computer Science
- Commun. ACM
- 2011

The convolutional deep belief network is presented, a hierarchical generative model that scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

- Computer Science
- ICML '09
- 2009

The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.

Large-Scale Deep Belief Nets With MapReduce

- Computer Science
- IEEE Access
- 2014

This paper presents a distributed learning paradigm for RBMs and the backpropagation algorithm using MapReduce, a popular parallel programming model, and demonstrates that the distributed RBMs and DBNs are amenable to large-scale data with good performance in terms of accuracy and efficiency.

A Large-Scale Architecture for Restricted Boltzmann Machines

- Computer Science
- 2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines
- 2010

This paper presents a highly scalable architecture for Deep Belief Net processing on hardware systems that can scale to hundreds of boards of customized logic with near-linear performance gains, and illustrates how the architecture can support sparse networks with dense regions of connections between neighboring sets of neurons.

Building high-level features using large scale unsupervised learning

- Computer Science
- 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2013

Contrary to what appears to be a widely-held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.

## References

Showing 1–10 of 42 references.

Greedy Layer-Wise Training of Deep Networks

- Computer Science
- NIPS
- 2006

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
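The greedy layer-wise strategy described here is simple to express: train the first layer on the raw data, then feed its hidden activations to the next layer as if they were data, and repeat. A minimal sketch, with `train_rbm` as a hypothetical stand-in for the per-layer learner (e.g. contrastive divergence):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_rbm(data, n_hidden):
    """Stand-in for per-layer RBM training: returns a weight matrix
    mapping the layer's input to n_hidden features."""
    n_visible = len(data[0])
    return [[random.gauss(0.0, 0.01) for _ in range(n_visible)]
            for _ in range(n_hidden)]

def encode(W, v):
    # Deterministic hidden activations for one input vector.
    return [sigmoid(sum(w_i * v_i for w_i, v_i in zip(row, v)))
            for row in W]

def greedy_pretrain(data, layer_sizes):
    """Train one layer at a time; each layer sees the activations
    of the layer below as its 'data'."""
    weights, layer_input = [], data
    for n_hidden in layer_sizes:
        W = train_rbm(layer_input, n_hidden)
        weights.append(W)
        layer_input = [encode(W, v) for v in layer_input]
    return weights

data = [[random.random() for _ in range(8)] for _ in range(5)]
stack = greedy_pretrain(data, [6, 4, 2])  # an 8 -> 6 -> 4 -> 2 stack
```

The stacking loop is the whole idea; the per-layer learner is where methods such as CD-1 (and, in the cited paper, its GPU implementation) plug in.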

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

- Computer Science
- ICML '09
- 2009

The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.

A Fast Learning Algorithm for Deep Belief Nets

- Computer Science
- Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Sparse deep belief net model for visual area V2

- Computer Science
- NIPS
- 2007

An unsupervised learning model is presented that faithfully mimics certain properties of visual area V2, and the encoding of these more complex "corner" features matches well with results from Ito & Komatsu's study of biological V2 responses, suggesting that this sparse variant of deep belief networks holds promise for modeling higher-order features.

Efficient sparse coding algorithms

- Computer Science
- NIPS
- 2006

These algorithms are applied to natural images and it is demonstrated that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
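For context, the objective these sparse coding algorithms optimize is typically the standard L1-penalized reconstruction problem (this is the usual formulation, not quoted from the snippet): find basis vectors $b_j$ and sparse codes $a^{(i)}$ minimizing

```latex
\min_{\{b_j\},\,\{a^{(i)}\}} \;
  \sum_i \Big\| x^{(i)} - \sum_j a^{(i)}_j b_j \Big\|_2^2
  \;+\; \beta \sum_i \big\| a^{(i)} \big\|_1
\qquad \text{s.t. } \|b_j\|_2 \le 1 \ \ \forall j
```

The L1 penalty drives most coefficients $a^{(i)}_j$ to exactly zero, which is what produces the sparse, V1-like codes described above.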

Learning Sparse Overcomplete Codes for Images

- Computer Science
- J. VLSI Signal Process.
- 2007

A survey of algorithms that perform dictionary learning and sparse coding is presented and a modified version of the FOCUSS algorithm is presented that can find a non-negative sparse coding in some cases.

Map-Reduce for Machine Learning on Multicore

- Computer Science
- NIPS
- 2006

This work shows that algorithms fitting the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates essentially linear speedup as the number of processors increases.
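The "summation form" idea is that the algorithm's update only needs a sum of per-example statistics, so each core can sum over its own shard and the partial sums are then added. A small sketch on a linear-regression gradient (the sharding and function names are illustrative):

```python
def partial_gradient(shard, w):
    """Per-shard sum of the squared-error gradient for a linear model."""
    g = [0.0] * len(w)
    for x, y in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += err * xi
    return g

def map_reduce_gradient(data, w, n_shards=4):
    shards = [data[k::n_shards] for k in range(n_shards)]   # "map"
    partials = [partial_gradient(s, w) for s in shards]
    return [sum(col) for col in zip(*partials)]             # "reduce"

# Sharded and single-pass sums agree, since the gradient is a plain sum.
data = [([1.0, float(i)], 2.0 * i) for i in range(10)]
g = map_reduce_gradient(data, [0.5, 0.5])
```

Because the gradient decomposes into a sum over examples, the shards can be processed independently on separate cores (or MapReduce workers) and combined with a single addition.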

Emergence of simple-cell receptive field properties by learning a sparse code for natural images

- Computer Science
- Nature
- 1996

It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.

Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition

- Computer Science
- ArXiv
- 2010

This work proposes a simple and efficient algorithm to learn basis functions, which provides a fast and smooth approximator to the optimal representation, achieving even better accuracy than exact sparse coding algorithms on visual object recognition tasks.

Self-taught learning: transfer learning from unlabeled data

- Computer Science
- ICML '07
- 2007

An approach to self-taught learning that applies sparse coding to unlabeled data to construct higher-level features, forming a succinct input representation that significantly improves classification performance.