Reducing the Dimensionality of Data with Neural Networks

@article{Hinton2006ReducingTD,
  title={Reducing the Dimensionality of Data with Neural Networks},
  author={Geoffrey E. Hinton and Ruslan Salakhutdinov},
  journal={Science},
  year={2006},
  volume={313},
  pages={504--507}
}
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than… 
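
For orientation, a minimal sketch of such an autoencoder follows: an encoder narrowing to a small central code layer and a mirrored decoder trained by gradient descent to reconstruct the input. This is only an illustrative sketch (PyTorch assumed; the layer sizes echo the MNIST example in the paper, activations are simplified, and the paper's linear code layer and pretraining are omitted here), not the authors' implementation:

# Minimal deep autoencoder sketch; sizes and activations are illustrative.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, dims=(784, 1000, 500, 250, 30)):
        super().__init__()
        enc, dec = [], []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        for d_in, d_out in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        code = self.encoder(x)          # low-dimensional code (central layer)
        return self.decoder(code), code

model = Autoencoder()
opt = torch.optim.SGD(model.parameters(), lr=0.1)    # gradient-descent fine-tuning
x = torch.rand(64, 784)                               # stand-in for a data batch
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)               # reconstruct the input
opt.zero_grad(); loss.backward(); opt.step()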

Dimensionality Reduction Using Neural Networks

TLDR
Deep autoencoders with pretraining outperformed those trained without it, and the fine-tuning that followed the pretraining reduced the data dimensionality very efficiently.
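
The greedy layer-wise pretraining can be sketched as follows. This is a simplified stand-in (the original recipe pretrains with RBMs; here each layer is pretrained as a shallow autoencoder, and all sizes, epochs, and learning rates are illustrative):

# Greedy layer-wise pretraining sketch (PyTorch assumed), followed by fine-tuning.
import torch
import torch.nn as nn

def pretrain_layer(layer, data, epochs=5, lr=0.1):
    # Train `layer` plus a temporary decoder to reconstruct the layer's own input.
    decoder = nn.Linear(layer.out_features, layer.in_features)
    opt = torch.optim.SGD(list(layer.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        h = torch.sigmoid(layer(data))
        loss = nn.functional.mse_loss(torch.sigmoid(decoder(h)), data)
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(layer(data)).detach()       # activations feed the next layer

dims = [784, 500, 250, 30]
layers = [nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])]
x = torch.rand(256, 784)                              # stand-in data
h = x
for layer in layers:                                  # greedy: one layer at a time
    h = pretrain_layer(layer, h)
# The pretrained layers now initialize the deep autoencoder before joint fine-tuning.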

Training neural networks on high-dimensional data using random projection

TLDR
This work studies two variants of RP layers: one where the weights are fixed, and one where they are fine-tuned during network training, and demonstrates that DNNs with RP layer achieve competitive performance on high-dimensional real-world datasets.
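
A minimal sketch of such an RP layer (PyTorch assumed; sizes and initialization are illustrative choices, not values from the paper): a Gaussian random matrix maps the high-dimensional input down to k dimensions, and a flag selects between the fixed and fine-tuned variants.

import torch
import torch.nn as nn

D, k = 20_000, 500                         # illustrative input and projection sizes
rp = nn.Linear(D, k, bias=False)
nn.init.normal_(rp.weight, std=1.0 / k ** 0.5)   # Gaussian random projection

fine_tune_rp = False                       # the two variants: fixed vs fine-tuned RP weights
rp.weight.requires_grad_(fine_tune_rp)

model = nn.Sequential(rp, nn.ReLU(), nn.Linear(k, 256), nn.ReLU(), nn.Linear(256, 10))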

A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices

TLDR
A new deep neural network architecture is presented, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder, demonstrating experimentally that classification performance does not deteriorate when the autoencoder is replaced with a computationally efficient sparse dimensionality-reduction matrix.
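
One common sparse construction of such an embedding is an Achlioptas-style matrix with entries in {+1, 0, -1}; the sketch below uses that construction for illustration, and the paper's exact matrix may differ:

import numpy as np

def sparse_rp_matrix(d_in, d_out, s=3, rng=np.random.default_rng(0)):
    # Entries are +1, 0, -1 with probabilities 1/(2s), 1-1/s, 1/(2s), scaled to
    # roughly preserve norms in expectation.
    vals = rng.choice([1.0, 0.0, -1.0], size=(d_out, d_in),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s / d_out) * vals

R = sparse_rp_matrix(d_in=10_000, d_out=200)
x = np.random.rand(10_000)
z = R @ x            # low-complexity embedding fed to the downstream classifier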

Index-learning of unsupervised low dimensional embeddings

TLDR
A simple unsupervised learning method for creating low-dimensional embeddings on datasets where a low-dimensional representation necessarily discards so much information that it is unreasonable to attempt to reconstruct the input from it.

Dimensionality Reduction Applied to Time Response of Linear Systems Using Autoencoders

TLDR
Results show that it is possible to use a deep autoencoder to capture the behavior of a dynamical system in its latent layer and to form a compact representation of the time response of linear systems.
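
As a rough illustration of the setup (not the paper's systems or parameters), one can generate step responses of random stable second-order discrete-time systems and use the flattened responses as input vectors for an autoencoder like the one sketched above:

import numpy as np

rng = np.random.default_rng(0)
T = 200                                    # samples per response

def step_response(a1, a2, b=1.0):
    # y[t] = a1*y[t-1] + a2*y[t-2] + b*u[t], with a unit step input u = 1.
    y = np.zeros(T)
    for t in range(2, T):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b
    return y

# Coefficient ranges chosen so the sampled systems stay stable (illustrative only).
data = np.stack([step_response(rng.uniform(0.0, 0.9), rng.uniform(-0.5, 0.0))
                 for _ in range(1000)])
print(data.shape)                          # (1000, 200): inputs for the autoencoder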

Self-supervised Dimensionality Reduction with Neural Networks and Pseudo-labeling

TLDR
This work proposes a deep learning DR method called Self-Supervised Network Projection (SSNP), which performs dimensionality reduction based on pseudo-labels obtained from clustering, and shows that SSNP produces better cluster separation than autoencoders, provides out-of-sample, inverse-mapping, and clustering capabilities, and is very fast and easy to use.
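
The pseudo-labeling idea can be sketched as follows (PyTorch and scikit-learn assumed; the architecture, cluster count, and losses are illustrative, not SSNP's exact configuration): cluster the data to obtain pseudo-labels, then train a network whose 2-D bottleneck is supervised jointly by a classification head on the pseudo-labels and a reconstruction head that supplies the inverse mapping.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

X = torch.rand(2000, 50)
pseudo = torch.as_tensor(KMeans(n_clusters=10, n_init=10).fit_predict(X.numpy()),
                         dtype=torch.long)

enc = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 2))   # 2-D projection
clf = nn.Linear(2, 10)                        # predicts pseudo-labels from the projection
dec = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 50))   # inverse mapping
opt = torch.optim.Adam([*enc.parameters(), *clf.parameters(), *dec.parameters()], lr=1e-3)

for _ in range(100):
    z = enc(X)
    loss = (nn.functional.cross_entropy(clf(z), pseudo)
            + nn.functional.mse_loss(dec(z), X))
    opt.zero_grad(); loss.backward(); opt.step()
# enc(X) gives a cluster-aware 2-D projection; dec approximates the inverse map.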

Dimensionality compression and expansion in Deep Neural Networks

TLDR
This work contributes by shedding light on the success of deep neural networks in disentangling data in high-dimensional space while achieving good generalization, and invites new learning strategies focused on optimizing measurable geometric properties of learned representations, beginning with their intrinsic dimensionality.
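
One widely used proxy for the intrinsic dimensionality of a layer's representation is the participation ratio of the covariance eigenvalues; the sketch below uses that estimator for illustration, though the paper's exact measure may differ.

import numpy as np

def participation_ratio(acts):
    # (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the activation covariance.
    acts = acts - acts.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(acts, rowvar=False))
    return eig.sum() ** 2 / (eig ** 2).sum()

layer_activations = np.random.rand(500, 100)    # stand-in for recorded layer activations
print(participation_ratio(layer_activations))   # near the full dimensionality for isotropic
                                                # noise, much lower for compressed representations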

Deep Bottleneck Classifiers in Supervised Dimension Reduction

TLDR
This work proposes using a deep bottlenecked neural network for supervised dimension reduction: instead of trying to reproduce the data, the network is trained to perform classification.
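
A minimal sketch of the idea (PyTorch assumed; layer sizes are illustrative): train the network with a classification loss and read the reduced representation off the narrow bottleneck layer.

import torch
import torch.nn as nn

bottleneck = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 2))  # 2-D bottleneck
head = nn.Sequential(nn.ReLU(), nn.Linear(2, 10))                              # class scores
model = nn.Sequential(bottleneck, head)

x, y = torch.rand(64, 784), torch.randint(0, 10, (64,))
loss = nn.functional.cross_entropy(model(x), y)     # classification, not reconstruction
loss.backward()
# After training, bottleneck(x) is the supervised low-dimensional embedding.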

From Principal Subspaces to Principal Components with Linear Autoencoders

TLDR
This paper shows how to recover the principal component loading vectors from the weights of a linear autoencoder, whose learned weights span the principal subspace but are not themselves the loading vectors.
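
One way to go from the learned subspace to the individual components (an illustrative recovery, not necessarily the paper's exact procedure): orthonormalize the decoder matrix, project the data onto it, and diagonalize within the subspace.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))   # correlated data
X -= X.mean(axis=0)

# True leading loadings, and a stand-in for a trained decoder that only spans their subspace.
V = np.linalg.svd(X, full_matrices=False)[2].T
W_dec = V[:, :2] @ rng.standard_normal((2, 2))

Q = np.linalg.qr(W_dec)[0]                        # orthonormal basis of the learned subspace
evals, R = np.linalg.eigh(Q.T @ np.cov(X, rowvar=False) @ Q)
loadings = Q @ R[:, ::-1]                         # sort by decreasing variance
print(np.abs(loadings.T @ V[:, :2]))              # ~identity, up to sign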

Random Projection in Deep Neural Networks

This work investigates the ways in which deep learning methods can benefit from random projection (RP), a classic linear dimensionality reduction method. We focus on two areas where, as we have
...

References

Showing 1-10 of 31 references

Dimension Reduction by Local Principal Component Analysis

TLDR
A local linear approach to dimension reduction that provides accurate representations and is fast to compute is developed, and the local linear techniques are shown to outperform neural network implementations.
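
A minimal sketch of local PCA (scikit-learn assumed; cluster count and dimensionality are illustrative): partition the data, then fit a separate low-dimensional PCA within each cluster.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.rand(3000, 20)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)
local_models = {c: PCA(n_components=2).fit(X[labels == c]) for c in range(8)}

# Encode a cluster's points with that cluster's own local linear map:
Z0 = local_models[0].transform(X[labels == 0])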

Nonlinear dimensionality reduction by locally linear embedding.

TLDR
Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
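
LLE is available off the shelf; a short usage sketch (scikit-learn assumed; the neighborhood size is a tunable hyperparameter, not a value from the paper):

import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.rand(1000, 50)                        # stand-in for high-dimensional inputs
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)                            # neighborhood-preserving 2-D embedding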

A Fast Learning Algorithm for Deep Belief Nets

TLDR
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
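
The building block of that greedy procedure is an RBM trained with contrastive divergence; a minimal CD-1 sketch (NumPy; sizes, learning rate, and iteration count are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 784, 256, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

data = (rng.random((100, n_vis)) < 0.1).astype(float)    # stand-in binary batch

for _ in range(10):                                       # a few CD-1 updates
    v0 = data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)    # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                        # one-step reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)     # positive minus negative statistics
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
# The hidden probabilities p_h0 become the "data" for training the next RBM in the stack.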

A global geometric framework for nonlinear dimensionality reduction.

TLDR
An approach to dimensionality reduction that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
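
Isomap (geodesic distances on a nearest-neighbor graph followed by classical MDS) is also available off the shelf; a short usage sketch (scikit-learn assumed; the neighborhood size is an illustrative choice):

import numpy as np
from sklearn.manifold import Isomap

X = np.random.rand(1000, 50)
Y = Isomap(n_neighbors=10, n_components=2).fit_transform(X)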

Replicator neural networks for universal optimal source coding.

TLDR
A theorem shows that a class of replicator networks can, through the minimization of mean squared reconstruction error, carry out optimal data compression for arbitrary data vector sources.

In Advances in Neural Information Processing Systems


Neural networks and physical systems with emergent collective computational abilities.

  • J. Hopfield
  • Computer Science
    Proceedings of the National Academy of Sciences of the United States of America
  • 1982
TLDR
A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
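
The content-addressable behavior can be sketched with a few lines of a Hopfield-style network (NumPy; pattern count, network size, and update schedule are illustrative): store ±1 patterns with a Hebbian rule, then recall a full memory from a corrupted probe.

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N                     # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

probe = patterns[0].copy()
probe[:30] = rng.choice([-1.0, 1.0], size=30)       # corrupt part of the stored memory
state = probe
for _ in range(10):                                  # simple synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1.0
print((state == patterns[0]).mean())                # fraction of bits recovered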

Neural Computation

TLDR
The nervous system is able to develop by combining, on the one hand, only a limited amount of genetic information and, on the other hand, the input it receives, and it might be possible to develop a brain from there.

Proceedings Seventh International Conference on Document Analysis and Recognition

  • Computer Science
    Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings.
  • 2003
The following topics are dealt with: document analysis and recognition; multiple classifiers; feature analysis; document understanding; hidden Markov models; text segmentation; character recognition;