Reducing the Dimensionality of Data with Neural Networks

@article{Hinton2006ReducingTD,
  title={Reducing the Dimensionality of Data with Neural Networks},
  author={Geoffrey E. Hinton and Ruslan Salakhutdinov},
  journal={Science},
  year={2006},
  volume={313},
  pages={504--507}
}
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
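As an illustration of the idea in the abstract, here is a minimal numpy sketch of an autoencoder with a small central code layer, trained by plain gradient descent to reconstruct its input. The layer sizes, learning rate, and toy data are assumptions made for the example; the paper's actual networks are much deeper and are initialized by layer-wise pretraining before fine-tuning, which is omitted here.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 samples in 20 dimensions that really live on a 3-D subspace (assumed).
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

d, k = X.shape[1], 3                      # input dimension, size of the small central code layer
W1 = rng.normal(scale=0.1, size=(d, k))   # encoder weights
b1 = np.zeros(k)
W2 = rng.normal(scale=0.1, size=(k, d))   # decoder weights
b2 = np.zeros(d)
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    code = sigmoid(X @ W1 + b1)           # low-dimensional code
    recon = code @ W2 + b2                # linear reconstruction
    err = recon - X                       # d(loss)/d(recon) for squared error
    # Backpropagate the reconstruction error through decoder and encoder.
    gW2 = code.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dcode = err @ W2.T * code * (1 - code)
    gW1 = X.T @ dcode / len(X)
    gb1 = dcode.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final reconstruction MSE:", np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - X) ** 2))
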
Dimensionality Reduction Using Neural Networks
Dimensionality reduction is a method of obtaining the information from a high-dimensional feature space using fewer intrinsic dimensions. Reducing the dimensionality of high-dimensional data is good for…
Training neural networks on high-dimensional data using random projection
This work studies two variants of RP layers: one where the weights are fixed, and one where they are fine-tuned during network training, and demonstrates that DNNs with RP layers achieve competitive performance on high-dimensional real-world datasets.
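A minimal sketch of the fixed-weight variant described above: the input is multiplied by an untrained random Gaussian matrix, and only the layers that follow would be learned. The dimensions and scaling below are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

d_in, d_proj = 10_000, 256        # raw input dimension, projected dimension (assumed)
X = rng.normal(size=(32, d_in))   # a batch of high-dimensional inputs

# Fixed random projection layer: Gaussian entries, scaled so that squared
# distances are approximately preserved (Johnson-Lindenstrauss style).
R = rng.normal(size=(d_in, d_proj)) / np.sqrt(d_proj)

X_proj = X @ R                    # output of the RP layer, fed to the trainable network
print(X_proj.shape)               # (32, 256)
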
A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices
A new deep neural network architecture is presented, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder, demonstrating experimentally that classification performance does not deteriorate when the autoencoder is replaced with a computationally efficient sparse dimensionality-reduction matrix.
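A sketch of the kind of sparse dimensionality-reduction matrix the entry refers to, using the standard Achlioptas/Li-style construction (entries +1, 0, or -1 with most entries zero). The sparsity level and sizes are assumptions, not necessarily the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

d_in, d_proj = 10_000, 256
s = 100                            # sparsity: roughly one in s entries is nonzero (assumed)

# Sparse random matrix: entries are +1, 0, or -1 with probabilities
# 1/(2s), 1 - 1/s, 1/(2s), then rescaled. In practice this would be stored
# as a sparse matrix; a dense array keeps the sketch short.
signs = rng.choice([1.0, 0.0, -1.0], size=(d_in, d_proj),
                   p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R = np.sqrt(s / d_proj) * signs

X = rng.normal(size=(32, d_in))
X_low = X @ R                      # low-complexity embedding replacing a stacked autoencoder
print(X_low.shape)                 # (32, 256)
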
Index-learning of unsupervised low dimensional embeddings
We introduce a simple unsupervised learning method for creating low-dimensional embeddings. Autoencoders work by simultaneously learning how to encode the input to a low-dimensional representation…
Dimensionality Reduction Applied to Time Response of Linear Systems Using Autoencoders
Results show that it is possible to use a deep autoencoder to capture the behavior of a dynamical system in its latent layer and to produce a compact representation of the time response of linear systems.
Self-supervised Dimensionality Reduction with Neural Networks and Pseudo-labeling
This work proposes a deep learning dimensionality-reduction method called Self-Supervised Network Projection (SSNP), which performs the reduction based on pseudo-labels obtained from clustering, and shows that SSNP produces better cluster separation than autoencoders, offers out-of-sample, inverse-mapping, and clustering capabilities, and is very fast and easy to use.
Dimensionality compression and expansion in Deep Neural Networks
This work sheds light on the success of deep neural networks in disentangling data in high-dimensional space while achieving good generalization, and invites new learning strategies focused on optimizing measurable geometric properties of learned representations, beginning with their intrinsic dimensionality.
Deep Bottleneck Classifiers in Supervised Dimension Reduction
This work proposes using a deep bottlenecked neural network for supervised dimension reduction: instead of trying to reproduce the data, the network is trained to perform classification.
From Principal Subspaces to Principal Components with Linear Autoencoders
  • E. Plaut
  • Mathematics, Computer Science
  • ArXiv
  • 2018
This paper shows how to recover the principal component loading vectors from the weights of a linear autoencoder, even though those weights are not themselves identical to the loading vectors.
1 DIMENSIONALITY REDUCTION USING NEURAL NETWORKS
A multi-layer neural network with multiple hidden layers was trained as an autoencoder using steepest descent, scaled conjugate gradient, and Alopex algorithms. These algorithms were used in different…

References

Showing 1-10 of 43 references.
Dimension Reduction by Local Principal Component Analysis
A local linear approach to dimension reduction that provides accurate representations and is fast to compute is developed, and it is shown that the local linear techniques outperform neural network implementations.
Nonlinear dimensionality reduction by locally linear embedding.
Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and learns the global structure of nonlinear manifolds.
A Fast Learning Algorithm for Deep Belief Nets
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
A global geometric framework for nonlinear dimensionality reduction.
An approach to solving dimensionality reduction problems is described that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
Replicator neural networks for universal optimal source coding.
A theorem shows that a class of replicator networks can, through the minimization of mean squared reconstruction error, carry out optimal data compression for arbitrary data vector sources.
Learning sets of filters using back-propagation
Further research is described on back-propagation for layered networks of deterministic, neuron-like units, along with an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise.
In Advances in Neural Information Processing Systems
Bill Baird, Publications. References: [1] B. Baird. Bifurcation analysis of oscillating neural network model of pattern recognition in the rabbit olfactory bulb. In D. … [3] B. Baird. Bifurcation analysis…
Neural networks and physical systems with emergent collective computational abilities.
  • J. Hopfield
  • Computer Science, Medicine
  • Proceedings of the National Academy of Sciences of the United States of America
  • 1982
A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Neural Computation
Lecture Notes for the MSc/DTC module. The brain is a complex computing machine which has evolved to give the fittest output to a given input. Neural computation has as its goal to describe the function of…
Machine learning
Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.