# Reducing the Dimensionality of Data with Neural Networks

@article{Hinton2006ReducingTD, title={Reducing the Dimensionality of Data with Neural Networks}, author={Geoffrey E. Hinton and Ruslan Salakhutdinov}, journal={Science}, year={2006}, volume={313}, pages={504-507} }

High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than…
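
As a minimal illustration of the abstract's idea (not the paper's deep, RBM-pretrained networks), a single-code-layer autoencoder trained by plain gradient descent can be sketched in NumPy; the toy dataset and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3-D points lying near a 1-D curve, so a 2-D code suffices
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t**2, np.sin(2 * t)]) + 0.01 * rng.normal(size=(200, 3))

d_in, d_code = 3, 2
W1 = 0.1 * rng.normal(size=(d_in, d_code)); b1 = np.zeros(d_code)
W2 = 0.1 * rng.normal(size=(d_code, d_in)); b2 = np.zeros(d_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)        # small central code layer
    return H, H @ W2 + b2           # linear reconstruction

_, X_hat = forward(X)
mse0 = np.mean((X - X_hat) ** 2)    # reconstruction error before training

lr = 0.1
for _ in range(5000):               # plain gradient descent on squared error
    H, X_hat = forward(X)
    err = (X_hat - X) / len(X)      # gradient of the mean squared error
    gW2, gb2 = H.T @ err, err.sum(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, X_hat = forward(X)
mse = np.mean((X - X_hat) ** 2)     # reconstruction error after training
```

With a favorable random initialization this small network trains fine; the paper's point is that deeper autoencoders generally do not, which is what the pretraining scheme addresses.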

## 16,166 Citations

### Dimensionality Reduction Using Neural Networks

- Computer Science
- 2007

Deep autoencoders showed improvement when pretrained, compared with those trained without pretraining, and the fine-tuning that followed the pretraining was able to reduce the data dimensionality very efficiently.

### A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices

- Computer Science, ICONIP
- 2016

A new deep neural network architecture is presented, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder. Experiments demonstrate that classification performance does not deteriorate when the autoencoder is replaced with a computationally efficient sparse dimensionality-reduction matrix.
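
The idea of replacing a learned encoder with a fixed sparse matrix can be sketched as follows; the specific {−1, 0, +1} construction below is the classic Achlioptas sparse random projection, used here as a stand-in for the paper's embedding:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 64                   # input and reduced dimensionality

# Sparse random projection: entries are 0 with probability 2/3,
# so two thirds of the multiplies can be skipped entirely.
R = rng.choice([-1.0, 0.0, 1.0], size=(d, k), p=[1 / 6, 2 / 3, 1 / 6])
R *= np.sqrt(3.0 / k)             # scale so distances are unbiased

X = rng.normal(size=(20, d))      # 20 random high-dimensional points
Z = X @ R                         # low-complexity embedding, no training

# Johnson-Lindenstrauss: pairwise distances are roughly preserved
D_hi = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D_lo = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
mask = ~np.eye(len(X), dtype=bool)
ratios = D_lo[mask] / D_hi[mask]
```

Because the matrix is fixed and sparse, the embedding costs a fraction of a dense matrix multiply and needs no training at all, which is what makes it an attractive replacement for the autoencoder front end.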

### Index-learning of unsupervised low dimensional embeddings

- Computer Science
- 2014

A simple unsupervised learning method for creating low-dimensional embeddings, aimed at datasets where producing a low-dimensional representation requires discarding so much information that it is unreasonable to attempt to reconstruct the input from any low-dimensional representation.

### Dimensionality Reduction Applied to Time Response of Linear Systems Using Autoencoders

- Computer Science, 2019 IEEE Colombian Conference on Applications in Computational Intelligence (ColCACI)
- 2019

Results show that it is possible to use a deep autoencoder to capture the dynamical behavior of a system in its latent layer and to obtain a compact representation of the time response of linear systems.

### Self-supervised Dimensionality Reduction with Neural Networks and Pseudo-labeling

- Computer Science, VISIGRAPP
- 2021

This work proposes a deep learning DR method called Self-Supervised Network Projection (SSNP), which performs dimensionality reduction based on pseudo-labels obtained from clustering. SSNP is shown to produce better cluster separation than autoencoders, to offer out-of-sample, inverse-mapping, and clustering capabilities, and to be fast and easy to use.

### Dimensionality compression and expansion in Deep Neural Networks

- Computer Science, arXiv
- 2019

This work sheds light on the success of deep neural networks in disentangling data in high-dimensional space while achieving good generalization, and it motivates new learning strategies focused on optimizing measurable geometric properties of learned representations, beginning with their intrinsic dimensionality.

### From Principal Subspaces to Principal Components with Linear Autoencoders

- Computer Science, arXiv
- 2018

This paper shows how to recover the loading vectors from the autoencoder weights, which are not identical to the principal component loading vectors.
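
The gap between "principal subspace" and "principal components" can be illustrated with a small NumPy sketch (a hypothetical setup, not the paper's exact procedure): a linear autoencoder only converges to some rotated basis of the principal subspace, and an SVD step recovers the ordered loading vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data with a dominant 2-D principal subspace inside 5-D
basis = np.linalg.qr(rng.normal(size=(5, 2)))[0].T   # orthonormal rows
latent = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])
X = latent @ basis + 0.01 * rng.normal(size=(500, 5))
X -= X.mean(axis=0)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
pcs = Vt[:2]                      # true loading vectors

# A linear autoencoder recovers the subspace only up to rotation;
# simulate its decoder weights as an arbitrary rotation of the PCs.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
W = R @ pcs                       # rows span the subspace, but != PCs

# Recover the loadings: SVD of the codes, re-expressed in input space
Z = X @ W.T                       # codes produced by this "encoder"
_, _, Vh = np.linalg.svd(Z, full_matrices=False)
recovered = Vh @ W                # rows now match pcs up to sign

align = np.abs(recovered @ pcs.T) # ~ identity matrix
```

The simulated rotation stands in for whatever invertible mixing the trained autoencoder happens to learn; the SVD of the code matrix undoes it.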

### Nonlinear Sufficient Dimension Reduction with a Stochastic Neural Network

- Computer Science, arXiv
- 2022

A new type of stochastic neural network is proposed under a rigorous probabilistic framework, and it is shown that it can be used for sufficient dimension reduction on large-scale data.

### Random Projection in Deep Neural Networks

- Computer Science, arXiv
- 2018

This work investigates the ways in which deep learning methods can benefit from random projection (RP), a classic linear dimensionality reduction method. We focus on two areas where, as we have…

### Dimensionality Reduction Using Neural Networks

- Computer Science

A multi-layer neural network with multiple hidden layers was trained as an autoencoder using steepest descent, scaled conjugate gradient, and Alopex algorithms. Results indicate that while pretraining is important for obtaining good results, the pretraining approach used by Hinton et al. obtains lower RMSE than other methods.

## References

Showing 1-10 of 33 references

### Dimension Reduction by Local Principal Component Analysis

- Computer Science, Neural Computation
- 1997

A local linear approach to dimension reduction that provides accurate representations and is fast to compute is developed and it is shown that the local linear techniques outperform neural network implementations.
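
The local-PCA idea (partition the data, then fit a principal direction within each region) can be sketched in NumPy; the two-segment dataset and the fixed partition below are illustrative assumptions, since the paper pairs the per-region PCA with a clustering step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy line segments with different orientations: no single
# global PCA direction fits both, but one direction per region does.
t = rng.uniform(-1, 1, size=(100, 1))
A = np.hstack([t, 0.05 * rng.normal(size=(100, 1))]) + [2.0, 0.0]
B = np.hstack([0.05 * rng.normal(size=(100, 1)), t]) + [-2.0, 0.0]
X = np.vstack([A, B])

def top_pc(P):
    """Leading principal direction of one local region."""
    P = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[0]

# Partition step (the paper clusters; here the split is known upfront)
regions = [X[:100], X[100:]]
local_dirs = [top_pc(P) for P in regions]
# local_dirs[0] is nearly horizontal, local_dirs[1] nearly vertical
```

Each region gets its own low-dimensional linear model, which is what makes the approach both accurate on curved data and cheap to compute.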

### Nonlinear dimensionality reduction by locally linear embedding.

- Computer Science, Science
- 2000

Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.

### A Fast Learning Algorithm for Deep Belief Nets

- Computer Science, Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

### Replicator neural networks for universal optimal source coding.

- Computer Science, Science
- 1995

A theorem shows that a class of replicator networks can, through the minimization of mean squared reconstruction error, carry out optimal data compression for arbitrary data vector sources.

### Neural networks and physical systems with emergent collective computational abilities.

- Computer Science, Proceedings of the National Academy of Sciences of the United States of America
- 1982

A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.

### Neural Computation

- Biology, Artificial Intelligence
- 1989

The nervous system is able to develop by combining, on the one hand, only a limited amount of genetic information and, on the other hand, the input it receives, and from this it might be possible to develop a brain.

### Machine learning

- Computer Science, CSUR
- 1996

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.

### Parallel Distributed Processing Volume 1: Foundations

- Physics
- 1987


### IL-13 receptor α2 reduces granulomatous inflammation in schistosomiasis and prolongs host survival [English] / Mentink-Kane MM, Cheever AW, Thompson RW, et al. // Proc Natl Acad Sci U S A

- Biology
- 2005

A dynamic balance between invading pathogens and their hosts allows the pathogen to parasitize the host successfully without killing it, an important feature of many parasitic infections. In many helminth infections, including Schistosoma mansoni, the persistent inflammatory response does more harm to the host than the pathogen itself, so dampening the host immune response is of major importance. After S. mansoni infection, the host activates CD4+ Th2 cells, which secrete IL-4, IL-5, and IL-13. Recent studies show that IL-13 is an important regulator of liver fibrosis.