# A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices

@inproceedings{Matsumoto2016ADN, title={A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices}, author={Wataru Matsumoto and Manabu Hagiwara and Petros T. Boufounos and Kunihiko Fukushima and Toshisada Mariyama and Xiongxin Zhao}, booktitle={ICONIP}, year={2016} }

We present a new deep neural network architecture, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder. We regard autoencoders as an information-preserving dimensionality reduction method, similar to random projections in compressed sensing. Thus, exploiting recent theory on sparse matrices for dimensionality reduction, we demonstrate experimentally that classification performance does not…
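The core idea, replacing a learned autoencoder embedding with a fixed sparse random matrix, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the sparsity level and scaling are common conventions for sparse random projections, and all names and dimensions here are assumptions:

```python
import numpy as np

def sparse_random_projection(d_in, d_out, density=0.1, seed=0):
    """Fixed sparse random matrix for dimensionality reduction.

    Nonzero entries are +/- 1/sqrt(density * d_out) so that the
    projection approximately preserves Euclidean norms in expectation.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random((d_out, d_in)) < density           # sparse support
    signs = rng.choice([-1.0, 1.0], size=(d_out, d_in))  # random signs
    scale = 1.0 / np.sqrt(density * d_out)
    return mask * signs * scale

# Embed 784-dim inputs (e.g. flattened MNIST digits) into 128 dims;
# the embedding W is fixed, never trained, unlike an autoencoder.
W = sparse_random_projection(784, 128)
x = np.random.default_rng(1).normal(size=784)
z = W @ x
print(z.shape)  # (128,)
```

Because each entry is nonzero with probability `density` and has second moment `1/d_out`, the expected squared norm of `W @ x` equals that of `x`, which is the information-preserving property the abstract refers to.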

## 4 Citations

Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains

- Computer Science, Biology · PLoS Comput. Biol.
- 2016

This method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images, and first-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates.

State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats

- Biology, Computer Science · Front. Neurosci.
- 2017

It is found that using state-dependent algorithms that tracked the dynamics of ongoing activity increased the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system.

A K-Means Clustering Approach for PCA-Based Web Service QoS Prediction

- Computer Science · 2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)
- 2019

A k-means clustering method is presented to predict Web service QoS through Principal Component Analysis (PCA); results indicate that the method achieves higher prediction accuracy on sparse matrices than other conventional methods.
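The pipeline this entry describes, PCA to compress a sparse QoS matrix followed by k-means over the reduced representation, can be sketched in plain NumPy. The data, dimensions, and helper below are hypothetical and do not reproduce the paper's preprocessing:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign to nearest centroid, then recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# PCA step: project a (users x services) QoS matrix onto top components.
rng = np.random.default_rng(1)
Q = rng.normal(size=(60, 30))          # stand-in for the QoS matrix
Qc = Q - Q.mean(axis=0)
U, S, Vt = np.linalg.svd(Qc, full_matrices=False)
Z = Qc @ Vt[:5].T                      # 5-dim user representation
labels, _ = kmeans(Z, k=3)             # cluster users in reduced space
print(labels.shape)  # (60,)
```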

Fuzzy Removing Redundancy Restricted Boltzmann Machine: Improving Learning Speed and Classification Accuracy

- Computer Science · IEEE Transactions on Fuzzy Systems
- 2020

A fuzzy removing redundancy restricted Boltzmann machine (F3RBM) is developed, which improves classification accuracy and learning speed over general classifiers; the experimental results show that the feature extraction capability of FRBM and F3RBM is better than that of RBM.

## References


Reducing the Dimensionality of Data with Neural Networks

- Computer Science · Science
- 2006

This work describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
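The PCA baseline that deep autoencoders outperform in this work can be written directly with an SVD. A minimal sketch on synthetic data (illustrative only; names and dimensions are assumptions):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # (n_samples, k) codes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
codes = pca_reduce(X, 2)
print(codes.shape)  # (100, 2)
```

An autoencoder replaces the linear map `Vt[:k].T` with a learned nonlinear encoder, which is why it can produce much better low-dimensional codes than this linear projection.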

A Fast Learning Algorithm for Deep Belief Nets

- Computer Science · Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Sparse Recovery Using Sparse Random Matrices

- Computer Science · LATIN
- 2010

An overview of results in the area is given, and a new algorithm, called Sequential Sparse Matching Pursuit (SSMP), is described, which works well on real data, with the recovery quality often outperforming that of more complex algorithms, such as ℓ1 minimization.

Random Projections of Smooth Manifolds

- Computer Science, Mathematics · Found. Comput. Math.
- 2009

We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about…

Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices

- Computer Science · IEEE Transactions on Information Theory
- 2010

This paper provides sharper necessary conditions for exact support recovery using general (including non-Gaussian) dense measurement matrices, and proves necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices.

Compressed sensing

- Mathematics · IEEE Transactions on Information Theory
- 2006

It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
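The linear program mentioned here, Basis Pursuit, minimizes the ℓ1 norm subject to the measurement constraints. A small sketch using `scipy.optimize.linprog` on synthetic data (the dimensions and sparsity level are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).
    """
    n_meas, dim = A.shape
    c = np.ones(2 * dim)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * dim))
    return res.x[:dim] - res.x[dim:]

rng = np.random.default_rng(0)
m, n, k = 50, 30, 3                  # signal dim, measurements, sparsity
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(n, m)) / np.sqrt(n)   # random Gaussian measurements
x_hat = basis_pursuit(A, A @ x0)
print(round(float(np.linalg.norm(x_hat - x0)), 6))
```

With n = 30 measurements of a 3-sparse signal in 50 dimensions, recovery is well inside the regime where ℓ1 minimization succeeds, so `x_hat` matches `x0` to solver precision.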

Stable signal recovery from incomplete and inaccurate measurements

- Computer Science
- 2005

It is shown that it is possible to recover x0 accurately based on the data y from incomplete and contaminated observations.

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

- Computer Science · IEEE Transactions on Information Theory
- 2006

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.

A Simple Proof of the Restricted Isometry Property for Random Matrices

- Mathematics
- 2008

We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlies Compressed Sensing. Our approach has two main…
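The near-isometry that the Restricted Isometry Property formalizes can be checked empirically: a Gaussian matrix with entries of variance 1/n approximately preserves the squared norm of sparse vectors. A small Monte Carlo sketch (illustrative only, not the proof technique of the paper; all parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 80, 5                 # ambient dim, measurements, sparsity
A = rng.normal(size=(n, m)) / np.sqrt(n)   # variance-1/n Gaussian entries

ratios = []
for _ in range(1000):
    x = np.zeros(m)
    x[rng.choice(m, k, replace=False)] = rng.normal(size=k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirical restricted isometry constant over these sampled k-sparse vectors:
delta = max(1 - min(ratios), max(ratios) - 1)
print(round(delta, 2))
```

For a true RIP certificate one would have to bound the ratio over *all* k-sparse vectors, which is exactly what the concentration argument in the paper accomplishes; the sample maximum here only gives a lower bound on the constant.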

Gradient-based learning applied to document recognition

- Computer Science · Proc. IEEE
- 1998

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.