Auto-association by multilayer perceptrons and singular value decomposition

@article{Bourlard1988AutoassociationBM,
  title={Auto-association by multilayer perceptrons and singular value decomposition},
  author={Herv{\'e} Bourlard and Yves Kamp},
  journal={Biological Cybernetics},
  year={1988},
  volume={59},
  pages={291--294}
}
The multilayer perceptron, when working in auto-association mode, is sometimes considered as an interesting candidate to perform data compression or dimensionality reduction of the feature space in information processing applications. The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low rank matrix… 
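The paper's core result is that a linear encoder/decoder built from the truncated SVD already achieves the optimal low-rank reconstruction (Eckart–Young), so a nonlinear hidden layer adds nothing for auto-association. A minimal numerical sketch of that equivalence, with illustrative dimensions chosen here (100 samples, 10 features, 3 hidden units):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))   # 100 samples, 10 features
Xc = X - X.mean(axis=0)              # center the data
k = 3                                # number of hidden units / target rank

# Truncated SVD gives the best rank-k approximation (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Equivalent linear auto-associator view: project onto the top-k right
# singular vectors (encoder) and map back (decoder); no nonlinearity needed.
W_enc = Vt[:k, :].T                  # encoder weights, 10 -> 3
W_dec = Vt[:k, :]                    # decoder weights, 3 -> 10
X_hat2 = Xc @ W_enc @ W_dec

assert np.allclose(X_hat, X_hat2)    # both routes give the same reconstruction
```

The weights are not unique: any invertible mixing of the hidden units gives the same reconstruction, which is why gradient-trained auto-associators converge to the PCA subspace rather than to the principal components themselves.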
Edge-backpropagation for noisy logo recognition
Incremental Learning of Auto-Association Multilayer Perceptrons Network
TLDR
A new algorithm to reduce the time of updating the weights of auto-association multilayer perceptrons network by modifying the singular value decomposition which has been used in the batch algorithm to update the weights whenever a new row is added to the input matrix.
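The row-update idea can be sketched with a standard thin-SVD update: when a new row is appended, only a small core matrix needs to be re-decomposed instead of the full data matrix. This is an illustrative implementation under a simplifying assumption (the new row lies in the existing row space, which holds when the data matrix already has full column rank), not the algorithm of the cited paper:

```python
import numpy as np

def append_row_svd(U, s, Vt, b):
    """Update a thin SVD X = U @ diag(s) @ Vt after appending row b.

    Assumes b lies (numerically) in the row space spanned by Vt,
    e.g. when X has full column rank and Vt is square.
    """
    k = s.size
    # Core matrix: old singular values stacked on the new row's coordinates.
    K = np.vstack([np.diag(s), (b @ Vt.T)[None, :]])     # (k+1) x k
    Uk, s_new, Vk_t = np.linalg.svd(K, full_matrices=False)
    # Embed the old left singular vectors and the new row's unit direction.
    U_big = np.block([[U, np.zeros((U.shape[0], 1))],
                      [np.zeros((1, k)), np.ones((1, 1))]])
    return U_big @ Uk, s_new, Vk_t @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
b = rng.standard_normal(5)
U2, s2, Vt2 = append_row_svd(U, s, Vt, b)
assert np.allclose(U2 @ np.diag(s2) @ Vt2, np.vstack([X, b]))
```

The small SVD costs O(k^3) rather than the O(mn^2) of a full recomputation, which is the source of the speed-up such incremental schemes exploit.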
Adaptive Nonlinear Auto-Associative Modeling Through Manifold Learning
TLDR
Experiments on character and digit databases show that the proposed ANAM algorithm, based on the Locally Linear Embedding algorithm, has several advantages.

Analysis and pruning of nonlinear auto-association networks
TLDR
Simulations have shown that the hidden layer neurons of this network operate mainly in their linear region, and by studying the statistical relations governing the operation of such a network, the nearly linear behaviour of the sigmoidal hidden neurons was verified.
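The near-linear behaviour of sigmoidal hidden units is easy to verify numerically: for small pre-activations, tanh(x) = x - x^3/3 + ..., so the unit is almost an identity map. A quick illustration (the interval [-0.2, 0.2] is an arbitrary choice here):

```python
import numpy as np

# For small pre-activations a tanh unit is nearly linear:
# tanh(x) = x - x^3/3 + O(x^5).
x = np.linspace(-0.2, 0.2, 101)
max_dev = np.max(np.abs(np.tanh(x) - x))   # worst-case deviation from identity
assert max_dev < 3e-3                       # tanh(0.2) ~ 0.19738
```

This is consistent with the finding above: if training keeps the hidden pre-activations small, the nonlinear network effectively reduces to the linear SVD solution.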
Principal component analysis of fuzzy data using autoassociative neural networks
TLDR
This paper describes an extension of principal component analysis allowing the extraction of a limited number of relevant features from high-dimensional fuzzy data, and the concept of correlation coefficient is extended to fuzzy numbers, allowing the interpretation of the new features in terms of the original variables.
Interpretation Method of Nonlinear Multilayer Principal Component Analysis by Using Sparsity and Hierarchical Clustering
  • N. Koda, Sumio Watanabe
  • Computer Science
    2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)
  • 2016
TLDR
A new interpretation method of NMPCA is proposed by extracting a few essential structures from many differently trained and locally optimal parameters so that users can understand the extracted results.
Using autoencoder to facilitate information retention for data dimension reduction
TLDR
The simplest autoencoder structure is used as the preprocessing of Support Vector Machine (SVM) to solve the problem of loss of information during the processing of dimensionality reduction.
Learning Sparse Features with an Auto-Associator
TLDR
This chapter presents a new non-linear explicit sparse representation method referred to as Sparse Auto-Associator (SAA), integrating a sparsity objective within the standard auto-associator learning criterion.
Classified image compression using optimally structured auto-association networks
TLDR
An application of a set of auto-association networks with linear output neurons and sigmoidal hidden neurons for classified image compression is carried out, showing excellent performance of the proposed architecture in reproducing high-quality images at a low bit rate.
...

References

SHOWING 1-10 OF 15 REFERENCES
On updating the singular value decomposition
TLDR
A parallel and recursive total least squares algorithm (PRTLS) for solving a time variant TLS problem is proposed, based on the updating technique of the GMRQI-JKL to update the SVD.
Low Rank Matrices with a Given Sign Pattern
TLDR
Based on the Farkas lemma and on some other standard techniques of matrix algebra such as the cyclic Fourier transform, low rank realizations are obtained for sign matrices having certain nice combinatorial structures.
Learning the hidden structure of speech.
  • J. Elman, D. Zipser
  • Computer Science
    The Journal of the Acoustical Society of America
  • 1988
TLDR
The results of these studies demonstrate that backpropagation learning can be used with complex, natural data to identify a feature structure that can serve as the basis for both analysis and nontrivial pattern recognition.
Orthogonal Transforms for Digital Signal Processing
  • K. R. Rao, N. Ahmed
  • Computer Science
    IEEE Transactions on Systems, Man, and Cybernetics
  • 1979
TLDR
The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations
The fundamental principles, basic mechanisms, and formal analyses involved in the development of parallel distributed processing (PDP) systems are presented in individual chapters contributed by
Speaker dependent connected speech recognition via phonetic Markov models
  • H. Bourlard, Y. Kamp, C. Wellekens
  • Linguistics, Computer Science
    ICASSP '85. IEEE International Conference on Acoustics, Speech, and Signal Processing
  • 1985
TLDR
A method for speaker dependent connected speech recognition based on phonemic units is described, in which each phoneme is characterized by a very simple 3-state Hidden Markov Model which is trained on connected speech by a Viterbi algorithm.
Introduction to matrix computations
TLDR
Rounding-Error Analysis of Solution of Triangular Systems and of Gaussian Elimination.
Matrix computations
Least squares, singular values and matrix approximations
...