Corpus ID: 197677687

Robust Deep Autoencoders with ℓ1 Regularization

@inproceedings{Zhou2017RD,
  title={Robust Deep Autoencoders with $\ell_1$ Regularization},
  author={Chong Zhou},
  year={2017}
}
Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoder's ability to discover high-quality, non-linear… 
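The abstract does not spell out the training procedure, but one plausible reading of an ℓ1-regularized robust autoencoder is an alternating scheme that splits the input X into a part L the autoencoder can explain and a sparse outlier part S penalized by λ‖S‖₁. The sketch below assumes this split; `train_ae` and `reconstruct` are hypothetical placeholders for any autoencoder implementation, not the paper's actual code.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1: element-wise shrinkage toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def robust_autoencoder(X, train_ae, reconstruct, lam=0.1, n_outer=10):
    """Alternate between fitting an autoencoder to the 'clean' part L and
    soft-thresholding the residual into a sparse outlier part S, so that
    X ~ L + S with an l1 penalty on S. train_ae and reconstruct are
    placeholders wrapping any autoencoder implementation."""
    S = np.zeros_like(X)
    for _ in range(n_outer):
        L = X - S                          # current estimate of the clean data
        train_ae(L)                        # refit the autoencoder on L
        residual = X - reconstruct(L)      # what the autoencoder cannot explain
        S = soft_threshold(residual, lam)  # absorb it into sparse outliers
    return X - S, S
```

Because the shrinkage zeroes out small residuals, S ends up containing only the large, isolated errors, which is what lets this kind of model tolerate outliers without clean training data.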


References

Showing 1-10 of 28 references

Research on denoising sparse autoencoder

The results suggest that the different autoencoders discussed in this paper are closely related, that the model the authors study extracts interesting features which reconstruct the original data well, and that the proposed autoencoder is a promising building block for deep models.

Extracting and composing robust features with denoising autoencoders

This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
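As a concrete illustration of that denoising principle, the sketch below corrupts inputs with masking noise and measures reconstruction against the clean input; `encode` and `decode` are hypothetical placeholders for any autoencoder, and the masking fraction `p` is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p=0.3):
    """Masking noise: zero out a random fraction p of the input entries."""
    return x * (rng.random(x.shape) >= p)

def denoising_loss(x, encode, decode):
    """Training signal of a denoising autoencoder: reconstruct the *clean* x
    from its corrupted version, so the code must capture robust structure."""
    x_tilde = corrupt(x)
    return np.mean((decode(encode(x_tilde)) - x) ** 2)
```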

Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher-level representations.
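A minimal sketch of the greedy layer-wise procedure this line of work uses: each denoising autoencoder is fit on the codes produced by the layer below, and the stack of trained encoders then defines the deep representation. `layer_trainers` is a hypothetical interface standing in for any DAE trainer, not the paper's API.

```python
def stack_denoising_autoencoders(X, layer_trainers):
    """Greedy layer-wise pretraining: each denoising autoencoder is fit on
    the (uncorrupted) codes from the layer below, and its encoder is kept.
    layer_trainers is a list of callables, each returning a trained encode
    function."""
    encoders, codes = [], X
    for train in layer_trainers:
        encode = train(codes)   # fit one denoising autoencoder at this level
        codes = encode(codes)   # propagate clean codes up to the next layer
        encoders.append(encode)
    return encoders, codes
```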

Robust feature learning by improved auto-encoder from non-Gaussian noised images

Experimental results show that, compared with traditional autoencoders, the proposed method improves classification accuracy and reduces the reconstruction error, demonstrating that it is capable of learning robust features on noisy data.

Robust feature learning by stacked autoencoder with maximum correntropy criterion

A robust stacked autoencoder based on the maximum correntropy criterion (MCC) is proposed to deal with data containing non-Gaussian noise and outliers, and experimental results show that the resulting R-SAE is capable of learning robust features on noisy data.
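For reference, correntropy between two signals is the mean of a Gaussian kernel applied to their difference, and maximizing it amounts to minimizing the bounded loss sketched below; the kernel width `sigma` is an assumed hyperparameter.

```python
import numpy as np

def correntropy_loss(x, x_hat, sigma=1.0):
    """Negative empirical correntropy between target x and reconstruction
    x_hat. The Gaussian kernel saturates for large errors, so outliers
    contribute a bounded penalty, unlike the squared error."""
    e = x - x_hat
    return -np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2)))
```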

Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
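The contractive penalty in question is the squared Frobenius norm of the encoder's Jacobian. For a sigmoid encoder h = σ(Wx + b) it has a cheap closed form, shown in the minimal sketch below (an illustration, not the paper's code).

```python
import numpy as np

def contractive_penalty(x, W, b):
    """||J||_F^2 for the sigmoid encoder h = sigmoid(W @ x + b), using the
    closed form sum_j (h_j * (1 - h_j))^2 * sum_i W[j, i]^2."""
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))
```

Adding this term to the reconstruction loss discourages the code from varying along directions the data does not, which is the invariance the summary describes.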

Extracting deep bottleneck features using stacked auto-encoders

It is found that increasing the number of auto-encoders in the network produces more useful features, but requires pre-training, especially when little training data is available.

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
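A standard worked example of ADMM is the lasso, where the objective splits into a least-squares block and an ℓ1 block joined by a consensus constraint; the sketch below follows the usual scaled-dual iteration (the values of `lam`, `rho`, and the iteration count are assumed defaults).

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1, split into a
    least-squares block (x), an l1 block (z), and a scaled dual (u)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)       # prox of the l1 term
        u = u + x - z                              # dual ascent on x - z = 0
    return z
```

Caching the Cholesky factor outside the loop is what makes the method attractive at scale: each iteration then costs only triangular solves and element-wise operations.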

Solving Structured Sparsity Regularization with Proximal Methods

It is shown that perturbing the objective function by a small strictly convex term often substantially reduces the number of required computations without affecting the prediction performance of the obtained solution.
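As an illustration of that idea, the sketch below runs proximal gradient (ISTA) on a lasso objective perturbed by a small strictly convex term (mu/2)‖x‖²; the plain ℓ1 prox stands in for the structured-sparsity penalties the paper actually treats, and `mu` is an assumed perturbation size.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def ista_perturbed(A, b, lam=0.1, mu=1e-3, n_iter=500):
    """Proximal gradient on 0.5*||A x - b||^2 + (mu/2)*||x||^2 + lam*||x||_1.
    The small strictly convex mu-term makes the smooth part strongly convex,
    the kind of perturbation the paper argues saves computation."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + mu * x
        x = soft_threshold(x - step * grad, step * lam)
    return x
```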

Learning representations by back-propagating errors

Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal "hidden" units come to represent important features of the task domain.
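A minimal sketch of that weight-adjustment rule for a two-layer sigmoid network, derived by the chain rule from the squared-error measure (the network shape and learning rate are assumptions for illustration):

```python
import numpy as np

def train_step(x, y, W1, W2, lr=0.1):
    """One backpropagation step for a two-layer sigmoid network, reducing
    the squared difference between the actual and desired output vectors."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))   # forward pass: hidden activations
    y_hat = W2 @ h                        # forward pass: linear output
    d_out = y_hat - y                     # gradient of the error at the output
    d_h = (W2.T @ d_out) * h * (1.0 - h)  # chain rule back through the sigmoid
    W2 -= lr * np.outer(d_out, h)         # adjust each connection's weight
    W1 -= lr * np.outer(d_h, x)
    return 0.5 * np.sum(d_out ** 2)       # current error measure
```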