A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in Caffe

@article{Turchenko2019ADC,
  title={A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in Caffe},
  author={Volodymyr Turchenko and Eric Chalmers and Artur Luczak},
  journal={ArXiv},
  year={2019},
  volume={abs/1701.04949}
}
This paper presents the development of several models of a deep convolutional auto-encoder in the Caffe deep learning framework and their experimental evaluation on the MNIST dataset. Key Result: The best results were provided by a model where the encoder part contains convolutional and pooling layers, followed by an analogous decoder part with deconvolution and unpooling layers, without the use of switch variables in the decoder part. The paper also discusses practical details of the creation of…
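As a rough illustration of that best-performing layout, here is a minimal PyTorch sketch; it is not the paper's Caffe implementation, the layer sizes are assumptions, and the switch-free unpooling is approximated by nearest-neighbor upsampling (no pooling switches are passed from encoder to decoder):

```python
# Hypothetical sketch of a conv+pool encoder and deconv+unpool decoder
# (PyTorch stand-in for the paper's Caffe models; sizes are illustrative).
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),    # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),  # switch-free unpooling
            nn.ConvTranspose2d(16, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.ConvTranspose2d(8, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                                 # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoEncoder()
x = torch.rand(4, 1, 28, 28)                # a batch of MNIST-sized images
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
```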
Soft-Autoencoder and Its Wavelet Shrinkage Interpretation
TLDR
A new type of convolutional autoencoder, termed Soft-Autoencoder (Soft-AE), in which the activations of encoding layers are implemented with adaptable soft-thresholding units while decoding layers are realized with linear units, so that the network can be naturally interpreted as a learned cascaded wavelet shrinkage system.
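A minimal sketch of such a soft-thresholding unit, assuming the standard form soft(x, b) = sign(x) · max(|x| − b, 0) with a per-channel learnable threshold; the class name and initialization below are hypothetical, not taken from the paper:

```python
import torch
import torch.nn as nn

class SoftThreshold(nn.Module):
    """soft(x, b) = sign(x) * max(|x| - b, 0), with b learned per channel."""

    def __init__(self, channels, init=0.1):
        super().__init__()
        # one non-negative threshold per feature map
        self.b = nn.Parameter(torch.full((1, channels, 1, 1), init))

    def forward(self, x):
        return torch.sign(x) * torch.clamp(x.abs() - self.b.abs(), min=0.0)
```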
Unsupervised representation learning based on the deep multi-view ensemble learning
TLDR
This work proposes a novel deep multi-view ensemble model that restricts the number of connections between successive layers while enhancing discriminatory power using a data-driven approach to deal with feature learning problems.
Non-negative Autoencoder with Simplified Random Neural Network
  • Yonghua Yin, E. Gelenbe
  • Computer Science
    2019 International Joint Conference on Neural Networks (IJCNN)
  • 2019
A new shallow multi-layer auto-encoder that combines the spiking Random Neural Network (RNN) with the network architecture typically used in deep learning is proposed, together with a learning algorithm.
Implementation of Convolutional Autoencoder and Learnings on Image Compression
TLDR
By building a convolutional autoencoder and a convolutional neural network, it is shown that the amount of information the image data lose during compression depends on the rate of compression.
Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
  • Wenchi Ma, Miao Yu, Kaidong Li, Guanghui Wang
  • Computer Science
    2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI)
  • 2020
TLDR
The fundamental reason that impedes the scale-up of layer-wise learning is the relatively poor separability of the feature space in shallow layers; a downsampling acceleration approach is proposed to weaken the learning of shallow layers and transfer the learning emphasis to the deep feature space, where separability better matches the supervision constraint.
HRTF Representation with Convolutional Auto-encoder
TLDR
This paper puts forward an HRTF representation model based on the convolutional auto-encoder (CAE), a type of auto-encoder that contains convolutional layers in the encoder part and deconvolution layers in the decoder part, and shows that it provides very good results on dimensionality reduction of HRTFs.
Using convolutional neural network autoencoders to understand unlabeled data
TLDR
This work demonstrates that deep convolutional autoencoders can comfortably perform clustering, dimensionality reduction for visualization, and anomaly detection tasks either directly or through manipulations of the latent space in a limited data setting.
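One common way to realize the anomaly-detection task mentioned above is to score samples by reconstruction error from a trained autoencoder; a minimal sketch, where `model` and the per-sample MSE scoring are assumptions rather than the paper's exact procedure:

```python
import torch

def anomaly_scores(model, x):
    """Per-sample reconstruction error; larger values suggest anomalies."""
    model.eval()
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).flatten(start_dim=1).mean(dim=1)
```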
Network Traffic Anomaly Detection Method Based on CAE and LSTM
TLDR
The use of Multi-CAE greatly improves the feature extraction capability; combined with a long short-term memory network to extract temporal features, the features extracted in this paper are more comprehensive and incur fewer losses than those of the models used in other research.
Self-supervised Vector-Quantization in Visual SLAM using Deep Convolutional Autoencoders
TLDR
AE-FABMAP, a new self-supervised bag-of-words-based SLAM method, is integrated into the state-of-the-art long-range appearance-based visual bag-of-words SLAM, FABMAP2, and also into ORB-SLAM; experiments show that autoencoders are far more efficient than semi-supervised methods in terms of speed and memory consumption.
…

References

Showing 1-10 of 107 references
Creation of a deep convolutional auto-encoder in Caffe
  • V. Turchenko, A. Luczak
  • Computer Science
    2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)
  • 2017
TLDR
The development of a deep (stacked) convolutional auto-encoder in the Caffe deep learning framework is presented, and dimensionality-reduction accuracy comparable to that of a classic auto-encoder is shown on the MNIST dataset.
Striving for Simplicity: The All Convolutional Net
TLDR
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
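Sketched in PyTorch, the replacement amounts to swapping a pooling stage for a stride-2 convolution of matching output geometry; the channel counts here are illustrative assumptions:

```python
import torch.nn as nn

# a conventional convolution + 2x2 max-pooling stage...
with_pooling = nn.Sequential(
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
# ...replaced by a convolution with stride 2 (same spatial downsampling)
all_convolutional = nn.Sequential(
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
```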
Stacked What-Where Auto-encoders
We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning.
Caffe: Convolutional Architecture for Fast Feature Embedding
TLDR
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
TLDR
It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
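For a single sigmoid encoding layer h = sigmoid(Wx + b), the contractive penalty (the squared Frobenius norm of the encoder's Jacobian) has a closed form; a small sketch under that assumption, with the function name and shapes chosen for illustration:

```python
import torch

def contractive_penalty(W, h):
    """||J_f(x)||_F^2 for h = sigmoid(W @ x + b).

    W: (hidden, input) weight matrix; h: (batch, hidden) activations.
    """
    dh = (h * (1.0 - h)) ** 2           # squared derivative of the sigmoid
    w2 = (W ** 2).sum(dim=1)            # squared L2 norm of each row of W
    return (dh * w2).sum(dim=1).mean()  # sum over units, average over batch
```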
Extracting and composing robust features with denoising autoencoders
TLDR
This work introduces and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern.
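A minimal sketch of that training principle, assuming masking corruption and a mean-squared reconstruction loss; `model` stands for any autoencoder and the corruption rate is an arbitrary example value:

```python
import torch
import torch.nn.functional as F

def denoising_loss(model, x, corruption=0.3):
    mask = (torch.rand_like(x) > corruption).float()  # zero out a random fraction
    x_corrupted = x * mask                            # partially destroyed input
    return F.mse_loss(model(x_corrupted), x)          # reconstruct the clean input
```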
Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction
TLDR
A novel convolutional auto-encoder (CAE) for unsupervised feature learning is proposed; initializing a CNN with the filters of a trained CAE stack yields superior performance on a digit and an object recognition benchmark.
Adaptive deconvolutional networks for mid and high level feature learning
TLDR
A hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling, relying on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches.
Adversarial Autoencoders
TLDR
This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization, and reports experiments on the MNIST, Street View House Numbers and Toronto Face datasets.
Deep Learning with Hierarchical Convolutional Factor Analysis
Unsupervised multilayered (“deep”) models are considered for imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores.
…