Deep Learning

@article{Schulz2012DeepL,
  title={Deep Learning},
  author={Hannes Schulz and Sven Behnke},
  journal={KI - K{\"u}nstliche Intelligenz},
  year={2012},
  volume={26},
  pages={357--363}
}
Hierarchical neural networks for object recognition have a long history. In recent years, novel methods for incrementally learning a hierarchy of features from unlabeled inputs were proposed as a good starting point for supervised training. These deep learning methods, together with advances in parallel computing, made it possible to successfully attack problems that were not practical before in terms of depth and input size. In this article, we introduce the reader to the basic concepts of…
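The layer-wise recipe the abstract alludes to (learn features from unlabeled data, then stack them as a starting point for supervised training) can be sketched with tiny tied-weight linear autoencoders. This is an illustrative toy in NumPy, not the paper's method; all sizes, learning rates, and data below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, iters=300, lr=0.01):
    """One-layer linear autoencoder with tied weights, trained by plain
    gradient descent on the reconstruction error ||X W W^T - X||^2."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    for _ in range(iters):
        E = X @ W @ W.T - X                      # reconstruction error
        W -= lr * 2.0 * (X.T @ E @ W + E.T @ X @ W) / n
    return W

# Greedy layer-wise pretraining: each layer is trained on the codes
# produced by the layer below, yielding a feature hierarchy whose
# weights could then initialize a supervised network.
X = rng.normal(size=(64, 16))                    # toy "unlabeled" data
reps, layers = X, []
for hidden in (12, 8):
    W = train_autoencoder(reps, hidden)
    layers.append(W)
    reps = reps @ W                              # codes feed the next layer
print(reps.shape)                                # prints (64, 8)
```

Each autoencoder only sees the representation produced by the previous layer, which is what makes the procedure greedy and incremental.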

Survey on deep learning methods in human action recognition

TLDR
The authors present an analytical framework to classify and evaluate these methods based on several important functional measures, along with a categorisation of the state-of-the-art deep learning approaches to human action recognition.

Comparative analysis of deep learning image detection algorithms

TLDR
In an identical testing environment, YOLO-v3 outperforms SSD and Faster R-CNN, making it the best of the three algorithms.

Text normalization with convolutional neural networks

TLDR
This paper investigates and proposes a novel CNN-based text normalization method and evaluates the performance of CNNs against a variety of long short-term memory (LSTM) and Bi-LSTM architectures on the same dataset.
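The core operation such character-level CNN approaches share, sliding convolution filters over one-hot character embeddings followed by max-over-time pooling, can be sketched in a few lines of NumPy. The vocabulary, filter sizes, and input string below are hypothetical toys, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical character vocabulary and a token to normalize, e.g. "123 km".
vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz0123456789 ")}
text = "123 km"

def one_hot(s, vocab):
    """Encode a string as a (chars, vocab_size) one-hot matrix."""
    X = np.zeros((len(s), len(vocab)))
    for i, ch in enumerate(s):
        X[i, vocab[ch]] = 1.0
    return X

def conv1d(X, W, b):
    """Valid 1D convolution over the character axis, then ReLU."""
    width = W.shape[0]
    steps = X.shape[0] - width + 1
    out = np.empty((steps, W.shape[2]))
    for t in range(steps):
        out[t] = (X[t:t + width, :, None] * W).sum((0, 1)) + b
    return np.maximum(out, 0.0)

X = one_hot(text, vocab)                             # (6, 37)
W = rng.normal(scale=0.1, size=(3, len(vocab), 8))   # width-3 filters, 8 channels
b = np.zeros(8)
H = conv1d(X, W, b)                                  # (4, 8) feature map
pooled = H.max(0)                                    # max-over-time pooling
print(pooled.shape)                                  # prints (8,)
```

The pooled vector would then feed a classifier or decoder that emits the normalized form; in a real system the filters are learned, not random.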

A Novel BGCapsule Network for Text Classification

TLDR
A novel hybrid architecture, BGCapsule, which is a Capsule model preceded by an ensemble of Bidirectional Gated Recurrent Units (BiGRU) for several text classification tasks and achieves better accuracy compared to the existing methods without the help of any external linguistic knowledge.

Incorporating Task-Oriented Representation in Text Classification

TLDR
A task-oriented representation is proposed that captures word-class relevance as task-relevant information in a CNN classification model for text classification. Experimental results on widely used datasets show that the approach outperforms comparison models.

Adaptive transfer learning in deep neural networks: Wind power prediction using knowledge transfer from region to region and between different task domains

TLDR
It is shown, in the case of wind power prediction, that with adaptive transfer learning a deep neural network system can be adaptively modified when training is moved to a different wind farm.

A Novel Real-Time Pedestrian Detection System on Monocular Vision

TLDR
This thesis presents a novel pedestrian detection system for real-time pedestrian detection in monocular vision, cascading ROIs with Uniform LBP and improved HOG; it runs at 31 fps and is evaluated against many methods and algorithms.

TransMF: Transformer-Based Multi-Scale Fusion Model for Crack Detection

TLDR
This paper proposes a novel method called the Transformer-based Multi-scale Fusion Model (TransMF) for crack detection, comprising an Encoder Module, a Decoder Module (DM), and a Fusion Module (FM), which uses a hybrid of convolution blocks and Swin Transformer blocks to model the long-range dependencies between different parts of a crack image.

Detection of COVID-19 using deep learning on x-ray lung images

TLDR
This article proposes a method to detect COVID-19 by analyzing X-ray lung images with a number of pre-trained deep learning models, such as InceptionV3, DenseNet121, ResNet50, and VGG16, and compares the results to determine the model with the best performance and accuracy and the least loss on the dataset.

References

Showing 1-10 of 52 references

Learning Object-Class Segmentation with Convolutional Neural Networks

TLDR
A convolutional network architecture is proposed that includes innovative elements such as multiple output maps, suitable loss functions, supervised pretraining, multiscale inputs, reused outputs, and pairwise class location filters.

Unsupervised feature learning for audio classification using convolutional deep belief networks

In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning…

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

TLDR
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.
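The single-layer pipeline this summary refers to (sample patches, learn a dictionary with k-means, then encode densely with a soft "triangle" activation) is simple enough to sketch end to end. The data, patch size, and number of centroids below are toy placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image data: 100 tiny 8x8 grayscale "images".
images = rng.normal(size=(100, 8, 8))

def extract_patches(imgs, patch=4, n=500):
    """Sample random patch x patch windows and flatten them to vectors."""
    out = np.empty((n, patch * patch))
    for i in range(n):
        img = imgs[rng.integers(len(imgs))]
        r = rng.integers(img.shape[0] - patch + 1)
        c = rng.integers(img.shape[1] - patch + 1)
        out[i] = img[r:r + patch, c:c + patch].ravel()
    return out

def kmeans(X, k=16, iters=10):
    """Plain Lloyd's algorithm; the centroids act as the learned features."""
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return centroids

def encode(X, centroids):
    """Soft 'triangle' activation: max(0, mean distance - distance)."""
    d = np.sqrt(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(1, keepdims=True) - d)

patches = extract_patches(images)
centroids = kmeans(patches)
features = encode(patches, centroids)
print(features.shape)  # prints (500, 16)
```

Dense extraction here means every patch is encoded against every centroid; the paper's finding is that pushing the number of centroids and the extraction density up matters more than network depth.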

Adaptive deconvolutional networks for mid and high level feature learning

TLDR
A hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling, relying on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches.

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

TLDR
The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.

Convolutional Learning of Spatio-temporal Features

TLDR
A model that learns latent representations of image sequences from pairs of successive images is introduced, allowing it to scale to realistic image sizes whilst using a compact parametrization.

Multi-column deep neural networks for image classification

TLDR
On the very competitive MNIST handwriting benchmark, this method is the first to achieve near-human performance and improves the state-of-the-art on a plethora of common image classification benchmarks.

Gradient-based learning of higher-order image features

R. Memisevic. 2011 International Conference on Computer Vision, 2011.
TLDR
This work shows how the problem of learning higher-order features can be cast as that of learning a parametric family of manifolds, which allows a variant of a de-noising autoencoder network to learn higher-order features using simple gradient-based optimization.

Why Does Unsupervised Pre-training Help Deep Learning?

TLDR
The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.

Deep learning via semi-supervised embedding

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a
...