Deep Learning

  • Yann LeCun, Yoshua Bengio, Geoffrey Hinton
  • Computer Science
    Nature
  • 2015
Machine-learning technology powers many aspects of modern society, from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification.

On the Behavior of Convolutional Nets for Feature Extraction

This paper statistically measures the discriminative power of every single feature found within a deep CNN when used to characterize every class of 11 datasets, and finds that all CNN features can be used for knowledge representation purposes, either by their presence or by their absence.

Neural Networks for Survey Researchers

This article describes what neural networks are and how they learn, considers their strengths and weaknesses as a machine learning approach, and illustrates how they perform on a classification task predicting survey response from respondents’ (and nonrespondents’) prior known demographics.

Performance Analysis of Deep Neural Networks Using Computer Vision

The proposed work outperforms previous techniques in predicting the dependent variables, and analyzes learning rate, image count, image mean, and the behavior of loss and accuracy with respect to epoch during training and validation.

A Survey on Computer Vision Architectures for Large Scale Image Classification using Deep Learning

  • D. Himabindu, S. Kumar
  • Computer Science
    International Journal of Advanced Computer Science and Applications
  • 2021
A survey of state-of-the-art models in image classification, evolving from the birth of convolutions to present ongoing research, illustrated with architecture schemas, implementation details, parameter tuning, and their performance.

Deep Learning in Kernel Machines

Three deep kernel learning models are developed that analyze the behavior of the arc-cosine kernel, model a scalable deep kernel machine by incorporating the arc-cosine kernel into core vector machines, and build a scalable deep learning architecture with unsupervised feature extraction, with promising results.

Deep Learning: A Primer for Radiologists.

  • G. Chartrand, P. Cheng, A. Tang
  • Computer Science
    Radiographics: A Review Publication of the Radiological Society of North America, Inc
  • 2017
The key concepts of deep learning for clinical radiologists are reviewed, technical requirements are discussed, emerging applications in clinical radiology are described, and limitations and future directions in this field are outlined.

Taming Deep Belief Networks

A generative ANN model called Restricted Boltzmann Machine (RBM) and an associated deep learning stack of RBMs called Deep Belief Networks (DBN) are explored and an initial evaluation of the suitability of DBNs is performed on two types of problems, including the general problem of supervised learning with real-valued target variables.

Deep learning: a branch of machine learning

A broad literature survey is completed that reviews the use of deep learning across different fields and shows how, and in which real applications, deep learning algorithms have been used.

Towards the effectiveness of Deep Convolutional Neural Network based Fast Random Forest Classifier

The excellent performance obtained by the proposed DCNN-based feature selection with the FRF classifier on high-dimensional datasets makes it a fast and accurate classifier in comparison to the state of the art.

A Review of Deep Learning Algorithms and Their Applications in Healthcare

A review and checkpoint that systematizes popular deep learning algorithms, encourages further innovation in their applications, and introduces detailed information on how to apply several deep learning algorithms in healthcare, such as in relation to the COVID-19 pandemic.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.

Machine learning - a probabilistic perspective

  • K. Murphy
  • Computer Science
    Adaptive computation and machine learning series
  • 2012
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

Measuring Invariances in Deep Networks

A number of empirical tests are proposed that directly measure the degree to which these learned features are invariant to different input transformations, finding that stacked autoencoders learn modestly more invariant features with depth when trained on natural images, and that convolutional deep belief networks learn substantially more invariant features in each layer.

Sparse Feature Learning for Deep Belief Networks

This work proposes a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation, and describes a novel and efficient algorithm to learn sparse representations.

Gradient-based learning applied to document recognition

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task; convolutional neural networks are shown to outperform all other techniques.

Zero-Shot Learning Through Cross-Modal Transfer

This work introduces a model that can recognize objects in images even if no training data is available for the object class, and uses novelty detection methods to differentiate unseen classes from seen classes.

Building high-level features using large scale unsupervised learning

  • Quoc V. Le, M. Ranzato, A. Ng
  • Computer Science
    2013 IEEE International Conference on Acoustics, Speech and Signal Processing
  • 2013
Contrary to what appears to be a widely-held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.

Beyond simple features: A large-scale feature search approach to unconstrained face recognition

This work demonstrates a large-scale feature search approach to generating new, more powerful feature representations in which a multitude of complex, nonlinear, multilayer neuromorphic feature representations are randomly generated and screened to find those best suited for the task at hand.

Reading Digits in Natural Images with Unsupervised Feature Learning

A new benchmark dataset for research use is introduced, containing over 600,000 labeled digits cropped from Street View images, and variants of two recently proposed unsupervised feature learning methods are employed and found to be convincingly superior on benchmarks.