Targeted Deep Learning: Framework, Methods, and Applications

Shih-Ting Huang and Johannes Lederer
Deep learning systems are typically designed to perform well on a wide range of test inputs. For example, deep learning systems in autonomous cars are supposed to handle traffic situations for which they were not specifically trained. In general, the ability to cope with a broad spectrum of unseen test inputs is called generalization. Generalization is important in applications where the possible test inputs are known but plentiful, or simply unknown, but there are also cases where the… 


Label-Free Supervision of Neural Networks with Physics and Domain Knowledge

This work introduces a new approach to supervising neural networks by specifying constraints, derived from prior domain knowledge, that should hold over the output space, rather than providing direct examples of input-output pairs.

A survey on Image Data Augmentation for Deep Learning

This survey presents existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation, a data-space solution to the problem of limited data.
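As a minimal illustration of such a data-space solution (a sketch of common transforms, not code from the survey itself), image augmentation can combine random flips, small translations, and additive noise:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random combination of simple data-space augmentations."""
    out = image.copy()
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1]
    shift = int(rng.integers(-2, 3))  # small horizontal translation
    out = np.roll(out, shift, axis=1)
    out = out + rng.normal(0.0, 0.01, size=out.shape)  # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)    # keep pixel values in [0, 1]

rng = np.random.default_rng(0)
img = rng.random((28, 28))           # stand-in for a grayscale training image
aug = augment(img, rng)              # augmented copy, same shape as the input
```

Each call produces a slightly different variant of the same image, effectively enlarging the training set without collecting new data.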

On the importance of initialization and momentum in deep learning

It is shown that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs to levels of performance that were previously achievable only with Hessian-Free optimization.
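A slowly increasing momentum schedule can be sketched as follows (an illustrative ramp on a toy quadratic objective, not the exact schedule from the paper):

```python
import numpy as np

def sgd_momentum(grad, w0, steps=500, lr=0.01, mu_max=0.9):
    """Gradient descent with a momentum coefficient that ramps up slowly."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        # momentum grows from near 0 toward mu_max as training progresses
        mu = min(mu_max, 1.0 - 3.0 / (t + 5))
        v = mu * v - lr * grad(w)   # velocity update
        w = w + v                   # parameter update
    return w

# Minimize the quadratic f(w) = ||w||^2 / 2, whose gradient is w itself.
w_star = sgd_momentum(lambda w: w, w0=[5.0, -3.0])
```

Starting with low momentum avoids amplifying the large, noisy early gradients; the high final momentum then speeds up progress along shallow directions of the loss surface.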

Personalization of Deep Learning

It is shown that both "curriculum learning" and "personalized" data augmentation lead to improved performance on an individual's data, although this comes at the cost of reduced performance on a broader, more general dataset.

Activation Functions in Artificial Neural Networks: A Systematic Overview

This paper provides an analytic yet up-to-date overview of popular activation functions and their properties, which makes it a timely resource for anyone who studies or applies neural networks.

Deep Multimodal Learning: A Survey on Recent Advances and Trends

This work first classifies deep multimodal learning architectures and then discusses methods to fuse learned multimodal representations in deep-learning architectures.

Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning

This paper introduces a problem formulation where two agents are tasked with learning multiple skills by sharing information and uses the skills that were learned by both agents to train invariant feature spaces that can be used to transfer other skills from one agent to another.

Deep Learning for Classical Japanese Literature

This work introduces Kuzushiji-MNIST, a dataset which focuses on Kuzushiji (cursive Japanese), as well as two larger, more challenging datasets, Kuzushiji-49 and Kuzushiji-Kanji, which are intended to engage the machine learning community with the world of classical Japanese literature.

A Data Augmentation Scheme for Geometric Deep Learning in Personalized Brain–Computer Interfaces

A novel data augmentation approach combines the multiplex-network modelling of multichannel signals with a graph variant of the classical Empirical Mode Decomposition (EMD), and proves to be a strong asset when combined with Graph Convolutional Neural Networks (GCNNs).

A Comprehensive Survey on Transfer Learning

This survey attempts to connect and systematize existing transfer learning research, and to summarize and interpret the mechanisms and strategies of transfer learning in a comprehensive way, helping readers better understand the current state of research and its ideas.