Targeted Deep Learning: Framework, Methods, and Applications
@article{Huang2021TargetedDL,
  title   = {Targeted Deep Learning: Framework, Methods, and Applications},
  author  = {Shih-Ting Huang and Johannes Lederer},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2105.14052}
}
Deep learning systems are typically designed to perform well on a wide range of test inputs. For example, deep learning systems in autonomous cars are supposed to handle traffic situations for which they were not specifically trained. In general, the ability to cope with a broad spectrum of unseen test inputs is called generalization. Generalization is certainly important in applications where the possible test inputs are known but plentiful, or simply unknown, but there are also cases where the…
References
Label-Free Supervision of Neural Networks with Physics and Domain Knowledge
- Computer Science, AAAI
- 2017
This work introduces a new approach to supervising neural networks: instead of direct examples of input-output pairs, it specifies constraints, derived from prior domain knowledge, that should hold over the output space.
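A minimal sketch of this idea in the spirit of the paper's free-fall example, assuming PyTorch; the toy network, data, and sizes are all illustrative:

```python
import torch
import torch.nn as nn

# Toy model: maps a flattened video frame to a scalar height estimate.
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

frames = torch.randn(16, 5, 64)   # 16 synthetic clips of 5 frames each
g, dt = 9.81, 0.1                 # known physics: gravity, frame interval

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    heights = net(frames).squeeze(-1)   # (16, 5) predicted heights
    # Constraint over the output space: in free fall, the discrete second
    # difference of height equals -g * dt^2; no labeled heights are used.
    second_diff = heights[:, 2:] - 2 * heights[:, 1:-1] + heights[:, :-2]
    loss = ((second_diff + g * dt * dt) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```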
A survey on Image Data Augmentation for Deep Learning
- Computer Science, Journal of Big Data
- 2019
This survey presents existing methods for data augmentation, promising developments, and meta-level decisions for implementing data augmentation, a data-space solution to the problem of limited data.
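For concreteness, one common data-space pipeline of the kind such surveys cover, sketched with torchvision; the specific transforms and parameters are illustrative:

```python
from torchvision import transforms

# A typical image augmentation pipeline; parameters are illustrative.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop and rescale
    transforms.RandomHorizontalFlip(p=0.5),                # mirror half the images
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric jitter
    transforms.ToTensor(),                                 # PIL image -> tensor
])

# Usage: pass `transform=augment` to an image dataset, e.g.
# torchvision.datasets.ImageFolder("train/", transform=augment)
```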
On the importance of initialization and momentum in deep learning
- Computer Science, ICML
- 2013
It is shown that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs to levels of performance that were previously achievable only with Hessian-Free optimization.
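A minimal sketch of pairing SGD momentum with a slowly increasing schedule, assuming PyTorch; the ramp below is illustrative rather than the paper's exact formula:

```python
import torch

model = torch.nn.Linear(10, 1)   # stand-in for a DNN or RNN
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(50):
    # Slowly raise the momentum coefficient from 0.5 toward a cap of 0.99.
    mu = min(0.99, 1.0 - 0.5 / (epoch + 1))
    for group in opt.param_groups:
        group["momentum"] = mu
    # ... run the usual minibatch training steps for this epoch with `opt` ...
```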
Personalization of Deep Learning
- Computer Science, Data Science – Analytics and Applications
- 2021
It is shown that both "curriculum learning" and "personalized" data augmentation lead to improved performance on data of an individual, although this comes at the cost of reduced performance on a more general, broader dataset.
Activation Functions in Artificial Neural Networks: A Systematic Overview
- Computer Science, ArXiv
- 2021
This paper provides an analytic yet up-to-date overview of popular activation functions and their properties, which makes it a timely resource for anyone who studies or applies neural networks.
Deep Multimodal Learning: A Survey on Recent Advances and Trends
- Computer Science, IEEE Signal Processing Magazine
- 2017
This work first classifies deep multimodal learning architectures and then discusses methods to fuse learned multimodal representations in deep-learning architectures.
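As a toy illustration of one such fusion scheme, a concatenate-then-project module in PyTorch; the class name and dimensions are made up:

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Fuse two modality embeddings by concatenation plus a linear head."""
    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.head = nn.Linear(dim_a + dim_b, dim_out)

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([emb_a, emb_b], dim=-1))

fuse = ConcatFusion(128, 64, 10)                        # e.g. image + audio -> 10 classes
logits = fuse(torch.randn(4, 128), torch.randn(4, 64))  # shape (4, 10)
```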
Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning
- Computer Science, ICLR
- 2017
This paper introduces a problem formulation in which two agents learn multiple skills by sharing information, and it uses the skills learned by both agents to train invariant feature spaces that can transfer other skills from one agent to another.
Deep Learning for Classical Japanese Literature
- Computer Science, ArXiv
- 2018
This work introduces Kuzushiji-MNIST, a dataset which focuses on Kuzushiji (cursive Japanese), as well as two larger, more challenging datasets, Kuzushiji-49 and Kuzushiji-Kanji, which are intended to engage the machine learning community with the world of classical Japanese literature.
A Data Augmentation Scheme for Geometric Deep Learning in Personalized Brain–Computer Interfaces
- Computer Science, IEEE Access
- 2020
A novel data augmentation approach is proposed that combines the multiplex-network modelling of multichannel signals with a graph variant of the classical Empirical Mode Decomposition (EMD) and proves to be a strong asset when combined with Graph Convolutional Neural Networks (GCNNs).
A Comprehensive Survey on Transfer Learning
- Computer Science, Proceedings of the IEEE
- 2021
This survey attempts to connect and systematize existing transfer learning studies and to summarize and interpret the mechanisms and strategies of transfer learning in a comprehensive way, which may help readers better understand the current research status and ideas.
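A minimal sketch of the feature-transfer setting such surveys describe, assuming torchvision's pretrained models; the target task and class count are hypothetical:

```python
import torch.nn as nn
from torchvision import models

# Reuse ImageNet features; retrain only a new classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                           # freeze pretrained layers

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # hypothetical 10-class task
# Only the new head's parameters need an optimizer, e.g.:
# torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```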