Deep Learning in the Wild

@inproceedings{Stadelmann2018DeepLI,
  title={Deep Learning in the Wild},
  author={Thilo Stadelmann and Mohammadreza Amirian and Ismail Arabaci and Marek Arnold and Gilbert François Duivesteijn and Ismail Elezi and Melanie Geiger and Stefan L{\"o}rwald and Benjamin Bruno Meier and Katharina Rombach and Lukas Tuggener},
  booktitle={IAPR International Workshop on Artificial Neural Networks in Pattern Recognition},
  year={2018}
}
Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in real-world tasks, based on case studies from research…

Design Patterns for Resource-Constrained Automated Deep-Learning Methods

This work establishes that very wide fully connected layers learn meaningful features faster, and that in severely data- and compute-constrained settings, hyperparameter-tuned traditional machine-learning methods outperform deep-learning systems.

Combining reinforcement learning with supervised deep learning for neural active scene understanding

A supervised multi-task approach to answering questions about different aspects of a scene, such as the relationships between objects, their quantity, or their positions relative to the camera, using reinforcement learning (RL) and convolutional neural networks.

Exploiting Contextual Information with Deep Neural Networks

To the best of our knowledge, this thesis is the first to integrate graph-theoretical modules that are carefully crafted for the problem of similarity learning and designed to consider contextual information, not only outperforming the other models but also gaining a speed improvement while using fewer parameters.

A Survey of Un-, Weakly-, and Semi-Supervised Learning Methods for Noisy, Missing and Partial Labels in Industrial Vision Applications

This work systematically presents un-, weakly-, and semi-supervised approaches, from 'A' like anomaly detection to 'Z' like zero-shot classification, for resolving missing labels, noisy labels, and partially labeled data in industrial vision applications.

How (Not) to Measure Bias in Face Recognition Networks

This paper investigates a methodology for quantifying the amount of bias in a trained convolutional neural network model for face recognition (FR) that is not only intuitively appealing but has also already been used in the literature to argue for certain debiasing methods.

Automated Machine Learning in Practice: State of the Art and Recent Results

An overview of the state of the art in AutoML is given with a focus on practical applicability in a business context, and recent benchmark results of the most important AutoML algorithms are provided.

Proceedings of the 4th International Workshop on Reading Music Systems

A novel approach to music composer identification using an end-to-end deep neural network model with images of sheet music as inputs is presented; the results show that composer identification in sheet music images with deep neural models is promising.

Two to trust : AutoML for safe modelling and interpretable deep learning for robustness

Two partners for the trustworthiness tango are presented: automated machine learning (AutoML), a powerful tool for optimizing deep neural network architectures and fine-tuning hyperparameters, which promises to build models in a safer and more comprehensive way; and interpretability of neural network outputs, which addresses the vital question of the reasoning behind model predictions and provides insights to improve robustness against adversarial attacks.

Distance in Latent Space as Novelty Measure

This work proposes to intelligently select samples when constructing data sets, using a self-supervised method to construct the latent space, in order to make the best use of the available labeling budget.
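
The selection rule is simple enough to sketch: embed everything with a self-supervised encoder, score each unlabeled sample by its distance to the nearest already-labeled sample in latent space, and spend the labeling budget on the most novel ones. A minimal sketch with NumPy and scikit-learn; the random embeddings stand in for the output of whatever self-supervised encoder is used (an assumption, not the paper's model):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_for_labeling(z_labeled, z_unlabeled, budget):
    """Pick the `budget` unlabeled samples farthest (in latent space)
    from any labeled sample -- a distance-based novelty measure."""
    nn = NearestNeighbors(n_neighbors=1).fit(z_labeled)
    dist, _ = nn.kneighbors(z_unlabeled)        # distance to nearest labeled point
    return np.argsort(dist.ravel())[-budget:]   # indices of the most novel samples

# stand-ins for encoder outputs: z = encoder(x) from a self-supervised model
rng = np.random.default_rng(0)
z_labeled = rng.normal(size=(100, 64))
z_unlabeled = rng.normal(size=(1000, 64))
print(select_for_labeling(z_labeled, z_unlabeled, budget=10))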

Is it enough to optimize CNN architectures on ImageNet?

This work investigates and improves ImageNet as a basis for deriving generally effective convolutional neural network architectures that perform well on a diverse set of datasets and application domains, and shows how to significantly increase the correlations between ImageNet performance and performance elsewhere by utilizing ImageNet subsets restricted to fewer classes.
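
Concretely, the correlation being improved can be computed as a rank correlation between how architectures score on an ImageNet subset and how they score on the target dataset. A minimal sketch with made-up accuracy numbers (all figures are illustrative only):

from scipy.stats import spearmanr

# Hypothetical top-1 accuracies of five architectures on an ImageNet subset
# and on some target domain (numbers invented for illustration).
acc_imagenet_subset = [71.2, 74.8, 76.3, 77.1, 78.9]
acc_target_domain = [63.0, 66.1, 65.4, 68.2, 69.5]

rho, p = spearmanr(acc_imagenet_subset, acc_target_domain)
print(f"rank correlation of architecture rankings: {rho:.2f} (p={p:.3f})")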

References

Showing 1-10 of 72 references.

Transductive Label Augmentation for Improved Deep Network Learning

This paper starts from a small, curated labeled dataset and lets the labels propagate through a larger set of unlabeled data using graph-transduction techniques, and shows that known game-theoretic transductive processes can create larger, sufficiently accurate labeled datasets whose use results in better-trained neural networks.
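
The paper's transduction is game-theoretic; as a rough stand-in for the same label-propagation pattern, scikit-learn's LabelSpreading shows the workflow: start from a few curated labels (unlabeled points marked -1) and let them spread over a similarity graph of the data. A sketch of the pattern, not the paper's method:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
y_partial = np.full_like(y, -1)    # -1 marks unlabeled samples
labeled = np.random.RandomState(0).choice(len(y), size=50, replace=False)
y_partial[labeled] = y[labeled]    # keep only a small curated labeled set

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
y_augmented = model.transduction_  # propagated labels for all samples
print((y_augmented == y).mean())   # fraction propagated correctly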

Beyond ImageNet: Deep Learning in Industrial Practice

This chapter focuses on convolutional neural networks, which since the seminal work of Krizhevsky et al. have revolutionized image classification, begun to surpass human performance on some benchmark data sets, and can be successfully applied to other areas and problems with some local structure in the data.

Opening the Black Box of Deep Neural Networks via Information

This work demonstrates the effectiveness of the Information-Plane visualization of DNNs, shows that training time is dramatically reduced when adding more hidden layers, and argues that the main advantage of the hidden layers is computational.

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
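
The module at the heart of the architecture runs several convolutional branches of different receptive-field sizes in parallel, uses 1x1 convolutions to keep channel counts manageable, and concatenates the branch outputs along the channel axis. A minimal PyTorch sketch (channel counts are illustrative, not the paper's exact configuration):

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 / pooling branches, concatenated on channels."""
    def __init__(self, c_in):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, 64, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, 96, 1),    # 1x1 reduction
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, 16, 1),    # 1x1 reduction
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, 32, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192)(x).shape)   # torch.Size([1, 256, 28, 28])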

How transferable are features in deep neural networks?

This paper quantifies the generality versus specificity of neurons in each layer of a deep convolutional neural network and reports a few surprising results, including that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
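
The transfer recipe the paper studies amounts to: copy the first n layers from a network pre-trained on a base task, freeze or fine-tune them, and retrain the rest on the target task. A minimal PyTorch sketch with a torchvision backbone (the 10-class head is an assumption for illustration):

import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # features pre-trained on ImageNet

for p in model.parameters():                      # freeze all transferred layers
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)    # new head, trained from scratch

# Fine-tuning variant: unfreeze the transferred layers after a warm-up phase,
# which the paper finds recovers (and can exceed) baseline generalization.
# for p in model.parameters(): p.requires_grad = True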

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
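
The parameter arithmetic behind the very small filters is worth spelling out: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 layer, and three stacked 3x3 convolutions match a 7x7 layer, yet with fewer parameters and more nonlinearities in between. For C input and output channels:

C = 64                                 # channel count (illustrative)
one_5x5 = 5 * 5 * C * C                # single 5x5 conv layer
two_3x3 = 2 * (3 * 3 * C * C)          # two stacked 3x3 convs, same receptive field
one_7x7 = 7 * 7 * C * C
three_3x3 = 3 * (3 * 3 * C * C)        # three stacked 3x3 convs, same receptive field
print(two_3x3 / one_5x5)               # 0.72  -> 28% fewer parameters
print(three_3x3 / one_7x7)             # ~0.55 -> 45% fewer parameters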

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
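
The framework's building block learns a residual function F(x) and outputs F(x) + x, so the identity is carried by a skip connection and only the residual has to be fitted. A minimal PyTorch sketch of the basic block (no downsampling variant):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)     # skip connection eases optimization

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)      # shape preserved: [1, 64, 32, 32]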

Playing Atari with Deep Reinforcement Learning

This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
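
At its core the method regresses the network's Q-value for the taken action toward the bootstrapped target r + gamma * max_a' Q(s', a'). A stripped-down PyTorch sketch of one update step (the paper's replay memory and frame-stacked convolutional input are omitted; network shapes are illustrative):

import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy state/action sizes
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_update(s, a, r, s_next, done):
    """One TD update: pull Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# dummy transition batch
s, s_next = torch.randn(8, 4), torch.randn(8, 4)
a = torch.randint(0, 2, (8,))
r, done = torch.randn(8), torch.zeros(8)
print(dqn_update(s, a, r, s_next, done))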

Improving the Robustness of Deep Neural Networks via Stability Training

This paper presents a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping.
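
The objective is straightforward: the usual task loss on the clean input plus a stability term penalizing divergence between the network's outputs on the clean input and on a distorted copy. A minimal PyTorch sketch (Gaussian noise stands in for the paper's compression/rescaling/cropping distortions; alpha and sigma are hyperparameters):

import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, alpha=0.01, sigma=0.04):
    """Task loss on clean x plus a penalty for output drift under distortion."""
    x_distorted = x + sigma * torch.randn_like(x)  # stand-in distortion
    logits = model(x)
    logits_distorted = model(x_distorted)
    task = F.cross_entropy(logits, y)
    stability = F.kl_div(F.log_softmax(logits_distorted, dim=1),
                         F.softmax(logits, dim=1), reduction="batchmean")
    return task + alpha * stability

# usage with any classifier `model` and a batch (x, y):
# loss = stability_training_loss(model, x, y); loss.backward()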
...