Corpus ID: 204401684

FastEstimator: A Deep Learning Library for Fast Prototyping and Productization

@article{Dong2019FastEstimatorAD,
  title={FastEstimator: A Deep Learning Library for Fast Prototyping and Productization},
  author={Xiaomeng Dong and Junpyo Hong and Hsi-Ming Chang and Michael Potter and Aritra Chowdhury and Purujit Bahl and Vivek Soni and Yun-chan Tsai and Rajesh Tamada and Gaurav Kumar and Caroline Favart and V. R. Saripalli and Gopal Avinash},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.04875}
}
As the complexity of state-of-the-art deep learning models increases by the month, implementation, interpretation, and traceability become ever-more-burdensome challenges for AI practitioners around the world. Several AI frameworks have risen in an effort to stem this tide, but the steady advance of the field has begun to test the bounds of their flexibility, expressiveness, and ease of use. To address these concerns, we introduce a radically flexible high-level open source deep learning… 
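FastEstimator's documented design splits a training job into three components: a Pipeline for data loading and preprocessing, a Network for the model together with its loss and update ops, and an Estimator that ties the two into a training loop. As a hedged sketch of how the pieces fit together, the following mirrors the MNIST example from the FastEstimator documentation; the module paths follow the 1.x API and may differ from the version described in the paper.

import fastestimator as fe
from fastestimator.architecture.tensorflow import LeNet
from fastestimator.dataset.data import mnist
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

# Pipeline: data sources plus preprocessing ops.
train_data, eval_data = mnist.load_data()
pipeline = fe.Pipeline(train_data=train_data,
                       eval_data=eval_data,
                       batch_size=32,
                       ops=[ExpandDims(inputs="x", outputs="x"),
                            Minmax(inputs="x", outputs="x")])

# Network: the model and the ops that compute its loss and apply updates.
model = fe.build(model_fn=LeNet, optimizer_fn="adam")
network = fe.Network(ops=[ModelOp(model=model, inputs="x", outputs="y_pred"),
                          CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
                          UpdateOp(model=model, loss_name="ce")])

# Estimator: ties pipeline and network into a training loop.
estimator = fe.Estimator(pipeline=pipeline, network=network, epochs=2)
estimator.fit()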
1 Citation
To Raise or Not To Raise: The Autonomous Learning Rate Question
TLDR: This work proposes a new answer to the great learning rate question: the Autonomous Learning Rate Controller, which can be controlled either by the user or by the system itself.

References

Showing 1-10 of 14 references
Ludwig: a type-based declarative deep learning toolbox
TLDR: Ludwig is a flexible, extensible, and easy-to-use toolbox that lets users train deep learning models and obtain predictions from them without writing code; it introduces a general modularized deep learning architecture called Encoder-Combiner-Decoder that can be instantiated to perform a wide range of machine learning tasks.
Automatic differentiation in PyTorch
TLDR: Describes PyTorch's automatic differentiation module, part of a library designed to enable rapid research on machine learning models; it differentiates purely imperative programs while emphasizing extensibility and low overhead.
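To make "differentiation of purely imperative programs" concrete, here is a minimal, self-contained example using the public torch API: operations on tensors are recorded as they execute, and backward() replays them in reverse to compute gradients.

import torch

# requires_grad=True tells autograd to record every operation on x.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2, built line by line, imperatively

y.backward()   # reverse-mode differentiation through the recorded ops
print(x.grad)  # dy/dx = 2x -> tensor([4., 6.])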
Caffe: Convolutional Architecture for Fast Feature Embedding
TLDR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms, along with a collection of reference models, for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR: Describes a new training methodology for generative adversarial networks: training starts from a low resolution, and new layers that model increasingly fine details are added as training progresses, allowing for images of unprecedented quality.
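The mechanism behind "adding new layers" is a fade-in: when a higher-resolution stage is introduced, its output is blended with the upsampled output of the previous stage while a weight alpha ramps linearly from 0 to 1. A minimal PyTorch sketch of that blending (the modules below are hypothetical stand-ins, not the paper's code):

import torch
import torch.nn as nn
import torch.nn.functional as F

old_block = nn.Conv2d(16, 16, 3, padding=1)  # existing low-resolution stage
new_block = nn.Sequential(nn.Upsample(scale_factor=2),
                          nn.Conv2d(16, 16, 3, padding=1))  # newly added stage
to_rgb_old = nn.Conv2d(16, 3, 1)
to_rgb_new = nn.Conv2d(16, 3, 1)

def faded_forward(x, alpha):
    # Blend the new high-resolution path with the upsampled old path;
    # alpha ramps 0 -> 1 while the new layers fade in.
    low = F.interpolate(to_rgb_old(old_block(x)), scale_factor=2)
    high = to_rgb_new(new_block(old_block(x)))
    return (1 - alpha) * low + alpha * high

out = faded_forward(torch.randn(1, 16, 8, 8), alpha=0.3)  # shape (1, 3, 16, 16)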
MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
TLDR: Describes the API design and system implementation of MXNet, and explains how symbolic expressions and tensor operations are embedded and handled in a unified fashion.
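A short sketch of the two programming styles MXNet unifies, using its public NDArray and Symbol APIs: imperative operations execute immediately, while symbolic graphs are declared first and bound to data later.

import mxnet as mx

# Imperative style: NDArray operations run eagerly, like NumPy.
a = mx.nd.ones((2, 3))
b = a * 2 + 1

# Symbolic style: declare the computation graph first, bind data later.
data = mx.sym.Variable("data")
net = mx.sym.FullyConnected(data=data, num_hidden=10)
print(net.list_arguments())  # ['data', 'fullyconnected0_weight', 'fullyconnected0_bias']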
CNTK: Microsoft's Open-Source Deep-Learning Toolkit
TLDR: This tutorial introduces the Computational Network Toolkit (CNTK), Microsoft's open-source deep-learning toolkit for Windows and Linux, and shows what typical use looks like for tasks such as image recognition, sequence-to-sequence modeling, and speech recognition.
Enhanced Deep Residual Networks for Single Image Super-Resolution
TLDR: This paper develops an enhanced deep super-resolution network (EDSR) whose performance exceeds that of current state-of-the-art SR methods, and proposes a new multi-scale deep super-resolution system (MDSR) and training method that can reconstruct high-resolution images at different upscaling factors within a single model.
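Two of EDSR's signature design choices are removing batch normalization from the residual blocks and scaling the residual branch by a small constant (0.1 in the paper) to stabilize training of very wide models. A minimal PyTorch sketch of such a block (class and argument names are illustrative):

import torch
import torch.nn as nn

class EDSRResBlock(nn.Module):
    # EDSR-style residual block: no batch normalization, and the
    # residual branch is scaled by a small constant before addition.
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)

out = EDSRResBlock()(torch.randn(1, 64, 32, 32))  # shape preserved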
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR: This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that satisfy certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
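The "architectural constraints" include replacing pooling with (fractionally-)strided convolutions, using batch normalization throughout, removing fully connected hidden layers, and pairing ReLU activations in the generator with a tanh output. A minimal generator for 64x64 images in that style (the layer widths are illustrative, not the paper's exact configuration):

import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, stride=1, padding=0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1, bias=False),  # 4x4 -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1, bias=False),  # 8x8 -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, bias=False),   # 16x16 -> 32x32
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1, bias=False),     # 32x32 -> 64x64
    nn.Tanh())

img = generator(torch.randn(1, 100, 1, 1))  # latent vector -> (1, 3, 64, 64) image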
Explaining and Harnessing Adversarial Examples
TLDR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and yields the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
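The linearity argument leads directly to the paper's fast gradient sign method: if the loss is locally near-linear in the input, a single step of size epsilon along the sign of the input gradient is enough to flip predictions. A minimal sketch (the function name and the [0, 1] clamp on the input range are assumptions):

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # One gradient-sign step on the input; the model weights are untouched.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()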
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
TLDR: This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, introducing a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
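The cycle consistency term is an L1 penalty on round-trip reconstruction: with G mapping X to Y and F mapping Y to X, translating to the other domain and back should recover the input. A minimal sketch (the weight lam=10.0 matches the paper's default; the function name is illustrative):

import torch.nn as nn

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    # L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, weighted by lam.
    l1 = nn.L1Loss()
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))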