Corpus ID: 141501576

AdaNet: A Scalable and Flexible Framework for Automatically Learning Ensembles

@article{Weill2019AdaNetAS,
  title={AdaNet: A Scalable and Flexible Framework for Automatically Learning Ensembles},
  author={Charles Weill and Javier Gonzalvo and Vitaly Kuznetsov and Scott Yang and Scott Yak and Hanna Mazzawi and Eugen Hotaj and Ghassen Jerfel and Vladimir Macko and Ben Adlam and Mehryar Mohri and Corinna Cortes},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.00080}
}
AdaNet is a lightweight TensorFlow-based (Abadi et al., 2015) framework for automatically learning high-quality ensembles with minimal expert intervention. Our framework is inspired by the AdaNet algorithm (Cortes et al., 2017), which learns the structure of a neural network as an ensemble of subnetworks. We designed it to: (1) integrate with the existing TensorFlow ecosystem, (2) offer sensible default search spaces to perform well on novel datasets, (3) present a flexible API to utilize expert…
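To make the estimator-style API concrete, here is a minimal sketch of an ensemble search using the library's AutoEnsembleEstimator. The candidate pool, feature shape, head, and step counts are illustrative assumptions, not values from the paper, and exact signatures may vary across adanet and TensorFlow versions.

import adanet
import tensorflow.compat.v1 as tf

# Hypothetical input pipeline: 64-dimensional random features and binary
# labels, purely for illustration.
def train_input_fn():
    features = {"x": tf.random.normal([32, 64])}
    labels = tf.random.uniform([32, 1], maxval=2, dtype=tf.int32)
    return tf.data.Dataset.from_tensors((features, labels)).repeat()

feature_columns = [tf.feature_column.numeric_column("x", shape=[64])]
head = tf.estimator.BinaryClassHead()

# The search space is defined entirely by the candidate pool: at each
# AdaNet iteration, every candidate is trained and the one that most
# improves the AdaNet objective is added to the ensemble.
estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool=lambda config: {
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns, config=config),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns,
            hidden_units=[128, 64], config=config),
    },
    max_iteration_steps=1000)  # steps per AdaNet iteration; an assumed value

estimator.train(input_fn=train_input_fn, max_steps=5000)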
Citations

Techniques for Automated Machine Learning
TLDR
This paper portrays AutoML as a bi-level optimization problem, in which one problem is nested within another to search the space for an optimal configuration, and reviews current developments in AutoML across three categories: automated feature engineering (AutoFE), automated model and hyperparameter tuning (AutoMHT), and automated deep learning (AutoDL).
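The bi-level view admits a compact statement; the formulation below is the standard one for hyperparameter and architecture search, not an equation quoted from the paper, with $\lambda$ ranging over the search space $\Lambda$ and $w$ over model weights:

\min_{\lambda \in \Lambda} \; \mathcal{L}_{\mathrm{val}}\big(w^{*}(\lambda), \lambda\big) \quad \text{subject to} \quad w^{*}(\lambda) \in \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(w, \lambda)

The outer problem searches over configurations, while the nested inner problem fits model weights under each candidate configuration.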
Functional Gradient Boosting for Learning Residual-like Networks with Statistical Guarantees
TLDR
A new functional gradient boosting method for learning deep residual-like networks in a layer-wise fashion, with statistical guarantees on multi-class classification tasks; it is shown that the existence of a learnable function with a large margin on a training dataset significantly improves a generalization bound.
Optimizing ANN Architecture using Mixed-Integer Programming
Over-parameterized networks, where the number of parameters surpasses the number of training samples, generalize well on various tasks. However, large networks are computationally expensive in terms of…
Efficient OCT Image Segmentation Using Neural Architecture Search
TLDR
The experimental results demonstrate that the self-adapting NAS-Unet architecture substantially outperformed the competitive human-designed architecture, achieving 95.4% mean Intersection over Union and a 78.7% Dice similarity coefficient.
Grid Search, Random Search, Genetic Algorithm: A Big Comparison for NAS
In this paper, we compare the three most popular algorithms for hyperparameter optimization (Grid Search, Random Search, and Genetic Algorithm) and attempt to use them for neural architecture search…
Human activity recognition with AutoML using smartphone radio data
TLDR
Google's AutoML Tables service was used to train an artificial neural network model with the AdaNet algorithm; the resulting model recognizes classes with a precision of 81.2% and a recall of 78.2%.
Locomotion and Transportation Mode Recognition from GPS and Radio Signals: Summary of SHL Challenge 2021
TLDR
The overall performance based on GPS and radio sensors is lower than the performance achieved by motion sensors in previous challenges (SHL 2018-2020) and a baseline implementation is presented to help understand the contribution of each sensor modality to the recognition task.
In-code citation practices in open research software libraries
TLDR
This study investigates the availability of a systematic method for linking references and in-code citations; only six of the libraries investigated had such a method, and many of those did not fully implement it.
Adaptive Partial Scanning Transmission Electron Microscopy with Reinforcement Learning
  • Jeffrey M. Ede
  • Computer Science, Engineering
  • Machine Learning: Science and Technology
  • 2021
TLDR
This work has extended recurrent deterministic policy gradients to train deep LSTMs and differentiable neural computers to adaptively sample scan path segments and shows that this approach outperforms established algorithms based on spiral scans.
Advances in Electron Microscopy with Deep Learning
TLDR
A review of advances in electron microscopy with deep learning, covering the building blocks of deep learning and partial scanning transmission electron microscopy.

References

Showing 1-10 of 21 references.
TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
TLDR
To make out-of-the-box models flexible and usable across a wide range of problems, these canned Estimators are parameterized not only over traditional hyperparameters, but also using feature columns, a declarative specification describing how to interpret input data.
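As a reminder of what that declarative specification looks like, the fragment below builds two hypothetical feature columns and hands them to a canned Estimator; the column names, vocabulary, and layer sizes are invented for illustration.

import tensorflow.compat.v1 as tf

# Declarative descriptions of how raw inputs should be interpreted;
# the Estimator consumes these specs instead of hand-written parsing code.
age = tf.feature_column.numeric_column("age")
city = tf.feature_column.categorical_column_with_vocabulary_list(
    "city", vocabulary_list=["NYC", "SF", "ZRH"])
city_onehot = tf.feature_column.indicator_column(city)

# A canned Estimator is parameterized over traditional hyperparameters
# (hidden_units) and over the feature columns above.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, city_onehot],
    hidden_units=[32, 16])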
Towards Automatically-Tuned Deep Neural Networks
TLDR
Two versions of Auto-Net are presented, which provide automatically-tuned deep neural networks without any human intervention; empirical results show that ensembling Auto-Net 1.0 with Auto-sklearn can perform better than either approach alone, and that Auto-Net 2.0 can perform better still.
Deep Boosting
We present a new ensemble learning algorithm, DeepBoost, which can use as base classifiers a hypothesis set containing deep decision trees, or members of other rich or complex families, and succeed…
AdaNet: Adaptive Structural Learning of Artificial Neural Networks
TLDR
The results demonstrate that the AdaNet algorithm can automatically learn network structures whose accuracies are very competitive with those of neural networks found by standard approaches.
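For context, the objective that AdaNet optimizes couples the ensemble's training loss with a per-subnetwork complexity penalty; the statement below paraphrases Cortes et al. (2017), with $w_j$ the mixture weight of subnetwork $h_j$, $r_j$ its complexity measure, $\Phi$ a convex surrogate loss, and $\lambda, \beta \ge 0$ hyperparameters:

F(w) = \frac{1}{m} \sum_{i=1}^{m} \Phi\Big(1 - y_i \sum_{j=1}^{N} w_j\, h_j(x_i)\Big) + \sum_{j=1}^{N} \big(\lambda\, r_j + \beta\big)\, |w_j|

A candidate subnetwork is only kept if it reduces the loss by more than its complexity penalty, which is how the algorithm trades accuracy against structure.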
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
TLDR
Overall, users find the TensorFlow Graph Visualizer useful for understanding, debugging, and sharing the structures of their models.
Ensemble Methods: Foundations and Algorithms
Ensemble methods train multiple learners and then combine them for use. They have become a hot topic in academia since the 1990s, and are enjoying increased attention in industry. This is mainly based…
Born Again Neural Networks
TLDR
This work studies knowledge distillation (KD) from a new perspective: rather than compressing models, students are parameterized identically to their teachers, and significant advantages are shown from transferring knowledge between DenseNets and ResNets in either direction.
Learning Transferable Architectures for Scalable Image Recognition
TLDR
This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.
Distilling the Knowledge in a Neural Network
TLDR
This work shows that it can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
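As a sketch of the underlying mechanism, the function below implements the standard soft-target distillation loss: the student matches temperature-softened teacher probabilities in addition to the ground-truth labels. The temperature and mixing weight are assumed values, not ones reported in the paper.

import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Temperature-softened teacher targets and student predictions.
    soft_teacher = tf.nn.softmax(teacher_logits / temperature)
    log_soft_student = tf.nn.log_softmax(student_logits / temperature)
    # Cross-entropy against the soft targets, scaled by T^2 so its
    # gradient magnitude matches the hard-label term.
    soft_loss = -temperature ** 2 * tf.reduce_mean(
        tf.reduce_sum(soft_teacher * log_soft_student, axis=-1))
    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=student_logits))
    return alpha * hard_loss + (1.0 - alpha) * soft_loss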