Corpus ID: 14257979

Active Learning of Hyperparameters: An Expected Cross Entropy Criterion for Active Model Selection

@article{Kulick2014ActiveLO,
  title={Active Learning of Hyperparameters: An Expected Cross Entropy Criterion for Active Model Selection},
  author={Johannes Kulick and Robert Lieck and Marc Toussaint},
  journal={ArXiv},
  year={2014},
  volume={abs/1409.7552}
}
In standard active learning, the learner’s goal is to reduce the predictive uncertainty with as little data as possible. We consider a slightly different problem: the learner’s goal is to uncover latent properties of the model, e.g., which features are relevant ("active feature selection") or the choice of hyperparameters, with as little data as possible. While the two goals are clearly related, we give examples where following the predictive uncertainty objective is suboptimal for uncovering…
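As a rough illustration of the paper's expected cross-entropy idea, the sketch below selects the query point whose (as yet unobserved) outcome is expected to shift the posterior over candidate models the most, measured by the cross entropy between the updated and current model posteriors. The model interface (log_marginal_likelihood, predictive_density) and the discretized outcome grid y_grid are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' released code) of an expected
# cross-entropy criterion for active model selection. Assumptions:
#   * a small discrete set of candidate models (e.g., hyperparameter settings),
#   * each model exposes hypothetical log_marginal_likelihood(X, y) and
#     predictive_density(x_new, y_grid, X, y) methods,
#   * the outcome space is discretized on y_grid for simplicity.

def model_posterior(models, X, y):
    """Posterior p(m | D) over candidate models, from marginal likelihoods."""
    log_post = np.array([m.log_marginal_likelihood(X, y) for m in models])
    log_post -= log_post.max()                # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

def expected_cross_entropy(models, X, y, x_new, y_grid):
    """Expected cross entropy between the updated and current model posteriors
    if x_new were queried, averaged over plausible outcomes y'."""
    p_m = model_posterior(models, X, y)
    # Predictive mixture p(y' | x_new, D) = sum_m p(y' | x_new, D, m) p(m | D)
    pred = np.array([m.predictive_density(x_new, y_grid, X, y) for m in models])
    mix = p_m @ pred
    mix = mix / mix.sum()                     # normalize over the discretized grid
    ece = 0.0
    for weight, y_new in zip(mix, y_grid):
        X_aug = np.vstack([X, x_new])
        y_aug = np.append(y, y_new)
        p_m_new = model_posterior(models, X_aug, y_aug)
        # Cross entropy H(p_new, p_old) = -sum_m p_new(m) log p_old(m)
        ece += weight * -np.sum(p_m_new * np.log(p_m + 1e-12))
    return ece

# Query selection: pick the candidate input maximizing the criterion, e.g.
#   x_star = max(candidates, key=lambda x: expected_cross_entropy(models, X, y, x, y_grid))
```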

Citations

A Novel Active Learning Regression Framework for Balancing the Exploration-Exploitation Trade-Off
TLDR
This work develops a novel active learning framework that aims to solve a general class of optimization problems, and applies the proposed framework to the problem of learning the price-demand function, an application that is important in optimal product pricing and dynamic pricing.
Active Structure Discovery for Gaussian Processes
TLDR
A novel information-theoretic approach for active model selection that does not require model retraining to evaluate candidate points, making it more feasible than previous approaches.
Active Model Selection for Positive Unlabeled Time Series Classification
TLDR
This paper focuses on the widely adopted self-training one-nearest-neighbor (ST-1NN) paradigm, and proposes a model selection framework based on active learning (AL), which develops an effective model performance evaluation strategy and three AL sampling strategies.
Bayesian Active Model Selection with an Application to Automated Audiometry
TLDR
A novel information-theoretic approach for active model selection is introduced and shown to be capable of diagnosing the presence or absence of NIHL with drastically fewer samples than existing approaches and enables the diagnosis to be performed in real time.
Active exploration of joint dependency structures
TLDR
A probabilistic model for joint dependency structures is developed as the basis for active learning, and joint dependency structures are explored efficiently with the proposed maximum cross-entropy (MaxCE) exploration strategy.
Bayesian adaptive stimulus selection for dissociating models of psychophysical data
TLDR
It is shown that selecting stimuli adaptively could have led to stronger conclusions in model comparison, and that the psi-algorithm is more efficient and more reliable than current methods of stimulus selection for dissociating models.
Machine learning through exploration for perception-driven robotics
TLDR
This thesis proposes a robot reinforcement learning algorithm with learned non-parametric models, value-based functions, and policies that can deal with high-dimensional state representations, and investigates multiple approaches that allow a robot to explore its environment autonomously, while trying to minimize the design effort required to deploy such algorithms in different situations.
Automating Active Learning for Gaussian Processes
Robots Solving Serial Means-Means-End Problems
TLDR
A project is described that models the behavior of Goffin’s cockatoos and recreates the experiment on a robotic platform; preliminary results suggest that the serial structure of the problem does not require much planning.
The strategy of traffic congestion management based on case-based reasoning
TLDR
The cases indicate that traffic congestion management can quickly find solutions to congestion problems by computing the similarity between congestion cases through CBR, and they show that this method improves the accuracy of CBR results and offers useful guidance for traffic management.

References

SHOWING 1-10 OF 30 REFERENCES
Bayesian Active Learning for Classification and Preference Learning
TLDR
This work proposes an approach that expresses information gain in terms of predictive entropies, applies this method to the Gaussian Process Classifier (GPC), and makes minimal approximations to the full information-theoretic objective.
Active Learning
The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose…
Active Learning with Model Selection in Linear Regression
TLDR
This paper proposes a new approach called ensemble active learning for solving the problems of active learning and model selection at the same time, and demonstrates by numerical experiments that the proposed method compares favorably with alternative approaches such as iteratively performing active learning and model selection in a sequential manner.
Gaussian Processes for Machine Learning
TLDR
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
Multimodel Inference
TLDR
Various facets of such multimodel inference are presented here, particularly methods of model averaging, which can be derived as a non-Bayesian result.
Multiple-Instance Active Learning
TLDR
The experiments show that learning from instance labels can significantly improve performance of a basic MI learning algorithm in two multiple-instance domains: content-based image retrieval and text classification.
Employing EM and Pool-Based Active Learning for Text Classification
TLDR
This paper shows how a text classifier’s need for labeled training data can be reduced by a combination of active learning and Expectation Maximization on a pool of unlabeled data, and presents a metric for better measuring disagreement among committee members.
A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
TLDR
The results indicate that for real-world datasets similar to the authors’, the best method to use for model selection is tenfold stratified cross-validation, even if computation power allows using more folds.
An Exact Algorithm for Maximum Entropy Sampling
TLDR
An upper bound for the entropy is established, based on the eigenvalue interlacing property, and incorporated in a branch-and-bound algorithm for the exact solution of the experimental design problem of selecting a most informative subset, having prespecified size, from a set of correlated random variables.
Active Learning with Statistical Models
TLDR
This work shows how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression.