Learning a Policy for Opportunistic Active Learning

@inproceedings{Padmakumar2018LearningAP,
  title={Learning a Policy for Opportunistic Active Learning},
  author={Aishwarya Padmakumar and Peter Stone and Raymond J. Mooney},
  booktitle={EMNLP},
  year={2018}
}
Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task… 
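
As a rough illustration of the setting the abstract describes, the following minimal sketch (not the paper's implementation; the threshold policy, the TinyClassifier, and the toy data are illustrative assumptions) shows one interaction in which an agent repeatedly decides whether to query a label for one of the currently available objects or to stop and guess the target.

# Illustrative sketch only: an opportunistic active-learning episode in an
# object-retrieval interaction. All names and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyClassifier:
    """Logistic scorer updated online from labels gathered during the dialog."""
    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.lr = lr

    def score(self, x):
        return sigmoid(self.w @ x)

    def update(self, x, y):
        self.w += self.lr * (y - self.score(x)) * x

def threshold_policy(min_margin, turn, max_turns, margin=0.2):
    """Hand-written stand-in for a learned policy: keep querying while uncertain."""
    return "query" if (min_margin < margin and turn < max_turns - 1) else "guess"

def episode(objects, labels, clf, max_turns=5):
    for turn in range(max_turns):
        scores = np.array([clf.score(x) for x in objects])
        margins = np.abs(scores - 0.5)
        if threshold_policy(margins.min(), turn, max_turns) == "query":
            i = int(margins.argmin())            # most uncertain object this turn
            clf.update(objects[i], labels[i])    # stand-in for asking the user
        else:
            break
    return int(np.argmax([clf.score(x) for x in objects]))   # final guess

# Toy episode: four objects with 3-d features; object 2 is the true target.
objects = rng.normal(size=(4, 3))
labels = np.array([0, 0, 1, 0])
print("guessed object:", episode(objects, labels, TinyClassifier(dim=3)))

In the paper itself, the query-or-guess decision is learned with reinforcement learning rather than hard-coded; the threshold_policy stub above only stands in for that learned policy.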

Citations

Dialog Policy Learning for Joint Clarification and Active Learning Queries

TLDR
This work trains a hierarchical dialog policy to jointly perform clarification and active learning in the context of an interactive language-based image retrieval task motivated by an online shopping application, and demonstrates that jointly learning dialog policies for clarification and active learning is more effective than using static dialog policies.

Sampling Approach Matters: Active Learning for Robotic Language Acquisition

TLDR
It is observed that representativeness, along with diversity, is crucial in selecting data samples, and a method for analyzing the complexity of data in this joint problem space is presented.
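
The representativeness-plus-diversity criterion mentioned above can be made concrete with a small generic sketch (my own illustration, not that paper's method; the scoring functions and the alpha trade-off are assumptions): score each candidate by its average similarity to the pool and by its distance to the samples already selected, then greedily pick the best combination.

# Generic representativeness + diversity batch selection (illustrative only).
import numpy as np

def select_batch(pool, k, alpha=0.5):
    """Greedily pick k samples scoring high on representativeness and diversity.

    pool  : (n, d) array of candidate feature vectors
    alpha : trade-off between the two criteria (an assumption for this sketch)
    """
    n = len(pool)
    normed = pool / (np.linalg.norm(pool, axis=1, keepdims=True) + 1e-12)
    representativeness = (normed @ normed.T).mean(axis=1)   # mean cosine similarity to pool

    selected = []
    for _ in range(k):
        if selected:
            dists = np.linalg.norm(pool[:, None, :] - pool[selected][None, :, :], axis=2)
            diversity = dists.min(axis=1)     # distance to nearest already-selected sample
        else:
            diversity = np.ones(n)
        score = alpha * representativeness + (1 - alpha) * diversity
        score[selected] = -np.inf             # never reselect
        selected.append(int(score.argmax()))
    return selected

pool = np.random.default_rng(1).normal(size=(50, 8))
print(select_batch(pool, k=5))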

Improved Models and Queries for Grounded Human-Robot Dialog

TLDR
This work presents completed work on jointly learning a dialog policy that enables a robot to clarify partially understood natural language commands, while simultaneously using the dialogs to improve the underlying semantic parser for future commands.

Reinforced active learning for image segmentation

TLDR
This paper proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems, and finds that its method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.

A Reinforcement One-Shot Active Learning Approach for Aircraft Type Recognition

TLDR
An improved active learning approach, which can not only learn to classify samples using little supervision but also capture a relatively optimal label query strategy, was developed by employing reinforcement learning in the decision-making process.

Dialog as a Vehicle for Lifelong Learning

TLDR
The problem of designing dialog systems that enable lifelong learning is presented as an important challenge problem, in particular for applications involving physically situated robots.

Deep reinforced active learning for multi-class image classification

TLDR
A new active learning framework based on deep reinforcement learning is presented that learns a query strategy for labeling images, based on predictions from a convolutional neural network, in order to maximise model performance on a minimal subset of a larger pool of data.

Using Pre-trained Language Model to Enhance Active Learning for Sentence Matching

TLDR
This article provides extra textual criteria with the pre-trained language model to measure instances, including noise, coverage, and diversity, and shows that the proposed active learning approach can be enhanced by the pre-trained language model to obtain better performance.

Correcting Robot Plans with Natural Language Feedback

TLDR
This paper describes how to map from natural language sentences to transformations of cost functions and shows that these transformations enable users to correct goals, update robot motions to accommodate additional user preferences, and recover from planning errors.

Learning to Sample the Most Useful Training Patches from Images

TLDR
This paper presents a data-driven approach called PatchNet that learns to select the most useful patches from an image to construct a new training set, instead of manual or random selection, and shows that this simple idea automatically selects informative samples from a large-scale dataset, leading to a surprising 2.35 dB generalisation gain in terms of PSNR.

References

SHOWING 1-10 OF 40 REFERENCES

Learning how to Active Learn: A Deep Reinforcement Learning Approach

TLDR
A novel formulation of active learning is introduced by reframing active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic.
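
A stripped-down illustration of that reframing (my own toy sketch, not the cited paper's system): each acquisition step is an RL action choosing which pool example to label, the reward is the resulting gain in held-out accuracy, and the selection policy is updated with REINFORCE over simple margin features.

# Illustrative sketch of learning a data-selection policy with REINFORCE.
# Everything here (logistic task model, margin features, learning rates) is an
# assumption made for the example, not the cited paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_task_model(X, y, steps=200, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w += lr * X.T @ (y - sigmoid(X @ w)) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# Toy data: two Gaussian blobs split into a labeling pool and a validation set.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])
perm = rng.permutation(200)
X, y = X[perm], y[perm]
X_pool, y_pool, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

theta = np.zeros(2)   # policy weights over features [prediction margin, bias]
for _ in range(30):   # episodes
    labeled = list(rng.choice(150, size=5, replace=False))
    w = train_task_model(X_pool[labeled], y_pool[labeled])
    base_acc = accuracy(w, X_val, y_val)
    grads, rewards = [], []
    for _ in range(10):                       # acquisition steps per episode
        unlabeled = [i for i in range(150) if i not in labeled]
        margins = np.abs(sigmoid(X_pool[unlabeled] @ w) - 0.5)
        feats = np.stack([margins, np.ones(len(unlabeled))], axis=1)
        logits = feats @ theta
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        j = rng.choice(len(unlabeled), p=probs)   # sample an example to label
        labeled.append(unlabeled[j])
        w = train_task_model(X_pool[labeled], y_pool[labeled])
        new_acc = accuracy(w, X_val, y_val)
        grads.append(feats[j] - probs @ feats)    # grad of log-probability
        rewards.append(new_acc - base_acc)        # reward = accuracy gain
        base_acc = new_acc
    returns = np.cumsum(rewards[::-1])[::-1]      # reward-to-go
    theta += 0.1 * np.sum([g * r for g, r in zip(grads, returns)], axis=0)
print("learned policy weights:", theta)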

Learning Algorithms for Active Learning

TLDR
A model that learns active learning algorithms via meta-learning jointly learns a data representation, an item selection heuristic, and a prediction function for a distribution of related tasks.

Active Learning Literature Survey

TLDR
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.

An Analysis of Active Learning Strategies for Sequence Labeling Tasks

TLDR
This paper surveys previously used query selection strategies for sequence models, and proposes several novel algorithms to address their shortcomings, and conducts a large-scale empirical comparison.

Active One-shot Learning

TLDR
A recurrent neural network based action-value function is presented, and its ability to learn how and when to request labels is demonstrated; the model can achieve higher prediction accuracy than a similar model on a purely supervised task, or trade prediction accuracy for fewer label requests.

Toward Optimal Active Learning through Sampling Estimation of Error Reduction

This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size.
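
A generic, illustrative version of that idea (assumptions for this sketch: a tiny logistic model and expected 0/1 loss over the unlabeled pool as the error proxy; not that paper's exact estimator): for each candidate query, average the retrained model's error over the candidate's possible labels, weighted by the current model's posterior, and query the candidate with the lowest expected error.

# Expected error reduction, illustrative sketch with a tiny logistic model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, steps=200, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w += lr * X.T @ (y - sigmoid(X @ w)) / len(y)
    return w

def expected_future_error(X_lab, y_lab, X_unl, i):
    """Expected pool error after hypothetically labeling candidate X_unl[i]."""
    w = fit(X_lab, y_lab)
    p_pos = sigmoid(X_unl[i] @ w)           # current belief about the candidate
    total = 0.0
    for y_hyp, prob in ((1.0, p_pos), (0.0, 1.0 - p_pos)):
        w_new = fit(np.vstack([X_lab, X_unl[i]]), np.append(y_lab, y_hyp))
        p = sigmoid(X_unl @ w_new)
        total += prob * np.sum(np.minimum(p, 1.0 - p))   # expected 0/1 error proxy
    return total

def select_query(X_lab, y_lab, X_unl):
    errors = [expected_future_error(X_lab, y_lab, X_unl, i) for i in range(len(X_unl))]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
X_unl = rng.normal(size=(20, 2))
X_lab = np.array([[-1.0, -1.0], [1.0, 1.0]])
y_lab = np.array([0.0, 1.0])
print("query index:", select_query(X_lab, y_lab, X_unl))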

Active Learning for Multi-Label Image Annotation

TLDR
This paper reports results on a learning system for labeling personal image collections that is both active and multi-label, and aims to reduce the overall number of images that are presented to the user for labeling.

Active Learning for Natural Language Parsing and Information Extraction

TLDR
It is shown that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance for these complex tasks: semantic parsing and information extraction.

Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

TLDR
This work learns a single model to jointly reason about linguistic and visual input in a contextual bandit setting to train a neural network agent and shows significant improvements over supervised learning and common reinforcement learning variants.

Evidence-based uncertainty sampling for active learning

TLDR
An evidence-based framework is presented that can uncover the reasons why a model is uncertain on a given instance, and two reasons for uncertainty are discussed: a model can be uncertain about an instance because it has strong but conflicting evidence for both classes, or it can be uncertain because it does not have enough evidence for either class.
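
The two kinds of uncertainty described above can be illustrated for a linear model with a small sketch (my own illustration; decomposing evidence by the sign of each feature contribution and the threshold value are assumptions, not the cited framework's exact formulation).

# Illustrative sketch: distinguish conflicting-evidence from insufficient-evidence
# uncertainty for a linear classifier. Names and thresholds are assumptions.
import numpy as np

def evidence(w, x):
    """Split the linear score w.x into evidence for each class."""
    contrib = w * x
    pos = contrib[contrib > 0].sum()        # evidence pushing toward class 1
    neg = -contrib[contrib < 0].sum()       # evidence pushing toward class 0
    return pos, neg

def kind_of_uncertainty(w, x, strong=1.0):
    pos, neg = evidence(w, x)
    if pos > strong and neg > strong:
        return "conflicting evidence"
    if pos < strong and neg < strong:
        return "insufficient evidence"
    return "not strongly uncertain"

w = np.array([2.0, -2.0, 0.1, -0.1])
print(kind_of_uncertainty(w, np.array([1.0, 1.0, 0.0, 0.0])))   # conflicting
print(kind_of_uncertainty(w, np.array([0.0, 0.0, 1.0, 1.0])))   # insufficient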