Active Learning Helps Pretrained Models Learn the Intended Task

@article{Tamkin2022ActiveLH,
  title={Active Learning Helps Pretrained Models Learn the Intended Task},
  author={Alex Tamkin and Dat Nguyen and Salil Deshpande and Jesse Mu and Noah D. Goodman},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.08491}
}
Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data. An example is an object classifier trained on red squares and blue circles: when encountering blue squares, the intended behavior is undefined. We investigate whether pretrained models are better active learners, capable of disambiguating between the possible tasks a user may be trying to specify. Intriguingly, we find that better active learning is… 
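
To make the failure mode concrete, here is a minimal sketch (our illustration, not code from the paper) of the red-square/blue-circle ambiguity and of how an uncertainty-based acquisition rule would surface the ambiguous inputs. The two-bit feature encoding, the logistic-regression model, and the margin-based query rule are all illustrative assumptions.

```python
# Sketch of task ambiguity: training data underdetermines the task,
# and uncertainty sampling queries exactly the disambiguating points.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_examples(color, shape, n):
    """n noisy copies of an object encoded as (color, shape) bits."""
    x = np.tile([float(color), float(shape)], (n, 1))
    return x + rng.normal(0.0, 0.05, x.shape)

# Training data covers only red squares (label 1) and blue circles (label 0),
# so "classify by color" and "classify by shape" are equally consistent with it.
X_train = np.vstack([make_examples(1, 1, 50), make_examples(0, 0, 50)])
y_train = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X_train, y_train)

# The unlabeled pool mixes seen combinations with ambiguous off-diagonal ones.
pool = np.vstack([
    make_examples(1, 1, 5),  # red squares   (seen,      indices 0-4)
    make_examples(0, 0, 5),  # blue circles  (seen,      indices 5-9)
    make_examples(0, 1, 5),  # blue squares  (ambiguous, indices 10-14)
    make_examples(1, 0, 5),  # red circles   (ambiguous, indices 15-19)
])
probs = clf.predict_proba(pool)[:, 1]

# Uncertainty sampling queries the pool points closest to p = 0.5; these are
# the off-diagonal examples, so a single user label disambiguates the
# color task from the shape task.
query_order = np.argsort(np.abs(probs - 0.5))
print("top queries (expect indices >= 10):", query_order[:4])
print("their p(label=1):", probs[query_order[:4]].round(2))
```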

Plex: Towards Reliability using Pretrained Large Model Extensions

TLDR
This work explores model reliability, defining a reliable model as one that not only achieves strong predictive performance but also performs consistently well across many decision-making tasks involving uncertainty, robust generalization, and adaptation.

Multi-Domain Active Learning: Literature Review and Comparative Study

TLDR
This work constructs a multi-domain active learning (MDAL) pipeline and presents a comprehensive comparative study of thirty algorithms, established by combining six representative multi-domain learning (MDL) models with commonly used AL strategies, and qualitatively analyzes the behavior of the well-performing strategies and models.

Selective Annotation Makes Language Models Better Few-Shot Learners

TLDR
It is shown that the effectiveness of vote-k is consistent across different language model sizes and across domain shifts between training and test data, and that it will help researchers and practitioners design new natural language tasks and beyond.

References


Practical Obstacles to Deploying Active Learning

TLDR
It is shown that while AL may provide benefits when used with specific models and for particular domains, the benefits of current approaches do not generalize reliably across models and tasks.

Bayesian Active Learning with Pretrained Language Models

TLDR
BALM (Bayesian Active Learning with pretrained language Models) provides substantial data-efficiency improvements compared to various combinations of acquisition functions, models, and fine-tuning methods proposed in the recent AL literature.

Probabilistic Model-Agnostic Meta-Learning

TLDR
This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active learning problems.

Alignment for Advanced Machine Learning Systems

TLDR
This research proposal focuses on two major technical obstacles to AI alignment: specifying the right kind of objective functions, and designing AI systems that avoid unintended consequences and undesirable behavior even when the objective function does not line up perfectly with the designers' intentions.

True Few-Shot Learning with Language Models

TLDR
This work evaluates the few-shot ability of LMs when such held-out examples are unavailable, a setting the authors call true few-shot learning, and suggests that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.

Active Learning Literature Survey

TLDR
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.

Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering

TLDR
It is shown that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases, and prescriptive recommendations for mitigating the effects of these outliers are made.

Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration

TLDR
This paper proposes a model-agnostic methodology that uses feedback from an oracle both to identify unknown unknowns and to intelligently guide their discovery, employing a two-phase approach that first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model.

Cold-start Active Learning through Self-Supervised Language Modeling

TLDR
With BERT, a simple strategy based on the masked language modeling loss is developed that minimizes labeling costs for text classification and reaches higher accuracy within fewer sampling iterations and less computation time.
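
As a rough illustration of this cold-start idea, the sketch below scores unlabeled texts by their masked language modeling loss under an off-the-shelf BERT and proposes labeling the highest-loss examples first. This is a simplifying assumption for brevity, not the paper's exact algorithm, which works with per-token surprisal representations rather than a single scalar score.

```python
# Cold-start scoring sketch: rank unlabeled texts by MLM loss under
# pretrained BERT (an assumption, not the paper's exact method).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mlm_loss(text, mask_prob=0.15, seed=0):
    """Average MLM loss over randomly masked tokens of one example."""
    enc = tok(text, return_tensors="pt", truncation=True)
    input_ids = enc["input_ids"].clone()
    labels = enc["input_ids"].clone()
    torch.manual_seed(seed)
    # Mask ordinary tokens with probability mask_prob; never mask specials.
    special = torch.tensor(
        tok.get_special_tokens_mask(input_ids[0].tolist(),
                                    already_has_special_tokens=True)
    ).bool()
    maskable = ~special
    chosen = maskable & (torch.rand(input_ids.shape[1]) < mask_prob)
    if not chosen.any():                       # ensure at least one mask
        chosen[maskable.nonzero()[0]] = True
    input_ids[0, chosen] = tok.mask_token_id
    labels[0, ~chosen] = -100                  # loss only on masked slots
    with torch.no_grad():
        out = model(input_ids=input_ids,
                    attention_mask=enc["attention_mask"], labels=labels)
    return out.loss.item()

unlabeled = ["the movie was great", "colorless green ideas sleep furiously"]
scores = sorted(((mlm_loss(t), t) for t in unlabeled), reverse=True)
print("label first:", scores[0][1])
```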

Beat the Machine: Challenging Workers to Find the Unknown Unknowns

TLDR
This work presents a system that, in a game-like setting, asks humans to identify cases that will cause a predictive-model-based system to fail, and incentivizes them to provide examples that are difficult for the model to handle by offering a reward proportional to the magnitude of the predictive model's error.
...