Toward Optimal Active Learning through Sampling Estimation of Error Reduction

Abstract

This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. Those methods are popular because, for many learning models, closed-form calculation of the expected future error is intractable. Our approach is made feasible by estimating the expected reduction in error due to the labeling of a query via sampling. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods.
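The selection rule described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes scikit-learn's LogisticRegression as the base learner and uses the average of (1 - max posterior) over the remaining unlabeled pool as a stand-in for expected future error; the function names (`expected_error_after_query`, `select_query`) are illustrative. For each candidate query and each possible label, the model is hypothetically retrained with that labeled example, the resulting pool loss is measured, and the losses are weighted by the current model's posterior over the labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_error_after_query(model_cls, X_lab, y_lab, X_pool, idx, labels):
    """Estimate expected future error if pool example `idx` were queried.

    Hedged sketch: loss is approximated as the mean (1 - max posterior)
    over the rest of the unlabeled pool, weighted by the current model's
    posterior over the candidate's possible labels.
    """
    x = X_pool[idx]
    rest = np.delete(X_pool, idx, axis=0)
    # Current model supplies the posterior P(y | x) used to weight outcomes.
    current = model_cls().fit(X_lab, y_lab)
    posterior = current.predict_proba(x.reshape(1, -1))[0]
    expected = 0.0
    for y, p_y in zip(labels, posterior):
        # Hypothetically add (x, y) to the labeled set and retrain.
        retrained = model_cls().fit(np.vstack([X_lab, x]),
                                    np.append(y_lab, y))
        proba = retrained.predict_proba(rest)
        expected += p_y * np.mean(1.0 - proba.max(axis=1))
    return expected

def select_query(model_cls, X_lab, y_lab, X_pool, labels):
    """Return the pool index whose labeling minimizes estimated future error."""
    scores = [expected_error_after_query(model_cls, X_lab, y_lab,
                                         X_pool, i, labels)
              for i in range(len(X_pool))]
    return int(np.argmin(scores))
```

Note the cost this makes explicit: one retraining per (candidate, label) pair, which is why the paper's sampling-based estimation (and an efficiently re-trainable base learner) matters in practice.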



750 Citations

Semantic Scholar estimates that this publication has 750 citations based on the available data.


Cite this paper

@inproceedings{Roy2001TowardOA,
  title     = {Toward Optimal Active Learning through Sampling Estimation of Error Reduction},
  author    = {Nicholas Roy and Andrew McCallum},
  booktitle = {ICML},
  year      = {2001}
}