A bound on the label complexity of agnostic active learning
We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by …
Discrete Temporal Models of Social Networks
We propose a family of statistical models for social network evolution over time, which represents an extension of Exponential Random Graph Models (ERGMs). Many of the methods for ERGMs are readily …
Recovering temporally rewiring networks: a model-based approach
A plausible representation of relational information among entities in dynamic systems, such as a living cell or a social community, is a stochastic network which is topologically rewiring and …
Theory of Disagreement-Based Active Learning
  • Steve Hanneke
  • Computer Science
  • Found. Trends Mach. Learn.
  • 30 May 2014
Active learning is a protocol for supervised machine learning, in which a learning algorithm sequentially requests the labels of selected data points from a large pool of unlabeled data. This …
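The protocol described above — sequentially querying only those points whose labels the current hypotheses disagree on — can be illustrated with a minimal sketch. This is not the paper's algorithm, just a toy disagreement-based (CAL-style) learner for 1-D threshold classifiers in the noise-free setting; the pool, grid of thresholds, and oracle are all illustrative assumptions.

```python
import random

def disagreement_based_al(pool, oracle, thresholds):
    """Toy disagreement-based active learner for threshold classifiers
    h_t(x) = 1 if x >= t else 0. A label is requested only when the
    surviving version space disagrees on the point x."""
    version_space = list(thresholds)  # thresholds consistent with all labels seen so far
    queries = 0
    for x in pool:
        preds = {1 if x >= t else 0 for t in version_space}
        if len(preds) > 1:
            # x lies in the disagreement region: query the oracle
            y = oracle(x)
            queries += 1
            version_space = [t for t in version_space
                             if (1 if x >= t else 0) == y]
        # otherwise every surviving hypothesis agrees, so the label
        # is inferred without spending a query
    return version_space, queries

# Illustrative run: true threshold 0.5, 200 unlabeled pool points.
random.seed(0)
pool = [random.random() for _ in range(200)]
oracle = lambda x: 1 if x >= 0.5 else 0
thresholds = [i / 100 for i in range(101)]
vs, queries = disagreement_based_al(pool, oracle, thresholds)
```

In this easy realizable case the disagreement region shrinks quickly, so far fewer than 200 labels are requested — the kind of saving that the label-complexity bounds in this line of work quantify.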
Theoretical foundations of active learning
I study the informational complexity of active learning in a statistical learning theory framework. Specifically, I derive bounds on the rates of convergence achievable by active learning, under …
Rates of convergence in active learning
We study the rates of convergence in generalization error achievable by active learning under various types of label noise. Additionally, we study the general problem of model selection for active …
Teaching Dimension and the Complexity of Active Learning
We study the label complexity of pool-based active learning in the PAC model with noise. Taking inspiration from extant literature on Exact learning with membership queries, we derive upper and lower …
Adaptive Rates of Convergence in Active Learning
We study the rates of convergence in classification error achievable by active learning in the presence of label noise. Additionally, we study the more general problem of active learning with a …
The true sample complexity of active learning
We describe and explore a new perspective on the sample complexity of active learning. In many situations where it was generally believed that active learning does not help, we show that active …