• Publications
Learning with Noisy Labels
TLDR
We provide two approaches to suitably modify any given surrogate loss function.
PAC Subset Selection in Stochastic Multi-armed Bandits
TLDR
We consider the problem of selecting, from among the arms of a stochastic n-armed bandit, a subset of size m of those arms with the highest expected rewards, based on efficiently sampling the arms.
On Iterative Hard Thresholding Methods for High-dimensional M-Estimation
TLDR
We provide a general analysis framework that enables us to analyze several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in the high dimensional regression setting.
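The hard thresholding algorithms this paper analyzes share a common template: take a gradient step, then project back onto the set of k-sparse vectors by keeping only the largest-magnitude entries. As a hedged illustration (not the paper's exact algorithm or analysis), a minimal vanilla IHT sketch for sparse least squares might look like this; the names `iht` and `hard_threshold` and the fixed step size are illustrative choices:

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out

def iht(X, y, k, step=None, iters=200):
    """Iterative hard thresholding for min ||y - Xw||^2 s.t. ||w||_0 <= k."""
    n, d = X.shape
    if step is None:
        # conservative step size from the spectral norm of X
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)          # gradient of the least-squares loss
        w = hard_threshold(w - step * grad, k)  # project onto k-sparse vectors
    return w

# Toy check: recover a 3-sparse vector from noiseless Gaussian measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ w_true
w_hat = iht(X, y, k=3)
print(np.allclose(w_hat, w_true, atol=1e-3))
```

Methods like HTP, CoSaMP, and SP refine this template with extra support-identification and debiasing steps, which is what makes a unified analysis framework valuable.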
Composite Objective Mirror Descent
TLDR
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings.
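Composite objective mirror descent handles a regularizer exactly through a prox step rather than linearizing it. As a hedged sketch under a specific choice of mirror map (squared Euclidean distance) and an l1 regularizer, the update reduces to gradient descent followed by soft-thresholding; the function names here are illustrative, not the paper's:

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox operator of tau*||.||_1: shrink each entry toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def comid_l1(grad_fn, w0, lam, eta, iters=500):
    """Minimize f(w) + lam*||w||_1 with the Euclidean mirror map,
    i.e. a proximal-gradient update: gradient step, then soft-threshold."""
    w = w0.copy()
    for _ in range(iters):
        w = soft_threshold(w - eta * grad_fn(w), eta * lam)
    return w

# Toy lasso problem: f(w) = 0.5*||Xw - y||^2 with a 1-sparse ground truth.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
y = X @ np.array([1.0] + [0.0] * 9)
w = comid_l1(lambda w: X.T @ (X @ w - y),
             np.zeros(10), lam=0.1,
             eta=1.0 / np.linalg.norm(X, 2) ** 2)
print(w[0], np.abs(w[1:]).max())
```

Other mirror maps (e.g. entropic) give different closed-form updates from the same template, which is what distinguishes COMiD from plain subgradient methods on the composite objective.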
REGAL: A Regularization based Algorithm for Reinforcement Learning in Weakly Communicating MDPs
TLDR
We provide an algorithm that achieves the optimal regret rate in an unknown Markov Decision Process (MDP).
On the Consistency of Multiclass Classification Methods
TLDR
Binary classification is a well-studied special case of the classification problem.
Stochastic Methods for ℓ1 Regularized Loss Minimization
TLDR
We describe and analyze two stochastic methods for ℓ1 regularized loss minimization problems, such as the Lasso, that outperform state-of-the-art deterministic methods when the size of the problem is large.
On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization
TLDR
We provide sharp bounds for Rademacher and Gaussian complexities of (constrained) linear classes, which directly lead to a number of generalization bounds.
Smoothness, Low Noise and Fast Rates
TLDR
We establish an excess risk bound of O(H·Rn² + √(H·L*)·Rn) for ERM with an H-smooth loss function and a hypothesis class with Rademacher complexity Rn, where L* is the best risk achievable by the hypothesis class.
Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support
TLDR
We clarify the scientific motivation for JITAIs, define their fundamental components, and highlight design principles related to these components.