Publications
A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
TLDR
We provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling, and show how it can be used to re-derive several existing results.
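For context, the generic object of study is the regularized $M$-estimator (a sketch of the standard setup: $\mathcal{L}$ is an empirical loss over samples $Z_1^n$ and $\mathcal{R}$ a decomposable regularizer such as the $\ell_1$ norm):
$$\hat{\theta}_{\lambda_n} \in \arg\min_{\theta} \left\{ \mathcal{L}(\theta; Z_1^n) + \lambda_n \mathcal{R}(\theta) \right\}$$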
A Comparison of String Distance Metrics for Name-Matching Tasks
TLDR
Using an open-source, Java toolkit of name-matching methods, we experimentally compare string distance metrics on the task of matching entity names.
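As an illustration of one classic metric compared in such studies, here is a minimal Python sketch of Levenshtein edit distance (my sketch, not the paper's Java toolkit):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) needed to turn a into b."""
    # prev[j] holds the edit distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca from a
                            curr[j - 1] + 1,      # insert cb into a
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("Jon Smith", "John Smith"))  # 1
```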
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence
Given i.i.d. observations of a random vector $X \in \mathbb{R}^p$, we study the problem of estimating both its covariance matrix $\Sigma^*$ and its inverse covariance or concentration matrix $\Theta^* = (\Sigma^*)^{-1}$.
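Written out, the estimator is the $\ell_1$-penalized log-determinant program (with $\hat{\Sigma}$ the sample covariance, $\lambda_n$ a regularization weight, and $\|\Theta\|_{1,\mathrm{off}}$ the $\ell_1$ norm of the off-diagonal entries):
$$\hat{\Theta} \in \arg\min_{\Theta \succ 0} \left\{ \operatorname{tr}(\hat{\Sigma}\,\Theta) - \log\det\Theta + \lambda_n \|\Theta\|_{1,\mathrm{off}} \right\}$$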
High-dimensional Ising model selection using ℓ1-regularized logistic regression
We consider the problem of estimating the graph associated with a binary Ising Markov random field. We describe a method based on $\ell_1$-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an $\ell_1$-constraint.
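A minimal sketch of the neighborhood-selection idea, using scikit-learn's $\ell_1$-penalized logistic regression (the data and regularization level below are placeholder choices of mine, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(500, 10))  # binary spin samples (placeholder data)

# Estimate the neighborhood of node 0: regress it on all other nodes
# with an l1 penalty; nonzero coefficients mark the estimated edges.
y = X[:, 0]
Z = np.delete(X, 0, axis=1)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Z, y)
neighbors = np.flatnonzero(clf.coef_[0])  # indices within the reduced matrix
print("estimated neighbors of node 0:", neighbors)
```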
Learning with Noisy Labels
TLDR
We provide two approaches to suitably modify any given surrogate loss function.
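One of the two constructions is an unbiased correction: with label-flip rates $\rho_{+1}$ and $\rho_{-1}$, a loss $\ell$ is replaced by $\tilde{\ell}(t,y) = \big[(1-\rho_{-y})\,\ell(t,y) - \rho_{y}\,\ell(t,-y)\big] / (1-\rho_{+1}-\rho_{-1})$, whose expectation under the noisy label equals the clean loss. A minimal sketch (the function name is mine):

```python
def unbiased_loss(loss, t, y, rho_pos, rho_neg):
    """Noise-corrected surrogate for labels y in {-1, +1}: its
    expectation over the noisy label equals the clean loss.
    rho_pos/rho_neg are the flip probabilities for +1/-1 labels."""
    rho_y, rho_my = (rho_pos, rho_neg) if y == 1 else (rho_neg, rho_pos)
    return ((1 - rho_my) * loss(t, y) - rho_y * loss(t, -y)) / (1 - rho_pos - rho_neg)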
Sparse inverse covariance matrix estimation using quadratic approximation
TLDR
We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program.
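The quadratic approximation in question is a proximal-Newton model of the smooth part $g(\Theta) = \operatorname{tr}(\hat{\Sigma}\Theta) - \log\det\Theta$, using $\nabla g(\Theta) = \hat{\Sigma} - \Theta^{-1}$ and $\nabla^2 g(\Theta) = \Theta^{-1} \otimes \Theta^{-1}$ (a sketch of the idea, not the authors' exact derivation):
$$\bar{g}_{\Theta}(\Delta) = g(\Theta) + \operatorname{tr}\!\big((\hat{\Sigma}-\Theta^{-1})\,\Delta\big) + \tfrac{1}{2}\operatorname{vec}(\Delta)^{\top}\big(\Theta^{-1}\otimes\Theta^{-1}\big)\operatorname{vec}(\Delta),$$
with the $\ell_1$ penalty handled exactly on this quadratic model.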
Sparse Additive Models
We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression.
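The generic form of such a model is additive with most component functions identically zero:
$$Y = \sum_{j=1}^{p} f_j(X_j) + \varepsilon, \qquad f_j \equiv 0 \text{ for all but a few } j.$$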
A Dirty Model for Multi-task Learning
TLDR
We consider multi-task learning in the setting of multiple linear regression, where some relevant features may be shared across the tasks.
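As a sketch of the "dirty" idea (my rendering of the decomposition, with the penalties as I recall them from the paper), the coefficient matrix is split into a block-sparse part $B$ shared across tasks and an elementwise-sparse private part $S$:
$$\min_{B,\,S}\ \mathcal{L}(B + S) + \lambda_B \|B\|_{1,\infty} + \lambda_S \|S\|_{1,1},$$
where $\|B\|_{1,\infty} = \sum_i \max_j |B_{ij}|$ favors features shared by all tasks and $\|S\|_{1,1}$ is the elementwise $\ell_1$ norm.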
Latent Variable Models
A powerful approach to probabilistic modelling involves supplementing a set of observed variables with additional latent, or hidden, variables. By defining a joint distribution over visible and hidden variables, the corresponding distribution of the observed variables is then obtained by marginalization.
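Concretely, with visible variables $x$ and latent variables $z$, the model specifies the joint $p(x, z)$, and the distribution of the observations follows by marginalization:
$$p(x) = \sum_{z} p(x, z) \quad \text{(or an integral for continuous $z$)}.$$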
DAGs with NO TEARS: Continuous Optimization for Structure Learning
TLDR
We propose a new approach for score-based learning of DAGs by converting the traditional combinatorial optimization problem into a continuous program.
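The key device is a smooth characterization of acyclicity: a weighted adjacency matrix $W \in \mathbb{R}^{d \times d}$ encodes a DAG iff $h(W) = \operatorname{tr}(e^{W \circ W}) - d = 0$, which turns structure search into a continuous program. A minimal sketch of the constraint function (not the authors' released code):

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W: np.ndarray) -> float:
    """Smooth acyclicity measure: zero iff the weighted digraph W is a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d  # W * W is the elementwise square

# A 2-cycle violates the constraint; a strictly triangular W satisfies it.
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])
dag = np.array([[0.0, 1.0], [0.0, 0.0]])
print(notears_h(cyc) > 0, abs(notears_h(dag)) < 1e-12)  # True True
```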