An Interior-Point Method for Large-Scale $\ell_1$-Regularized Least Squares
TLDR: A specialized interior-point method for solving large-scale $\ell_1$-regularized least-squares problems (LSPs) that uses the preconditioned conjugate gradients algorithm to compute the search direction, and can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC.
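The $\ell_1$-regularized least-squares problem this paper targets can be sketched on a toy instance. Note that the snippet below uses plain proximal gradient descent (ISTA) on a small dense problem, not the paper's specialized interior-point method with preconditioned conjugate gradients; all names and parameter choices here are illustrative.

```python
# Sketch of the l1-regularized least-squares problem:
#     minimize  ||A x - b||_2^2 + lam * ||x||_1
# solved here with ISTA (proximal gradient) for illustration only --
# NOT the paper's interior-point algorithm.
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=2000):
    """Proximal gradient for min ||Ax - b||_2^2 + lam * ||x||_1."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))        # underdetermined: 40 obs, 100 vars
x_true = np.zeros(100)
x_true[:3] = [2.0, -1.5, 1.0]             # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))  # a sparse estimate
```

The interior-point method in the paper scales this same formulation to problems with millions of variables by exploiting sparsity in the PCG iterations.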
$\ell_1$ Trend Filtering
TLDR: This paper proposes a variation on Hodrick-Prescott (H-P) filtering, a widely used method for trend estimation, that substitutes a sum of absolute values for the sum of squares used in H-P filtering to penalize variations in the estimated trend.
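The substitution the summary describes can be written out. A sketch of the two objectives, over observations $y_t$ and estimated trend $x_t$ (notation assumed here, following standard presentations of H-P and $\ell_1$ trend filtering):

```latex
% Hodrick-Prescott filtering: quadratic penalty on second differences
\min_{x}\;\tfrac{1}{2}\sum_{t=1}^{n}(y_t - x_t)^2
  + \lambda \sum_{t=2}^{n-1}\bigl(x_{t-1} - 2x_t + x_{t+1}\bigr)^2

% l1 trend filtering: sum of absolute second differences,
% which yields piecewise-linear trend estimates
\min_{x}\;\tfrac{1}{2}\sum_{t=1}^{n}(y_t - x_t)^2
  + \lambda \sum_{t=2}^{n-1}\bigl|\,x_{t-1} - 2x_t + x_{t+1}\,\bigr|
```

Replacing the squared second differences with absolute values is what makes the estimated trend piecewise linear rather than smooth.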
An Interior-Point Method for Large-Scale $\ell_1$-Regularized Logistic Regression
TLDR: This paper describes an efficient interior-point method for solving large-scale $\ell_1$-regularized logistic regression problems, and shows how a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.
Multi-Period Trading via Convex Optimization
TLDR: A framework for single-period optimization, where the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost, and holding cost such as the borrowing cost for shorting assets.
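The single-period trade-off the summary mentions can be sketched as a convex problem. The symbols below ($z$ the trade vector, $w$ the current portfolio weights, $\hat r$ a return forecast, $\psi$ a risk measure, $\phi^{\mathrm{trade}}$ and $\phi^{\mathrm{hold}}$ cost models, with $\gamma$ trade-off weights) are assumed notation for illustration, not taken verbatim from the paper:

```latex
\begin{array}{ll}
\text{maximize}   & \hat r^{T} z
                    \;-\; \gamma^{\mathrm{risk}}\,\psi(w + z)
                    \;-\; \gamma^{\mathrm{trade}}\,\phi^{\mathrm{trade}}(z)
                    \;-\; \gamma^{\mathrm{hold}}\,\phi^{\mathrm{hold}}(w + z) \\
\text{subject to} & \mathbf{1}^{T} z = 0
\end{array}
```

The self-financing constraint $\mathbf{1}^{T} z = 0$ says the trades net to zero; when the risk and cost terms are convex, the whole problem is convex and can be solved reliably each period.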
An Efficient Method for Compressed Sensing
TLDR: A specialized interior-point method for solving CS problems that uses a preconditioned conjugate gradient method to compute the search step and can efficiently solve large CS problems by exploiting fast algorithms for the signal transforms used.
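The compressed-sensing recovery problem being solved can be written in a common $\ell_1$-regularized form (assumed here; the measurement matrix $A$ typically encodes a fast transform such as a partial Fourier or wavelet operator, which is what the method exploits):

```latex
% recover a sparse signal x from m << n measurements y = A x + noise
\min_{x}\; \|A x - y\|_{2}^{2} + \lambda \|x\|_{1}
```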
Trend Filtering
The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on Hodrick–Prescott (H-P) filtering, a widely used method for trend estimation.
A Method for Large-Scale $\ell_1$-Regularized Logistic Regression
TLDR: Numerical experiments show that the efficient interior-point method described here outperforms standard methods for solving convex optimization problems, as well as other methods specifically designed for $\ell_1$-regularized LRPs.
Learning the kernel via convex optimization
TLDR: It is shown that, in a wide variety of kernel-based learning algorithms, the kernel learning problem can be formulated as a convex optimization problem which interior-point methods can solve globally and efficiently.
An Efficient Method for Large-Scale $\ell_1$-Regularized Convex Loss Minimization
TLDR: An efficient interior-point method for solving large-scale $\ell_1$-regularized convex loss minimization problems that uses a preconditioned conjugate gradient method to compute the search step and can solve very large problems.