Paul Grigas

We present new results for the Frank-Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on …
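To make the step-size discussion concrete, here is a minimal Frank-Wolfe sketch in Python. It is not taken from the paper: the function names, the default 2/(k+2) rule, the constant step-size, and the unit-simplex example are illustrative assumptions, and the linear minimization oracle `lmo` is supplied by the caller.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=100, step_size=None):
    """Generic Frank-Wolfe (conditional gradient) loop.

    grad      : callable returning the gradient of f at x
    lmo       : linear minimization oracle; returns argmin_{s in S} <g, s>
    x0        : feasible starting point
    step_size : callable k -> alpha_k in [0, 1]; defaults to the common
                2 / (k + 2) rule (an assumption, not the paper's rule)
    """
    if step_size is None:
        step_size = lambda k: 2.0 / (k + 2.0)
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = grad(x)
        s = lmo(g)                           # vertex minimizing the linearization
        alpha = step_size(k)
        x = (1.0 - alpha) * x + alpha * s    # convex step toward that vertex
    return x


# Illustrative use: minimize ||Ax - b||^2 over the unit simplex.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)

    grad = lambda x: 2.0 * A.T @ (A @ x - b)

    def simplex_lmo(g):
        # Over the simplex, the linear minimizer is a coordinate vertex.
        s = np.zeros_like(g)
        s[np.argmin(g)] = 1.0
        return s

    x0 = np.ones(5) / 5.0
    x_default = frank_wolfe(grad, simplex_lmo, x0)
    x_const = frank_wolfe(grad, simplex_lmo, x0, step_size=lambda k: 0.05)
```

Swapping the `step_size` callable is all that changes when comparing rules; the constant step-size variant above is one example of such a substitution.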
Boosting methods are highly popular and effective supervised learning methods which combine weak learners into a single accurate model with good statistical performance. In this paper, we analyze two well-known boosting methods, AdaBoost and Incremental Forward Stagewise Regression (FSε), by establishing their precise connections to the Mirror Descent …
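For reference, a bare-bones AdaBoost loop is sketched below in Python. This is the textbook algorithm, not the Mirror Descent reformulation studied in the paper; the decision-stump weak learner from scikit-learn, the parameter names, and the number of rounds are illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, num_rounds=50):
    """Plain AdaBoost with decision stumps as weak learners.

    Labels y are assumed to be in {-1, +1}. Returns the stumps and their
    coefficients; the ensemble predicts sign(sum_m alpha_m * h_m(x)).
    """
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                    # example weights
    stumps, alphas = [], []
    for _ in range(num_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)   # weighted error
        alpha = 0.5 * np.log((1.0 - err) / err)            # weak-learner weight
        w = w * np.exp(-alpha * y * pred)                  # reweight examples
        w = w / w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    scores = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(scores)
```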
In this paper we analyze boosting algorithms [15, 21, 24] in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FSε) and least squares boosting (LS-Boost(ε)), can be viewed as subgradient descent …
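A compact sketch of the FSε and LS-Boost(ε) updates, as they are usually stated, is given below; the function name, the `eps` and `num_iters` parameters, and the shared loop structure are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def incremental_forward_stagewise(X, y, eps=0.01, num_iters=1000, ls_boost=False):
    """FS_eps and LS-Boost(eps) for linear regression.

    At each step the feature most correlated with the current residual is
    selected; FS_eps moves its coefficient by eps times the sign of that
    correlation, while LS-Boost(eps) moves by eps times the univariate
    least-squares fit on the residual.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()             # current residual
    for _ in range(num_iters):
        corr = X.T @ r                      # correlations with the residual
        j = np.argmax(np.abs(corr))         # most correlated feature
        if ls_boost:
            step = eps * corr[j] / (X[:, j] @ X[:, j])
        else:
            step = eps * np.sign(corr[j])
        beta[j] += step
        r -= step * X[:, j]                 # update residual incrementally
    return beta
```

Both variants share the same selection rule and differ only in the size of the coefficient update, which is what makes a single loop with a flag a natural way to sketch them side by side.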