Gillian M. Chin

This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates …
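
To make the idea concrete, here is a minimal Python sketch of a variance-based growth test of the kind the abstract describes. The function name, the parameter theta, and the exact form of the test are illustrative assumptions, not the paper's precise condition.

    import numpy as np

    def should_grow_sample(grad_samples, theta=0.9):
        # grad_samples: (|S|, d) array, one per-example gradient per row.
        # Batch gradient on the current sample S.
        g = grad_samples.mean(axis=0)
        # Trace of the sample covariance of a single per-example gradient.
        var = grad_samples.var(axis=0, ddof=1).sum()
        # Grow |S| when the estimated variance of the batch gradient
        # (var / |S|) is large relative to the squared gradient norm.
        return var / grad_samples.shape[0] > theta**2 * np.dot(g, g)

A caller would enlarge the sample (e.g., geometrically) whenever the test returns True and keep the current size otherwise.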
This paper describes how to incorporate sampled curvature information in a Newton-CG method and in a limited memory quasi-Newton method for statistical learning. The motivation for this work stems from supervised machine learning applications involving a very large number of training points. We follow a batch approach, also known in the stochastic …
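
As a rough illustration of the batch approach, the following Python sketch performs one subsampled-Hessian Newton-CG step for logistic regression: the gradient uses the full batch, while the Hessian-vector products inside CG touch only a small subsample. The problem choice, function names, and the unguarded full step are assumptions made for brevity.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_cg_step(X, y, w, subsample, cg_iters=10):
        # Batch gradient of the average logistic loss.
        g = X.T @ (sigmoid(X @ w) - y) / X.shape[0]
        # Hessian-vector products use a small subsample only.
        Xs = X[subsample]
        d = sigmoid(Xs @ w)
        d = d * (1.0 - d)                    # diagonal weights of the logistic Hessian
        def hv(v):
            return Xs.T @ (d * (Xs @ v)) / Xs.shape[0]
        H = LinearOperator((w.size, w.size), matvec=hv)
        p, _ = cg(H, -g, maxiter=cg_iters)   # inexact Newton step
        return w + p                         # a line search would normally guard this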
This paper is concerned with the minimization of an objective that is the sum of a convex function f and an ℓ1 regularization term. Our interest is in methods that incorporate second-order information about the function f to accelerate convergence. We describe a semi-smooth Newton framework that can be used to generate a variety of second-order methods, …
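
One concrete instance of such a framework is a semi-smooth Newton iteration on the prox fixed-point residual F(x) = x - prox(x - t∇f(x)). The sketch below works this out for a quadratic f and the soft-thresholding prox of the ℓ1 term; the quadratic model, the step length t, and the Jacobian choice are assumptions for illustration rather than the paper's specific method.

    import numpy as np

    def soft_threshold(u, tau):
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def semismooth_newton_l1(Q, b, lam, iters=50, tol=1e-10):
        # Minimize 0.5 x'Qx - b'x + lam*||x||_1 via semi-smooth Newton on
        #   F(x) = x - soft_threshold(x - t*(Qx - b), t*lam).
        n = b.size
        t = 1.0 / np.linalg.norm(Q, 2)       # step length inside the prox map
        x = np.zeros(n)
        I = np.eye(n)
        for _ in range(iters):
            u = x - t * (Q @ x - b)          # forward (gradient) point
            F = x - soft_threshold(u, t * lam)
            if np.linalg.norm(F) < tol:
                break
            # An element of the generalized Jacobian: the soft threshold has
            # slope 1 where |u_i| > t*lam and slope 0 elsewhere.
            D = (np.abs(u) > t * lam).astype(float)
            J = I - D[:, None] * (I - t * Q)
            x -= np.linalg.solve(J, F)       # Newton step: solve J d = -F
        return x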
This paper describes how to incorporate stochastic curvature information in a Newton-CG method and in a limited memory quasi-Newton method for large-scale optimization. The motivation for this work stems from statistical learning and stochastic optimization applications in which the objective function is the sum of a very large number of loss terms, and can …
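
The limited-memory variant replaces the Hessian solve with an L-BFGS approximation whose curvature pairs come from subsampled Hessian-vector products, y = H_S(x) s, rather than gradient differences. A minimal sketch with illustrative names, using the standard two-loop recursion and assuming the caller supplies the subsampled product:

    import numpy as np

    def lbfgs_direction(g, pairs):
        # Two-loop recursion: returns -H*g for the implicit L-BFGS inverse Hessian.
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):         # newest pair first
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            alphas.append((a, rho, s, y))
            q -= a * y
        if pairs:                            # initial scaling H0 = gamma * I
            s, y = pairs[-1]
            q *= (s @ y) / (y @ y)
        for a, rho, s, y in reversed(alphas):  # oldest pair first
            q += (a - rho * (y @ q)) * s
        return -q

    def add_curvature_pair(pairs, s, hess_vec_subsample, memory=10):
        y = hess_vec_subsample(s)            # y = H_S(x) s on a small subsample
        if s @ y > 1e-10:                    # keep only positive-curvature pairs
            pairs.append((s, y))
            if len(pairs) > memory:
                pairs.pop(0)
        return pairs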
A variety of first-order methods have recently been proposed for solving matrix optimization problems arising in machine learning. The premise for utilizing such algorithms is that second-order information is too expensive to employ, and so simple first-order iterations are likely to be optimal. In this paper, we argue that second-order information is in …
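
The affordability argument rests on the fact that curvature can be probed without ever forming a Hessian: a Hessian-vector product costs roughly one extra gradient evaluation. A generic sketch of this point (the finite-difference formula is a standard trick, not the paper's method):

    import numpy as np

    def hess_vec_fd(grad, x, v, eps=1e-6):
        # Approximate H(x) @ v from two gradient calls:
        #   H v ≈ (grad(x + eps*v) - grad(x)) / eps
        return (grad(x + eps * v) - grad(x)) / eps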