
- Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, Alexander Shapiro
- SIAM Journal on Optimization
- 2009

In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on…
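The stochastic approximation (SA) approach compared in this paper replaces the exact gradient of the expectation with a single-sample estimate at each step. A minimal sketch on a hypothetical one-dimensional problem — the objective, sample distribution, and 1/k step-size policy below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: minimize f(x) = E[(x - xi)^2 / 2] with xi ~ N(1, 1);
# the minimizer is x* = E[xi] = 1.  An SA step uses one sample of xi per
# iteration instead of evaluating the expectation (an integral) exactly.
def sa_minimize(n_iters=5000, step0=1.0):
    x = 0.0
    for k in range(1, n_iters + 1):
        xi = rng.normal(1.0, 1.0)   # draw one sample of the random variable
        g = x - xi                  # unbiased estimate of the gradient x - E[xi]
        x -= (step0 / k) * g        # classical 1/k step-size policy
    return x

print(sa_minimize())  # approaches x* = 1.0
```

With this quadratic and the 1/k steps, the iterate is exactly the running average of the samples, which illustrates why SA accuracy is limited by sampling error rather than by numerical integration.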

- Guanghui Lan
- Math. Program.
- 2012

This paper considers an important class of convex programming (CP) problems, namely, the stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of…

- Saeed Ghadimi, Guanghui Lan
- SIAM Journal on Optimization
- 2013

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming…
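The distinctive ingredient of the RSG method is that the output is an iterate selected at random from the trajectory, which is what yields guarantees on approximate stationarity in expectation for nonconvex problems. A minimal sketch — the objective, noise model, and step size here are hypothetical illustrations, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth nonconvex objective and its gradient.
def grad(x):
    return 2.0 * x + 3.0 * np.sin(2.0 * x)   # gradient of x^2 + 3*sin(x)^2

def rsg(x0=3.0, n_iters=200, step=0.05, noise=0.1):
    xs = [x0]
    x = x0
    for _ in range(n_iters):
        g = grad(x) + rng.normal(0.0, noise)  # stochastic gradient estimate
        x -= step * g
        xs.append(x)
    # RSG-style output: return an iterate chosen uniformly at random from
    # the trajectory rather than the last iterate.
    return xs[rng.integers(len(xs))]

print(rsg())
```

Returning a random iterate sidesteps the fact that, without convexity, the last iterate carries no guarantee of small gradient norm.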

- Saeed Ghadimi, Guanghui Lan
- SIAM Journal on Optimization
- 2012

In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA…

- Saeed Ghadimi, Guanghui Lan
- Math. Program.
- 2016

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex…
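For reference, the classical convex, deterministic AG scheme that this paper generalizes can be sketched on a hypothetical smooth quadratic; the problem data below are illustrative, and the paper's contribution lies in the stepsize policy that extends this two-sequence structure to nonconvex and stochastic settings:

```python
import numpy as np

# Illustrative smooth convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()      # Lipschitz constant of the gradient

def accelerated_gradient(n_iters=200):
    x = y = np.zeros(2)
    t = 1.0
    for _ in range(n_iters):
        x_next = y - grad(y) / L     # gradient step from the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(accelerated_gradient() - x_star))  # distance shrinks as O(1/k^2) in objective value
```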

- Saeed Ghadimi, Guanghui Lan, Hongchao Zhang
- Math. Program.
- 2016

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG)…

- Guanghui Lan, Gail W. DePuy, Gary E. Whitehouse
- European Journal of Operational Research
- 2007

This paper investigates the development of an effective heuristic to solve the set covering problem (SCP) by applying the meta-heuristic Meta-RaPS (Meta-heuristic for Randomized Priority Search). In Meta-RaPS, a feasible solution is generated by introducing random factors into a construction method. Then the feasible solutions can be improved by an…
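The idea of injecting random factors into a greedy construction can be illustrated on a tiny, hypothetical set-covering instance; the instance data and the priority/restriction percentages below are illustrative assumptions, not the paper's parameters:

```python
import random

random.seed(0)

# Hypothetical instance: cover a universe of 10 elements with unit-cost subsets.
universe = set(range(10))
subsets = [{0, 1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 8, 9},
           {0, 4, 8}, {1, 5, 9}, {2, 6}]

def randomized_construct(priority=0.7, restriction=0.8):
    """Meta-RaPS-style construction: with probability `priority` take the best
    greedy subset (largest number of newly covered elements); otherwise pick at
    random from a candidate list of subsets within `restriction` of the best."""
    uncovered, solution = set(universe), []
    while uncovered:
        gains = [len(s & uncovered) for s in subsets]
        best = max(gains)
        if random.random() < priority:
            choice = gains.index(best)                 # pure greedy choice
        else:
            cands = [i for i, g in enumerate(gains) if g >= restriction * best]
            choice = random.choice(cands)              # randomized choice
        solution.append(choice)
        uncovered -= subsets[choice]
    return solution

print(randomized_construct())  # indices of a feasible cover
```

Repeating this randomized construction yields a diverse pool of feasible covers, which an improvement phase can then refine.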

- Yuyuan Ouyang, Yunmei Chen, Guanghui Lan, Eduardo Pasiliao
- SIAM J. Imaging Sciences
- 2015

We present a novel framework, namely AADMM, for acceleration of the linearized alternating direction method of multipliers (ADMM). The basic idea of AADMM is to incorporate a multi-step acceleration scheme into linearized ADMM. We demonstrate that for solving a class of convex composite optimization problems with linear constraints, the rate of convergence of AADMM is…
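As background, a plain (non-accelerated) ADMM iteration on a linearly constrained toy problem can be sketched as follows; AADMM adds a multi-step acceleration scheme on top of the linearized variant of such an iteration. The problem data here are illustrative:

```python
import numpy as np

# Toy problem:  min_{x,z}  0.5*||x - a||^2 + 0.5*||z - b||^2   s.t.  x - z = 0,
# whose solution is x = z = (a + b) / 2.  Each ADMM iteration alternates exact
# minimization of the augmented Lagrangian in x and z, then a dual update.
a, b = np.array([1.0, 3.0]), np.array([5.0, -1.0])
rho = 1.0  # penalty parameter

def admm(n_iters=100):
    x = z = u = np.zeros(2)                      # u is the scaled dual variable
    for _ in range(n_iters):
        x = (a + rho * (z - u)) / (1.0 + rho)    # argmin in x
        z = (b + rho * (x + u)) / (1.0 + rho)    # argmin in z
        u = u + x - z                            # dual ascent on x - z = 0
    return x, z

x, z = admm()
print(x, z)  # both approach (a + b) / 2
```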

- Saeed Ghadimi, Guanghui Lan
- SIAM Journal on Optimization
- 2013

In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal…
