A stochastic compositional gradient method using Markov samples

Abstract

Consider the convex optimization problem min_x f(g(x)), where both f and g are unknown but can be estimated through sampling. We consider the stochastic compositional gradient descent method (SCGD), which updates based on random function and subgradient evaluations generated by a conditional sampling oracle. We focus on the case where the samples are corrupted with Markov noise. Under certain diminishing stepsize assumptions, we prove that the iterates of SCGD converge almost surely to an optimal solution if such a solution exists. Under certain constant stepsize assumptions, we obtain finite-sample error bounds for the averaged iterates of the algorithm. We illustrate an application to online value evaluation in dynamic programming.
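
To illustrate the kind of update the abstract describes, here is a minimal Python sketch of the basic SCGD iteration: an auxiliary running average y tracks the inner value g(x), and the outer step uses the chain-rule estimate J_g(x)^T grad f(y). This is an illustrative sketch of the generic two-timescale scheme, not the paper's exact Markov-sample variant; all function names, oracle signatures, and the stepsize schedules below are assumptions chosen for the example.

    import numpy as np

    def scgd(g_sample, grad_g_sample, grad_f_sample, x0, alpha, beta, num_iters):
        """Sketch of stochastic compositional gradient descent (illustrative).

        g_sample(x)       -- noisy sample of the inner value g(x)
        grad_g_sample(x)  -- noisy sample of the Jacobian of g at x
        grad_f_sample(y)  -- noisy sample of the gradient of f at y
        alpha(k), beta(k) -- stepsize schedules (e.g. diminishing)
        """
        x = np.asarray(x0, dtype=float)
        y = g_sample(x)  # running estimate of g(x)
        for k in range(num_iters):
            # Auxiliary averaging: track g(x_k) on a faster timescale
            y = (1.0 - beta(k)) * y + beta(k) * g_sample(x)
            # Quasi-gradient step via the chain rule, using y in place of g(x_k)
            x = x - alpha(k) * grad_g_sample(x).T @ grad_f_sample(y)
        return x

    # Toy usage (assumed setup): f(y) = ||y||^2 / 2, g(x) = x observed with
    # additive noise, so the composition is minimized at x = 0.
    rng = np.random.default_rng(0)
    x_out = scgd(
        g_sample=lambda x: x + 0.1 * rng.standard_normal(x.shape),
        grad_g_sample=lambda x: np.eye(x.size),
        grad_f_sample=lambda y: y,
        x0=np.ones(3),
        alpha=lambda k: 1.0 / (k + 1) ** 0.75,
        beta=lambda k: 1.0 / (k + 1) ** 0.5,
        num_iters=5000,
    )

The two stepsize sequences are the essential design choice: beta(k) must decay more slowly than alpha(k) so the inner estimate y settles faster than x moves, which is what the diminishing-stepsize assumptions in the abstract formalize.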

Cite this paper

@article{Wang2016ASC,
  title={A stochastic compositional gradient method using Markov samples},
  author={Mengdi Wang and Ji Liu},
  journal={2016 Winter Simulation Conference (WSC)},
  year={2016},
  pages={702-713}
}