Corpus ID: 209515409

Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization

@article{Yuan2019StochasticRV,
  title={Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization},
  author={Huizhuo Yuan and Xiangru Lian and Ji Liu},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.13515}
}
Stochastic compositional optimization arises in many important machine learning tasks such as value function evaluation in reinforcement learning and portfolio management. The objective function is the composition of two expectations of stochastic functions, and is more challenging to optimize than vanilla stochastic optimization problems. In this paper, we investigate the stochastic compositional optimization in the general smooth non-convex setting. We employ a recently developed idea of… 
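
Although the abstract is cut off, its description fixes the problem structure. In the usual notation for this setting (ours, not necessarily the paper's), the objective is the composition of an outer and an inner expectation,
\[
\min_{x \in \mathbb{R}^d} \; f(x) = \mathbb{E}_v\!\left[ F_v\!\big( \mathbb{E}_w[G_w(x)] \big) \right],
\qquad g(x) := \mathbb{E}_w[G_w(x)],
\]
with chain-rule gradient
\[
\nabla f(x) = \big(\nabla g(x)\big)^{\top} \, \mathbb{E}_v\!\left[ \nabla F_v\big(g(x)\big) \right].
\]
Because the inner expectation sits inside the nonlinear map $F_v$, replacing $g(x)$ with a single sample $G_w(x)$ yields a biased gradient estimate; this is what makes the problem harder than vanilla stochastic optimization and why variance reduction plays a central role.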

Citations of this paper

Fast Training Method for Stochastic Compositional Optimization Problems

This work proposes novel decentralized stochastic compositional gradient descent methods to efficiently solve large-scale stochastic compositional optimization problems, and provides a convergence analysis showing that these methods achieve linear speedup with respect to the number of devices.

Projection-Free Stochastic Bi-Level Optimization

It is shown that SBFW outperforms state-of-the-art methods for the problem of matrix completion with denoising, reducing by up to 82% the wall-clock time required to reach a given level of accuracy.

On the Convergence of Stochastic Compositional Gradient Descent Ascent Method

A novel, efficient stochastic compositional gradient descent ascent method is developed for the compositional minimax problem, and its theoretical convergence rate is established; this is believed to be the first work to achieve such a convergence rate for this problem.

On the Convergence of Local Stochastic Compositional Gradient Descent with Momentum

A novel local stochastic compositional gradient descent with momentum method is proposed, which facilitates Federated Learning for the stochastic compositional problem; this is the first work to achieve such favorable sample and communication complexities.

References

SHOWING 1-10 OF 35 REFERENCES

Variance Reduction for Faster Non-Convex Optimization

This work considers the fundamental problem in non-convex optimization of efficiently reaching a stationary point and proposes a first-order minibatch stochastic method that converges at an $O(1/\varepsilon)$ rate and is faster than full gradient descent by a factor of $\Omega(n^{1/3})$.
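
For orientation, the variance-reduction idea behind this line of work can be sketched in a few lines for a plain finite-sum objective. The toy quadratic problem, function names, and step sizes below are illustrative assumptions, not the cited paper's algorithm; the point is the snapshot-based estimator whose variance shrinks as the iterate approaches the snapshot.

```python
import numpy as np

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (A[i] @ x - b[i])**2  (illustrative).
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the i-th component loss.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Exact average gradient over all n components.
    return (A @ x - b) @ A / n

def svrg(x0, step=0.02, epochs=20, inner=2 * n):
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        mu = full_grad(snapshot)               # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased for the full gradient,
            # and its variance vanishes as x approaches the snapshot.
            v = grad_i(x, i) - grad_i(snapshot, i) + mu
            x -= step * v
    return x

print("gradient norm:", np.linalg.norm(full_grad(svrg(np.zeros(d)))))
```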

Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions

It is proved that SCGD converges almost surely to an optimal solution for convex optimization problems, as long as such a solution exists, and that otherwise any limit point generated by SCGD is a stationary point; a convergence rate analysis is provided.
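
The mechanism behind SCGD that the summary glosses over is a two-timescale update: an auxiliary variable tracks the inner expectation with a running average, and the outer gradient is evaluated at that tracked estimate. The sketch below assumes generic sampling oracles and step-size schedules chosen for illustration; it is not the paper's exact parameter setting.

```python
import numpy as np

def scgd(x0, y0, sample_inner, sample_inner_jac, sample_outer_grad,
         alpha=lambda t: 0.1 / (t + 1) ** 0.75,   # slow timescale (iterate)
         beta=lambda t: 1.0 / (t + 1) ** 0.5,     # fast timescale (tracking)
         iters=10_000):
    """Two-timescale stochastic compositional gradient descent (sketch).

    Minimizes f(x) = F(g(x)) given stochastic oracles:
      sample_inner(x)      -> unbiased sample of g(x)
      sample_inner_jac(x)  -> unbiased sample of the Jacobian of g at x
      sample_outer_grad(y) -> unbiased sample of the gradient of F at y
    """
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    for t in range(iters):
        # Track g(x_t) with a running average so the outer gradient is
        # evaluated at a low-noise estimate rather than a single sample.
        y = (1 - beta(t)) * y + beta(t) * sample_inner(x)
        # Chain-rule step using the tracked inner value.
        x = x - alpha(t) * sample_inner_jac(x).T @ sample_outer_grad(y)
    return x
```

The essential design point is that the tracking steps beta(t) decay more slowly than the iterate steps alpha(t), so y follows g(x_t) closely even though each individual inner sample alone would give a biased compositional gradient.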

Multi-Level Composite Stochastic Optimization via Nested Variance Reduction

This work presents a normalized proximal approximate gradient (NPAG) method where the approximate gradients are obtained via nested stochastic variance reduction and the dependence of the total sample complexity on the number of composition levels is polynomial, rather than exponential as in previous work.

Finite-sum Composition Optimization via Variance Reduced Gradient Descent

A linear convergence rate is proved for strongly convex optimization, which substantially improves on the sublinear rate $O(K^{-0.8})$ of the best previously known algorithm.

Multilevel Stochastic Gradient Methods for Nested Composition Optimization

This paper considers the multi-level compositional optimization problem, which involves compositions of multi-level component functions and nested expectations over a random path, and proposes a class of multi-level stochastic gradient methods motivated by the method of multi-timescale stochastic approximation.

Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient

A new stochastic compositional variance-reduced gradient algorithm is proposed with a sample complexity of $O((m+n)\log(1/\varepsilon) + 1/\varepsilon^3)$, where $m+n$ is the total number of samples; the dependence on $m$ is optimal up to a logarithmic factor.

Stochastic Nested Variance Reduced Gradient Descent for Nonconvex Optimization

This work proposes a new stochastic gradient descent algorithm based on nested variance reduction that improves the best known gradient complexity of SVRG and the best gradient complexity of SCSG.
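
SNVRG nests several reference points; a closely related single-level recursive estimator (SARAH/SPIDER-style), which is likely also the kind of recursion the headline paper's title refers to, can be sketched as follows for a finite-sum objective. The oracle names, epoch length, and step size are illustrative assumptions, not the nested construction of this reference.

```python
import numpy as np

def recursive_vr(x0, grad_i, full_grad, n, step=0.02,
                 epochs=10, epoch_len=50, rng=np.random.default_rng(0)):
    """SARAH/SPIDER-style recursive variance-reduced descent (sketch).

    The estimator v is reset to the exact gradient at the start of each
    epoch and then updated recursively with sampled gradient differences,
    so its error stays small while consecutive iterates remain close.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        v = full_grad(x)                     # reset to the exact gradient
        x_prev, x = x, x - step * v
        for _ in range(epoch_len):
            i = rng.integers(n)
            # Recursive correction using only the gradient difference
            # between the current and previous iterates.
            v = v + grad_i(x, i) - grad_i(x_prev, i)
            x_prev, x = x, x - step * v
    return x
```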

Variance Reduced Methods for Non-Convex Composition Optimization

  • L. Liu, Ji Liu, D. Tao
  • IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
To significantly improve the query complexity of current approaches, stochastic composition via variance reduction (SCVR) is devised and an extension to handle mini-batch cases is proposed, which improves the query complexity under the optimal mini-batch size.

Accelerated Method for Stochastic Composition Optimization with Nonsmooth Regularization

This paper proposes a new stochastic composition optimization method for the composition problem with a nonsmooth regularization penalty that significantly improves the state-of-the-art convergence rate from $O(T^{-1/2})$ to $O((n_1+n_2)^{2/3}T^{-1})$.

Non-convex Finite-Sum Optimization Via SCSG Methods

A class of algorithms is developed as variants of the stochastically controlled stochastic gradient (SCSG) methods for the smooth non-convex finite-sum optimization problem; experiments demonstrate that SCSG outperforms stochastic gradient descent on training multi-layer neural networks in terms of both training and validation loss.