Stochastic Zeroth-order Optimization via Variance Reduction method

@article{Liu2018StochasticZO,
  title={Stochastic Zeroth-order Optimization via Variance Reduction method},
  author={Liu Liu and Minhao Cheng and Cho-Jui Hsieh and Dacheng Tao},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.11811}
}
Derivative-free optimization has become an important technique in machine learning for optimizing black-box models. To perform updates without explicitly computing the gradient, most current approaches iteratively sample a random search direction from a Gaussian distribution and compute an estimated gradient along that direction. However, due to the variance of the search direction, the convergence rates and query complexities of existing methods suffer from a factor of $d$, where $d$ is the…
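The two-point Gaussian gradient estimator described above can be sketched as follows. This is a minimal illustration of the generic estimator, not the paper's variance-reduced method; the function and parameter names (`zo_grad`, `mu`) are my own:

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, rng=None):
    # Two-point Gaussian gradient estimator: sample a search
    # direction u ~ N(0, I) and form (f(x + mu*u) - f(x)) / mu * u,
    # which approximates the gradient of f at x as mu -> 0.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Usage: plain zeroth-order gradient descent on f(x) = ||x||^2 (d = 5),
# querying only function values, never the true gradient.
f = lambda x: float(x @ x)
rng = np.random.default_rng(0)
x = np.ones(5)
for _ in range(2000):
    x -= 0.01 * zo_grad(f, x, rng=rng)
```

The variance of this estimator grows with the dimension $d$, which is the factor-of-$d$ penalty in the convergence rates that the abstract refers to.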

References

Publications referenced by this paper (showing 3 of 27).

Random Gradient-Free Minimization of Convex Functions

  • Y. Nesterov and V. Spokoiny
  • Foundations of Computational Mathematics
  • 2017

Simple Black-Box Adversarial Attacks on Deep Neural Networks

  • N. Narodytska and S. Kasiviswanathan
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017

Random Optimization

  • J. Matyas
  • Automation and Remote Control, 26(2):246–253
  • 1965