Corpus ID: 4228302

Generalization Error Bounds for Optimization Algorithms via Stability

@inproceedings{Meng2017GeneralizationEB,
  title={Generalization Error Bounds for Optimization Algorithms via Stability},
  author={Qi Meng and Y. Wang and Wei Chen and Taifeng Wang and Z. Ma and Tie-Yan Liu},
  booktitle={AAAI},
  year={2017}
}
Many machine learning tasks can be formulated as Regularized Empirical Risk Minimization (R-ERM), and solved by optimization algorithms such as gradient descent (GD), stochastic gradient descent (SGD), and stochastic variance reduction (SVRG). Conventional analysis of these optimization algorithms focuses on their convergence rates during the training process; however, people in the machine learning community may care more about the generalization performance of the learned model on unseen test…
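As an illustrative sketch (not taken from the paper), the following Python snippet sets up a toy L2-regularized least-squares instance of R-ERM and minimizes it both with plain SGD and with an SVRG-style variance-reduced loop. All data, step sizes, and function names here are assumptions made for the example.

```python
# Toy R-ERM: 0.5*(x_i^T w - y_i)^2 per example plus 0.5*lam*||w||^2,
# minimized with SGD and with SVRG-style variance reduction.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the regularized loss on example i.
    return (X[i] @ w - y[i]) * X[i] + lam * w

def full_grad(w):
    # Average of grad_i over all examples.
    return (X.T @ (X @ w - y)) / n + lam * w

def sgd(epochs=50, lr=0.01):
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            w -= lr * grad_i(w, i)
    return w

def svrg(epochs=50, lr=0.01):
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)  # full gradient at the snapshot
        for i in rng.permutation(n):
            # variance-reduced stochastic gradient
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g
    return w

def risk(w):
    return 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w

print("SGD  regularized risk:", risk(sgd()))
print("SVRG regularized risk:", risk(svrg()))
```

The SVRG inner loop replaces the raw stochastic gradient with grad_i(w) - grad_i(w_snap) + mu, which remains unbiased but has variance that shrinks as w approaches the snapshot, illustrating why variance-reduced methods can take larger stable steps than plain SGD.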
