Corpus ID: 238583166

A General Framework for the Disintegration of PAC-Bayesian Bounds

@inproceedings{Viallard2021AGF,
  title={A General Framework for the Disintegration of PAC-Bayesian Bounds},
  author={Paul Viallard and Pascal Germain and Amaury Habrard and Emilie Morvant},
  year={2021}
}
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, when applied to some families of deterministic models such as neural networks, they require a loose and costly derandomization step. As an alternative to this step, we introduce new PAC-Bayesian generalization bounds whose originality is that they are disintegrated, i.e., they give guarantees on a single hypothesis instead of the usual averaged…
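For context, a generic illustration (not the specific bounds derived in this paper) of the difference between an averaged and a disintegrated PAC-Bayesian bound can be sketched as follows, using a standard Seeger/Maurer-type averaged bound and a known disintegrated counterpart in the style of Rivasplata et al.; here R_S and R_D denote empirical and true risks, \pi a prior, \rho a (possibly data-dependent) posterior, and kl the binary KL divergence.

Averaged form, with probability at least 1-\delta over S \sim D^m, for all \rho:
\mathrm{kl}\!\left(\mathbb{E}_{h\sim\rho} R_S(h) \,\middle\|\, \mathbb{E}_{h\sim\rho} R_D(h)\right) \;\le\; \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{2\sqrt{m}}{\delta}}{m}

Disintegrated form, with probability at least 1-\delta over S \sim D^m and a single draw h \sim \rho_S:
\mathrm{kl}\!\left(R_S(h) \,\middle\|\, R_D(h)\right) \;\le\; \frac{\ln\frac{d\rho_S}{d\pi}(h) + \ln\frac{2\sqrt{m}}{\delta}}{m}

The second form holds for the single sampled hypothesis h itself, which is what makes it applicable to a deterministic predictor without the averaging or derandomization step mentioned in the abstract.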
Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound
TLDR
In a series of numerical experiments, the resulting stochastic majority vote learning algorithm achieves state-of-the-art accuracy and benefits from tight (non-vacuous) generalization bounds when compared to competing algorithms that also minimize PAC-Bayes objectives.
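As a rough illustration of what minimizing a PAC-Bayes objective can look like (a hypothetical sketch, not the algorithm of the paper above: the McAllester-style surrogate, the uniform prior, and all names and values below are assumptions made for the example), one can optimize a categorical posterior over a fixed set of base voters:

# Hypothetical sketch: minimize a McAllester-style PAC-Bayes surrogate over a
# categorical posterior rho on M fixed voters; everything here is illustrative.
import math
import torch

def pac_bayes_objective(logits, voter_errors, m, delta=0.05):
    rho = torch.softmax(logits, dim=0)                    # learned posterior over voters
    M = rho.numel()
    gibbs_risk = torch.dot(rho, voter_errors)             # E_{h~rho} R_S(h)
    kl = torch.sum(rho * torch.log(rho * M + 1e-12))      # KL(rho || uniform prior)
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(m) / delta)) / (2 * m))
    return gibbs_risk + complexity                        # bound-shaped training objective

# Toy usage: 10 voters with given per-voter empirical errors on m = 1000 examples.
voter_errors = torch.rand(10) * 0.5
logits = torch.zeros(10, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = pac_bayes_objective(logits, voter_errors, m=1000)
    loss.backward()
    opt.step()

In such a sketch, the complexity term penalizes posteriors that drift far from the uniform prior, trading empirical Gibbs risk against the KL-based part of the bound.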
Risk Monotonicity in Statistical Learning
TLDR
This paper derives the first consistent and risk-monotonic algorithms for a general statistical learning setting under weak assumptions, answering questions posed by [53] on how to avoid non-monotonic behavior of risk curves, and shows that risk monotonicity need not come at the price of worse excess risk rates.
Kernel Interpolation as a Bayes Point Machine
TLDR
The paper finds evidence that large-margin, finite-width neural networks also behave like Bayes point machines, which may help to explain generalisation in neural networks more broadly.