A PAC-Bayes Bound for Tailored Density Estimation

@inproceedings{Higgs2010APB,
  title={A PAC-Bayes Bound for Tailored Density Estimation},
  author={Matthew Higgs and John Shawe-Taylor},
  booktitle={ALT},
  year={2010}
}
In this paper we construct a general method for reporting on the accuracy of density estimation. Using variational methods from statistical learning theory we derive a PAC, algorithm-dependent bound on the distance between the data generating distribution and a learned approximation. The distance measure takes the role of a loss function that can be tailored to the learning problem, enabling us to control discrepancies on tasks relevant to subsequent inference. We apply the bound to an… 
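The tailored bound itself is not reproduced in this snippet; as a reference point for the kind of guarantee meant here, a standard McAllester-style PAC-Bayes bound (a sketch of the generic form, not the paper's tailored version) states that for a prior P fixed before seeing the m i.i.d. training examples, with probability at least 1 − δ, simultaneously for all posteriors Q,

\[
\mathbb{E}_{h\sim Q}\,L(h)\;\le\;\mathbb{E}_{h\sim Q}\,\hat L(h)\;+\;\sqrt{\frac{\mathrm{KL}(Q\,\|\,P)+\ln\!\left(2\sqrt{m}/\delta\right)}{2m}},
\]

where L and \hat L are the true and empirical losses and KL is the Kullback-Leibler divergence.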
A Refined MCMC Sampling from RKHS for PAC-Bayes Bound Calculation
TLDR
By formulating the concept space as a Reproducing Kernel Hilbert Space (RKHS) using the kernel method, a refined Markov Chain Monte Carlo (MCMC) sampling algorithm is proposed that incorporates feedback from the simulated model on the training examples to simulate posterior distributions over the concept space.
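As an illustration of the general idea (not the refined sampler of the paper, whose feedback mechanism the TLDR only hints at), here is a minimal random-walk Metropolis-Hastings sketch in Python; train_error, the Gaussian prior, and the inverse temperature beta are all illustrative assumptions:

import numpy as np

def train_error(w, X, y):
    # Empirical 0-1 error of the linear rule sign(X @ w); labels y in {-1, +1}.
    return float(np.mean(np.sign(X @ w) != y))

def mh_posterior_samples(X, y, n_samples=5000, step=0.1, beta=10.0, seed=0):
    # Random-walk Metropolis-Hastings over weight vectors, targeting the
    # unnormalised posterior: N(0, I) prior times exp(-beta * empirical error).
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    log_post = lambda v: -0.5 * v @ v - beta * train_error(v, X, y)
    lp = log_post(w)
    samples = []
    for _ in range(n_samples):
        proposal = w + step * rng.standard_normal(w.shape)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:  # symmetric proposal, so the
            w, lp = proposal, lp_new            # MH ratio is the density ratio
        samples.append(w.copy())
    return np.array(samples)

Averaging train_error over the returned samples estimates the Gibbs risk term that enters a PAC-Bayes bound.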
On the PAC-Bayes Bound Calculation based on Reproducing Kernel Hilbert Space
TLDR
The concept space is formulated as a Reproducing Kernel Hilbert Space (RKHS) using the kernel method; it is demonstrated that the RKHS can be constructed as a linear combination of kernels, and that the support vectors of the SVM output and their corresponding weights describe the complexity of the concept space.
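For concreteness, the construction referred to here is the usual kernel expansion; a minimal sketch, with all names illustrative:

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2).
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

def rkhs_eval(x, support_vectors, weights, bias=0.0, gamma=1.0):
    # f(x) = sum_i w_i * k(sv_i, x) + b: an RKHS element written as a linear
    # combination of kernels centred on the support vectors, with weights
    # (e.g. alpha_i * y_i) read off a trained SVM.
    return sum(w * rbf_kernel(sv, x, gamma)
               for sv, w in zip(support_vectors, weights)) + bias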
Optimization of MCMC sampling algorithm for the calculation of PAC-Bayes bound
TLDR
This paper stores a portion of the sampled data and calculates its variance, proposes a variance-minimization method to investigate the support vectors, and compares the resulting PAC-Bayes bounds.
User-friendly introduction to PAC-Bayes bounds
TLDR
This paper is an attempt to provide an elementary introduction to PAC-Bayes theory; it also describes a simplified version of the localization technique of [34, 36] that was missed by the community and later rediscovered as "mutual information bounds".
PAC-Bayesian Contrastive Unsupervised Representation Learning
TLDR
This work presents PAC-Bayesian generalisation bounds for CURL, which are then used to derive a new representation learning algorithm; the algorithm is demonstrated to achieve competitive accuracy and to yield non-vacuous generalisation bounds.
Risk bounds for the majority vote: from a PAC-Bayesian analysis to a learning algorithm
TLDR
An extensive analysis of the behavior of majority votes in binary classification is proposed and a risk bound for majority votes, called the C-bound, is introduced that takes into account the average quality of the voters and their average disagreement.
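For reference, the C-bound mentioned here is usually stated through the first two moments of the Q-weighted margin M_Q of the vote on a random example (a sketch from the PAC-Bayes majority-vote literature, not this paper's exact statement): whenever \mathbb{E}[M_Q] > 0,

\[
R(B_Q)\;\le\;1-\frac{\big(\mathbb{E}[M_Q]\big)^2}{\mathbb{E}\big[M_Q^2\big]},
\]

where B_Q denotes the Q-weighted majority vote, so the bound improves with a high average margin (voter quality) and a low second moment (average disagreement).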
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
TLDR
The new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem are combined and a new analysis tool to reinforcement learning and many other fields, where martingales and limited feedback are encountered is introduced.
Learning under Dependence for Aggregation of Estimators and Classification, with Applications to DNA Analysis
This thesis aims at a systematic introduction to a weak dependence condition, provided by Doukhan and Louhichi (1999), which is more general than the classical frameworks of mixing or association.
PAC-Bayes analysis of multi-view learning
...

References

SHOWING 1-10 OF 25 REFERENCES
PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification
  • M. Seeger
  • Computer Science
    J. Mach. Learn. Res.
  • 2002
TLDR
By applying the PAC-Bayesian theorem of McAllester (1999a), this paper proves distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques, giving a strong learning-theoretical justification for the use of these techniques.
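The theorem is typically applied in its tight kl-form: with probability at least 1 − δ over m i.i.d. examples, simultaneously for all posteriors Q,

\[
\mathrm{kl}\big(\hat L_Q \,\big\|\, L_Q\big)\;\le\;\frac{\mathrm{KL}(Q\,\|\,P)+\ln\frac{m+1}{\delta}}{m},
\]

where \mathrm{kl}(q\,\|\,p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p} is the binary KL divergence between the empirical and true Gibbs risks (stated here in the standard Seeger/Maurer form as a reference, not quoted from the paper).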
Distribution-Dependent PAC-Bayes Priors
TLDR
The idea that the PAC-Bayes prior can be informed by the data-generating distribution is developed, sharp bounds for an existing framework are proved, insights into function-class complexity in this model are given, and means of controlling that complexity with new algorithms are suggested.
A PAC-Bayesian approach to adaptive classification
TLDR
This is meant to be a self-contained presentation of adaptive classification seen from the PAC-Bayesian point of view; the main improvements brought here are more localized bounds and the use of exchangeable prior distributions.
PAC-Bayesian bounds for randomized empirical risk minimizers
TLDR
The aim of this paper is to generalize the PAC-Bayesian theorems proved by Catoni in the classification setting to more general problems of statistical inference, and to bound the risk of very general estimation procedures.
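The randomized estimators in this line of work are typically Gibbs posteriors; as a sketch of the standard construction (notation illustrative, not necessarily the paper's),

\[
\hat\rho_\lambda(d\theta)\;\propto\;\exp\big(-\lambda\, r_n(\theta)\big)\,\pi(d\theta),
\]

where \pi is the prior, r_n the empirical risk, and \lambda > 0 an inverse temperature; this \hat\rho_\lambda minimizes \lambda\,\mathbb{E}_\rho[r_n] + \mathrm{KL}(\rho\,\|\,\pi) over all posteriors \rho.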
A BETTER VARIANCE CONTROL FOR PAC-BAYESIAN CLASSIFICATION
The common method to understand and improve classification rules is to prove bounds on the generalization error. Here we provide localized data-based PAC-bounds for the difference between the risk of…
PAC-BAYESIAN SUPERVISED CLASSIFICATION: The Thermodynamics of Statistical Learning
TLDR
An alternative selection scheme based on relative bounds between estimators is described and studied, and a two-step localization technique that can handle the selection of a parametric model from a family of models is presented.
Information-theoretic upper and lower bounds for statistical estimation
  • Tong Zhang
  • Mathematics, Computer Science
    IEEE Transactions on Information Theory
  • 2006
TLDR
This paper establishes upper and lower bounds for some statistical estimation problems through concise information-theoretic arguments based on a simple yet general inequality, which naturally leads to a general randomized estimation method, for which performance upper bounds can be obtained.
Chromatic PAC-Bayes Bounds for Non-IID Data
TLDR
This work proposes the first (to the best of the authors' knowledge) PAC-Bayes generalization bounds for classifiers trained on data exhibiting interdependencies, and shows how the results can be used to derive bounds for ranking statistics and for classifiers trained on data distributed according to a stationary β-mixing process.
A PAC-Bayesian Approach to Unsupervised Learning with Application to Co-clustering Analysis
TLDR
This paper identifies two possible high-level tasks in matrix data analysis, discriminative prediction of the missing entries and estimation of the joint probability distribution of row and column variables, and derives PAC-Bayesian generalization bounds for the expected out-of-sample performance of co-clustering-based solutions.
...