• Corpus ID: 10206875

Inferring a Gaussian distribution

@inproceedings{Minka2001InferringAG,
  title={Inferring a Gaussian distribution},
  author={Thomas P. Minka},
  year={2001}
}
A common question in statistical modeling is “which out of a continuum of models are likely to have generated this data?” For the Gaussian class of models, this question can be answered completely and exactly. This paper derives the exact posterior distribution over the mean and variance of the generating distribution, i.e. p(m, V|X), as well as the marginals p(m|X) and p(V|X). It also derives p(X|Gaussian), the probability that the data came from any Gaussian whatsoever. From this we can get… 
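For concreteness, a minimal sketch of the standard conjugate normal-inverse-Wishart update that produces a posterior of this form; the prior parameters $\mu_0, \kappa_0, \nu_0, \Psi_0$ and the parameterization below follow the textbook convention and are not necessarily the paper's own notation. For data $X = \{x_1, \dots, x_n\} \subset \mathbb{R}^d$ with sample mean $\bar{x}$,
\[
p(m, V \mid X) = \mathcal{N}\!\left(m \mid \mu_n, \tfrac{1}{\kappa_n} V\right)\, \mathcal{IW}\!\left(V \mid \Psi_n, \nu_n\right),
\]
\[
\kappa_n = \kappa_0 + n, \qquad \nu_n = \nu_0 + n, \qquad \mu_n = \frac{\kappa_0 \mu_0 + n \bar{x}}{\kappa_0 + n},
\]
\[
\Psi_n = \Psi_0 + \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{\mathsf{T}} + \frac{\kappa_0 n}{\kappa_0 + n} (\bar{x} - \mu_0)(\bar{x} - \mu_0)^{\mathsf{T}},
\]
and the evidence follows from the ratio of the prior and posterior normalizing constants,
\[
p(X \mid \text{Gaussian}) = \pi^{-nd/2}\, \frac{\Gamma_d(\nu_n/2)}{\Gamma_d(\nu_0/2)}\, \frac{|\Psi_0|^{\nu_0/2}}{|\Psi_n|^{\nu_n/2}} \left(\frac{\kappa_0}{\kappa_n}\right)^{d/2},
\]
where $\Gamma_d$ is the multivariate gamma function. The marginals $p(m \mid X)$ (multivariate Student-$t$) and $p(V \mid X)$ (inverse-Wishart) then follow by integrating out the other variable.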

How to use KL-divergence to construct conjugate priors, with well-defined non-informative limits, for the multivariate Gaussian
TLDR
It is shown how to use the scaled KL-divergence between multivariate Gaussians as an energy function to construct Wishart and normal-Wishart conjugate priors, and the scale factor can be taken down to the limit at zero, to form noninformative priors that do not violate the restrictions on the Wishart shape parameter.
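For context on the shape restriction mentioned in this summary: under the standard parameterization (not the cited paper's own notation), the Wishart density over a $d \times d$ precision matrix $\Lambda$ is
\[
\mathcal{W}(\Lambda \mid W, \nu) \propto |\Lambda|^{(\nu - d - 1)/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}(W^{-1} \Lambda)\right), \qquad \nu > d - 1,
\]
so simply driving the shape parameter $\nu$ toward zero is inadmissible for $d > 1$; the cited construction instead takes a KL-based scale factor to zero while keeping $\nu$ within its admissible range.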
Approximate Variational Inference For Mixture Models
Learning truths behind real, relevant data is faced with uncertainty. A probabilistic view on unsupervised learning considers this uncertainty in its learning objectives through probability…
Dynamic Bayesian networks: representation, inference and learning
TLDR
This thesis will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
Discriminative, generative and imitative learning
TLDR
It is demonstrated that imitative learning can be adequately addressed as a discriminative prediction task, which outperforms the usual generative approach, and this is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior.
Machine learning - a probabilistic perspective
  • K. Murphy
  • Computer Science
    Adaptive computation and machine learning series
  • 2012
TLDR
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
The Rational Basis of Representativeness
Representativeness is a central explanatory construct in cognitive science but suffers from the lack of a principled theoretical account. Here we present a formal definition of one sense of…
Maximum Entropy Discrimination
TLDR
A general framework for discriminative estimation based on the maximum entropy principle and its extensions is presented, and preliminary experimental results indicate the potential of these techniques.
From mere coincidences to meaningful discoveries
People’s reactions to coincidences are often cited as an illustration of the irrationality of human reasoning about chance. We argue that coincidences may be better understood in terms of rational…
Classification via Minimum Incremental Coding Length
TLDR
A simple new criterion for classification, based on principles from lossy data compression, is proposed; its kernel and local versions perform competitively on synthetic examples as well as on real imagery data such as handwritten digits and face images.

References

Bayesian inference in statistical analysis
TLDR
This chapter discusses Bayesian assessment of assumptions, investigating the effect of non-normality on inferences about a population mean, with generalizations.
Developments in Probabilistic Modelling with Neural Networks - Ensemble Learning
  • D. MacKay
  • Computer Science
    SNN Symposium on Neural Networks
  • 1995
TLDR
This paper presents a framework for statistical inference in which an ensemble of parameter vectors is optimized rather than a single parameter vector, and in which the ensemble approximates the posterior probability distribution of the parameters.
Theory of Probability
George E. P. Box and George C. Tiao. Bayesian Inference in Statistical Analysis
  • 1973