Corpus ID: 88515827

RKL: a general, invariant Bayes solution for Neyman-Scott

@article{Brand2017RKLAG,
  title={RKL: a general, invariant Bayes solution for Neyman-Scott},
  author={M. Brand},
  journal={arXiv: Machine Learning},
  year={2017}
}
  • M. Brand
  • Published 2017
  • Mathematics
  • arXiv: Machine Learning
Neyman-Scott is a classic example of an estimation problem with a partially-consistent posterior, for which standard estimation methods tend to produce inconsistent results. Past attempts to create consistent estimators for Neyman-Scott have led to ad-hoc solutions, to estimators that do not satisfy representation invariance, to restrictions over the choice of prior, and more. We present a simple construction for a general-purpose Bayes estimator, invariant to representation, which satisfies…
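The inconsistency at the heart of the Neyman-Scott problem is easy to reproduce numerically. The sketch below is illustrative only; it is not code from the paper and does not implement the RKL estimator. It simulates the canonical setup referenced in the abstract and in Neyman and Scott (1948): many strata, each with its own nuisance mean and a fixed number of replications, where the maximum likelihood estimate of the shared variance converges to half the true value. Names such as n_strata and k are assumptions for the example.

```python
# Illustrative sketch of the Neyman-Scott inconsistency (not the paper's RKL method).
# Each stratum i has its own mean mu_i (a nuisance parameter) and k = 2
# replications; as the number of strata grows, the MLE of the shared
# variance sigma^2 converges to sigma^2 * (k-1)/k = sigma^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0        # true shared variance
n_strata = 200_000  # number of strata (this is what grows without bound)
k = 2               # replications per stratum (fixed)

mu = rng.normal(0.0, 10.0, size=n_strata)  # nuisance means, one per stratum
x = rng.normal(mu[:, None], np.sqrt(sigma2), size=(n_strata, k))

# MLE of sigma^2: mean squared deviation from each stratum's own sample mean.
stratum_means = x.mean(axis=1, keepdims=True)
sigma2_mle = np.mean((x - stratum_means) ** 2)

print(f"true sigma^2 = {sigma2}")
print(f"MLE estimate = {sigma2_mle:.3f}  (converges to {sigma2 * (k - 1) / k})")
```

Because each stratum's mean is estimated from only k = 2 observations, the within-stratum deviations are systematically too small, and adding more strata never repairs the bias: this is the sense in which standard estimators remain inconsistent no matter how much data arrives.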

References

Showing 1-10 of 28 references
On Some Bayesian Solutions of the Neyman-Scott Problem
One of the two celebrated examples of Neyman and Scott (1948) is that in a fixed effects one-way analysis of variance model with normal homoscedastic errors, the maximum likelihood estimator of the…
Noninformative priors for the two sample normal problem
The paper considers two examples in the two sample normal problem and finds noninformative priors which satisfy (i) a criterion of matching asymptotically the posterior distribution function of…
Efficiency of projected score methods in rectangular array asymptotics
The paper considers a rectangular array asymptotic embedding for multistratum data sets, in which both the number of strata and the number of within-stratum replications increase, and at the same…
An invariant form for the prior probability in estimation problems
  • H. Jeffreys
  • Mathematics, Medicine
  • Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences
  • 1946
It is shown that a certain differential form depending on the values of the parameters in a law of chance is invariant for all transformations of the parameters when the law is differentiable with…
A Review of Consistency and Convergence of Posterior Distribution
In this article, we review two important issues, namely consistency and convergence of posterior distribution, that arise in Bayesian inference with large samples. Both parametric and non-parametric…
A Catalog of Noninformative Priors
A catalog of many of the resulting priors is provided, known properties of the priors are listed, and emphasis is given to reference priors and the Jeffreys prior, although other approaches are also considered.
On Divergences and Informations in Statistics and Information Theory
  • F. Liese, I. Vajda
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2006
The paper deals with the f-divergences of Csiszár generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All…
The Estimation of Distributions and the Minimum Relative Entropy Principle
The relationship of EDA to algorithms developed in statistics, artificial intelligence, and statistical physics is explained within a general interdisciplinary framework, and it is shown that maximum entropy approximations play a crucial role.
Semilinear High-Dimensional Model for Normalization of Microarray Data
Normalization of microarray data is essential for removing experimental biases and revealing meaningful biological results. Motivated by a problem of normalizing microarray data, a semilinear…
Point Estimation Using the Kullback-Leibler Loss Function and MML
An argument is presented as to why the SMML and MML estimators are invariant under parameter transformations, and an approximation to SMML called Fairly Strict MML (FSMML) maps regions from the parameter space to point estimates.