On minimum information prior distributions

@article{Akaike1983OnMI,
  title={On minimum information prior distributions},
  author={Hirotugu Akaike},
  journal={Annals of the Institute of Statistical Mathematics},
  year={1983},
  month={December},
  volume={35},
  pages={139--149}
}
Summary: The formulation of the concept of a non-informative prior distribution over a finite number of possibilities is considered, and the minimum information prior distribution is defined as the prior distribution that adds the minimum expected amount of information to the posterior distribution. Numerical examples show that the definition leads to nontrivial results. An information inequality is established to assure the validity of the numerical results. The relation of the present work to other works…
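To make the summary concrete, here is a minimal numerical sketch for a prior over a finite number of possibilities. It computes the expected amount of information the data add to a given prior, here taken as the average Kullback–Leibler divergence from the prior to the posterior. The binomial model, the parameter grid, and this particular choice of information measure are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not from the paper): for a finite set of parameter
# values, measure how much information the data are expected to add to a
# prior, taken here as the average KL divergence from prior to posterior.
# The binomial model and parameter grid below are assumptions.
import numpy as np
from math import comb

n = 5                                       # binomial sample size (assumed)
thetas = np.array([0.2, 0.5, 0.8])          # finite parameter set (assumed)
ks = np.arange(n + 1)

# Likelihood matrix: lik[i, k] = P(X = k | theta_i)
lik = np.array([[comb(n, k) * t**k * (1 - t)**(n - k) for k in ks]
                for t in thetas])

def expected_information(prior):
    """Average KL(posterior || prior) over the marginal distribution of X."""
    marginal = prior @ lik                           # P(X = k)
    posterior = prior[:, None] * lik / marginal      # P(theta_i | X = k)
    kl = np.sum(posterior * np.log(posterior / prior[:, None]), axis=0)
    return float(marginal @ kl)

uniform = np.ones(len(thetas)) / len(thetas)
skewed = np.array([0.6, 0.3, 0.1])
print(expected_information(uniform), expected_information(skewed))
```

Comparing such values across candidate priors mirrors the kind of numerical example the summary mentions; recovering the paper's actual minimum information prior would require its exact definition of added information, which the truncated summary does not fully specify.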
