
- James O. Berger
- 1985

Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we…
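The point that the highest-posterior-probability model need not be best for prediction can be seen with a toy numerical sketch (all probabilities and predictions below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical posterior model probabilities and each model's point
# prediction for a future observation (illustrative numbers only).
probs = np.array([0.4, 0.3, 0.3])   # model 0 has the highest posterior probability
preds = np.array([0.0, 1.0, 1.0])   # but models 1 and 2 agree with each other

# Under squared error loss the optimal prediction is the model-averaged
# prediction, so the best *single* model is the one whose prediction is
# closest to that average.
avg = probs @ preds                 # model-averaged prediction: 0.6
losses = (preds - avg) ** 2         # extra expected loss from using each model alone
best = int(np.argmin(losses))

print(best)  # model 1, not the highest-posterior-probability model 0
```

Here the two agreeing models jointly carry 0.6 of the posterior mass, so either of them predicts better than the individually most probable model.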

- James O. Berger
- 2003

Ronald Fisher advocated testing using p-values, Harold Jeffreys proposed use of objective posterior probabilities of hypotheses and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. Most troubling for statistics and science is that the three approaches can lead to quite different practical…

P values are the most commonly used tool to measure evidence against a hypothesis or hypothesized model. Unfortunately, they are often incorrectly viewed as an error probability for rejection of the hypothesis or, even worse, as the posterior probability that the hypothesis is true. The fact that these interpretations can be completely misleading when…
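How misleading the posterior-probability reading can be is often illustrated with the −e·p·log p calibration advocated by Berger and coauthors elsewhere, which lower-bounds the Bayes factor in favor of the null for p < 1/e. A sketch under that assumption, with illustrative 1:1 prior odds:

```python
import math

def bayes_factor_bound(p):
    """Lower bound -e * p * ln(p) on the Bayes factor in favor of the
    null hypothesis, valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

def min_posterior_prob(p, prior_odds=1.0):
    """Smallest posterior probability of the null implied by the bound,
    starting from the given prior odds."""
    b = prior_odds * bayes_factor_bound(p)
    return b / (1 + b)

# p = 0.05 corresponds to at best roughly 2.5-to-1 evidence against the
# null: the null retains posterior probability of at least about 0.29,
# nothing like the "only 5% chance it is true" misreading.
print(round(bayes_factor_bound(0.05), 3))
print(round(min_posterior_prob(0.05), 3))
```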

We review simulation based methods in optimal design. Expected utility maximization, i.e., optimal design, is concerned with maximizing an integral expression representing expected utility with respect to some design parameter. Except in special cases neither the maximization nor the integration can be solved analytically and approximations and/or…
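The expected-utility integral described above is typically approximated by Monte Carlo: simulate parameters from the prior and data from the likelihood, average the utility, and maximize over a grid of designs. A minimal sketch (the normal-mean model, the quadratic-loss-plus-cost utility, and all constants are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(n, cost=0.01, sims=20_000):
    """Monte Carlo estimate of expected utility for a design taking n
    unit-variance normal observations to learn a N(0, 1) mean, with
    utility = -(squared estimation error) - cost * n."""
    theta = rng.normal(size=sims)                      # draw from the prior
    ybar = theta + rng.normal(size=sims) / np.sqrt(n)  # sample mean given theta
    post_mean = n * ybar / (n + 1)                     # conjugate posterior mean
    return np.mean(-(post_mean - theta) ** 2) - cost * n

# Neither the integral nor the maximization is done analytically:
# both are replaced by simulation plus a search over a design grid.
designs = range(1, 31)
utilities = [expected_utility(n) for n in designs]
best = designs[int(np.argmax(utilities))]
```

In this toy setup the expected squared error is 1/(n+1) analytically, so the Monte Carlo optimum should land near n = 9, up to simulation noise.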

- Maria J. Bayarri, James O. Berger, +5 authors Jian Tu
- Technometrics
- 2007

In this paper, we present a framework that enables computer model evaluation oriented towards answering the question: Does the computer model adequately represent reality? The proposed validation framework is a six-step procedure based upon a mix of Bayesian statistical methodology and likelihood methodology. The methodology is particularly suited to…

Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition…

In this paper, reference priors are derived for three cases where partial information is available. If a subjective conditional prior is given, two reasonable methods are proposed for finding the marginal reference prior. If instead a subjective marginal prior is available, a method for defining the conditional reference prior is proposed. A sufficient condition is…

Reference analysis produces objective Bayesian inference, in the sense that inferential statements depend only on the assumed model and the available data, and the prior distribution used to make an inference is least informative in a certain information-theoretic sense. Reference priors have been rigorously defined in specific contexts and heuristically…
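For orientation, the information-theoretic sense referred to above can be stated concretely in the simplest case: in regular one-parameter problems the reference prior maximizes the expected divergence between prior and posterior, and it reduces to the Jeffreys prior

```latex
\pi(\theta) \propto \sqrt{I(\theta)},
\qquad
I(\theta) = \mathbb{E}_{x \mid \theta}\!\left[
  \left( \frac{\partial}{\partial \theta} \log p(x \mid \theta) \right)^{2}
\right],
```

where \(I(\theta)\) is the Fisher information; the general multiparameter construction is what the paper develops rigorously.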

This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. The first goal of the paper is to clarify when, and how, multiplicity correction is automatic in Bayesian analysis, and contrast this multiplicity correction with the Bayesian Ockham’s-razor effect. Secondly, we contrast empirical-Bayes…
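A standard way to see automatic multiplicity correction at work (a hypothetical sketch in the spirit of the paper, not its actual derivation): placing a Uniform(0,1) prior on the variable-inclusion probability makes the prior odds of any specific k-variable model against the null model shrink as the number of candidate variables grows, whereas a fixed inclusion probability of 1/2 leaves those odds constant.

```python
from math import comb

def prior_odds_vs_null(k, m):
    """Prior odds of a specific k-variable model against the null model
    with m candidate variables, under p ~ Uniform(0,1) on the inclusion
    probability; the normalizing factors cancel, leaving 1 / C(m, k)."""
    return comb(m, 0) / comb(m, k)

# With a fixed p = 1/2 prior these odds are always 1 -- no multiplicity
# penalty. Under the hierarchical prior the odds fall as more candidate
# (presumably spurious) variables enter the search.
for m in (5, 20, 100):
    print(m, prior_odds_vs_null(2, m))
```

This shrinking-odds behavior is the automatic multiplicity penalty: adding noise variables to the search makes every specific non-null model less probable a priori, without any explicit p-value-style adjustment.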