Statistical inference optimized with respect to the observed sample for single or multiple comparisons

David R. Bickel
Published 4 October 2010 · Computer Science · arXiv
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the p-value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such… 
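As a concrete illustration of the discrimination information defined above, the sketch below (not from the paper; a minimal Bernoulli example of my own construction) takes a point null p = 1/2 against the full Bernoulli model. The point null has no free parameter, so its NML reduces to its likelihood, while the alternative NML is the maximized likelihood normalized by its sum over all possible samples:

```python
import math

def nml_alt(k: int, n: int) -> float:
    """Normalized maximum likelihood of k successes in n Bernoulli trials
    under the full (alternative) model: the maximized likelihood divided
    by the sum of maximized likelihoods over all possible samples."""
    def max_lik(j: int) -> float:
        p = j / n
        return (p ** j) * ((1 - p) ** (n - j))  # 0**0 == 1 in Python
    complexity = sum(math.comb(n, j) * max_lik(j) for j in range(n + 1))
    return max_lik(k) / complexity

def discrimination_information(k: int, n: int) -> float:
    """DI = log of the NML ratio: alternative-model NML over null-model NML.
    The point null p = 1/2 has no free parameter, so its NML is just the
    likelihood 0.5**n."""
    return math.log(nml_alt(k, n) / 0.5 ** n)
```

For example, 9 successes in 10 trials yields a positive DI (evidence against p = 1/2), while 5 in 10 yields a negative DI: the normalization term penalizes the alternative model's extra flexibility, unlike a raw maximized likelihood ratio.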

Measuring support for a hypothesis about a random parameter without estimating its unknown prior
Applying that model to proteomics data indicates that support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins, suggesting the applicability of random-parameter models to other situations in which the parameter distribution cannot be reliably estimated.
Model fusion and multiple testing in the likelihood paradigm: shrinkage and evidence supporting a point null hypothesis
According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the…
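The general law of likelihood quoted in this snippet can be sketched numerically. The following is an illustrative example of my own (not from the paper), assuming binomial data with composite hypotheses H0: p ≤ 0.5 and H1: p > 0.5, each likelihood maximized over its hypothesis's parameter set:

```python
import math

def max_binom_loglik(k: int, n: int, p_lo: float, p_hi: float) -> float:
    """Log-likelihood of k successes in n trials, maximized over success
    probabilities in [p_lo, p_hi]: the unconstrained MLE k/n is clipped
    into the hypothesis's interval."""
    p = min(max(k / n, p_lo), p_hi)
    p = min(max(p, 1e-12), 1 - 1e-12)  # avoid log(0) at the boundary
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Evidence for H1: p > 0.5 over H0: p <= 0.5, given 8 successes in 10 trials
k, n = 8, 10
log_lr = max_binom_loglik(k, n, 0.5, 1.0) - max_binom_loglik(k, n, 0.0, 0.5)
```

Here log_lr ≈ 1.93, i.e. a maximized likelihood ratio of about 6.9 in favor of H1; note that, unlike the NML ratio, no complexity penalty is applied.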
Minimum Description Length Measures of Evidence for Enrichment
Assessment of measures of evidence derived from the two NMLs, two BFs, and the p-value for one-sided and two-sided hypothesis comparisons, using a gene expression data set from an experiment on a breast cancer cell line, found that one of the NMLs, the normalized maximum conditional likelihood (NMCL), is supported by the conditionality principle.
Minimum Description Length and Empirical Bayes Methods of Identifying SNPs Associated with Disease
The goal of determining which of hundreds of thousands of SNPs are associated with disease poses one of the most challenging multiple testing problems. Using the empirical Bayes approach, the local…
A Likelihood Paradigm for Clinical Trials
Given the prominent role of clinical trials in evidence-based medicine, proper interpretation of clinical data as statistical evidence is not just a philosophical question but also has important…
Parametric Estimation of the Local False Discovery Rate for Identifying Genetic Associations
This work adapts a simple parametric mixture model (PMM) and compares this model to the semiparametric mixture model (SMM) behind an LFDR estimator that is known to be conservatively biased, and compares the PMM with a parametric nonmixture model (PNM).
Sharpen statistical significance: Evidence thresholds and Bayes factors sharpened into Occam's razor
Occam's razor suggests assigning more prior probability to a hypothesis corresponding to a simpler distribution of data than to a hypothesis with a more complex distribution of data, other things…
Pseudo-Likelihood, Explanatory Power, and Bayes’s Theorem [Comment on “A Likelihood Paradigm for Clinical Trials”]
In this engaging article, Zhang and Zhang (2013) (henceforth ZZ) assembled examples from clinical trials to make a compelling case for medical science's need to measure the evidence for composite h...


The proposed method of weighing evidence almost always favors the correct hypothesis under mild regularity conditions, and issues with simultaneous inference and multiplicity are addressed.
A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion
To compute a Bayes factor for testing H0: ψ = ψ0 in the presence of a nuisance parameter β, priors under the null and alternative hypotheses must be chosen. As in Bayesian estimation, an…
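The relationship to the Schwarz criterion mentioned in this title can be sketched generically: the BIC difference between two nested models approximates twice the log Bayes factor. The example below is my own illustration under a known-variance Gaussian model, not the reference-prior construction of the paper:

```python
import math

def gaussian_loglik(xs, mu):
    """Log-likelihood of the data under N(mu, 1) (variance known, for simplicity)."""
    n = len(xs)
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * sum((x - mu) ** 2 for x in xs)

def bic(loglik, n_params, n_obs):
    """Schwarz criterion: -2 log-likelihood plus a log(n) penalty per parameter."""
    return -2 * loglik + n_params * math.log(n_obs)

# H0: mu = 0 (no free parameters) vs H1: mu free (one free parameter)
xs = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
mu_hat = sum(xs) / len(xs)
bic0 = bic(gaussian_loglik(xs, 0.0), 0, len(xs))
bic1 = bic(gaussian_loglik(xs, mu_hat), 1, len(xs))
bf10 = math.exp((bic0 - bic1) / 2)  # Schwarz approximation to BF(H1 : H0)
```

For these (hypothetical) data, bf10 is roughly 50, favoring the alternative; the log(n) penalty in the BIC plays the role of the prior's dilution of the alternative's likelihood.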
On the use of non‐local prior densities in Bayesian hypothesis tests
Summary. We examine philosophical problems and sampling deficiencies that are associated with current Bayesian hypothesis testing methodology, paying particular attention to objective Bayes…
Scales of Evidence for Model Selection: Fisher versus Jeffreys
A general interpretation of Fisher's scale in terms of Bayes factors is given which works well when checked for the one-dimensional Gaussian problem, where standard hypothesis testing is seen to coincide with a Bayesian analysis that assumes stronger (more informative) priors than those used by the BIC.
Estimators of the local false discovery rate designed for small numbers of tests
Corrections of maximum likelihood estimators of the local false discovery rate (LFDR) of histogram-based empirical Bayes methods are introduced and it is found that HBE requires N to be at least 6-12 features to perform as well as the estimators proposed here, with the precise minimum N depending on p0 and dalt.
On the Probability of Observing Misleading Statistical Evidence
The law of likelihood explains how to interpret statistical data as evidence. Specifically, it gives to the discipline of statistics a precise and objective measure of the strength of…
Ancillaries and Conditional Inference
Sufficiency has long been regarded as the primary reduction procedure to simplify a statistical model, and the assessment of the procedure involves an implicit global repeated sampling principle.
Estimating the Null Distribution to Adjust Observed Confidence Levels for Genome‐Scale Screening
In a generic simulation study of genome-scale multiple testing, conditioning the observed confidence level on the estimated null distribution as an approximate ancillary statistic markedly improved conditional inference, indicating that estimation of the null distribution tends to exacerbate the conservative bias that results from modeling heavy-tailed data distributions with the normal family.
Asymptotic Properties of Adaptive Likelihood Weights by Cross-Validation
Many versions of weighted likelihood have been studied in the literature. The weighted likelihood that we are interested in was introduced to embrace formally a variety of statistical procedures that…
The Intrinsic Bayes Factor for Model Selection and Prediction
This article introduces a new criterion called the intrinsic Bayes factor, which is fully automatic in the sense of requiring only standard noninformative priors for its computation and yet seems to correspond to very reasonable actual Bayes factors.