Estimation of the Number of “True” Null Hypotheses in Multivariate Analysis of Neuroimaging Data

Federico E. Turkheimer, C. B. Smith, Kathleen C. Schmidt
The repeated testing of a null univariate hypothesis at each of many sites (either regions of interest or voxels) is a common approach to the statistical analysis of brain functional images. Procedures such as the Bonferroni correction are available to maintain the Type I error of the set of tests at a specified level. An initial assumption of these methods is a "global null hypothesis," i.e., the statistics computed at each site are assumed to be generated by null distributions. This framework may be…
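
As a concrete illustration of the kind of site-wise adjustment described above, here is a generic sketch of the Bonferroni correction; it is not the estimator this paper proposes, only the baseline it starts from:

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m, which controls the
    family-wise Type I error of the whole set of tests at alpha."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# Ten sites: only the smallest p-value survives the adjusted
# threshold alpha / m = 0.005.
print(bonferroni([0.001, 0.02, 0.04] + [0.5] * 7)[:3])  # → [True, False, False]
```

The adjustment becomes very conservative as the number of sites grows, which motivates the π0-based and FDR-based alternatives surveyed below.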

On the logic of hypothesis testing in functional imaging

This article investigates the logical bases of current statistical approaches in functional imaging and probes their suitability to inductive inference in neuroscience by recasting the multiple comparison problem into a multivariate Bayesian formulation.

Comparisons of estimators of the number of true null hypotheses and adaptive FDR procedures in multiplicity testing

Many exploratory studies such as microarray experiments require the simultaneous comparison of hundreds or thousands of genes. It is common to see that most genes in many microarray experiments are…

Estimating the proportion of true null hypotheses using the pattern of observed p-values

Several data-driven methods for estimating π0 are proposed that incorporate the distribution pattern of the observed p-values as a practical way to address potential dependence among test statistics. The proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve overall performance.
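
The estimators in this line of work read π0 off the upper part of the p-value distribution. A minimal Storey-type sketch, which uses a single tuning parameter λ rather than the pattern-based estimators the paper proposes, is:

```python
def estimate_pi0(p_values, lam=0.5):
    """Storey-type estimator: p-values from true nulls are uniform on
    (0, 1), so the fraction of p-values above lam estimates
    pi0 * (1 - lam); divide that factor out to recover pi0."""
    m = len(p_values)
    tail = sum(1 for p in p_values if p > lam)
    return min(1.0, tail / ((1.0 - lam) * m))

# 40 strong signals plus 160 uniform nulls out of 200 tests.
p = [0.001] * 40 + [(i + 0.5) / 160 for i in range(160)]
print(estimate_pi0(p))  # → 0.8
```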

Estimating the proportion of true null hypotheses with application in microarray data

A new formulation for the proportion of true null hypotheses, based on the sum of all p-values and the average of expected p-values under the false null hypotheses, has been proposed in the…

False Discovery Rate Control in Magnetic Resonance Imaging Studies via Markov Random Fields

Novel methods are presented that incorporate spatial dependencies into FDR control through Markov random fields; they have desirable statistical properties with regard to FDR control and outperform noncontextual methods in simulations of dependent hypotheses.

Estimating the number of true null hypotheses in multiple hypothesis testing

The overall Type I error computed by traditional means may be inflated if many hypotheses are compared simultaneously. The family-wise error rate (FWER) and false discovery rate (FDR) are…

Comparing methods of analyzing fMRI statistical parametric maps

Parametric Mixture Models for Estimating the Proportion of True Null Hypotheses and Adaptive Control of FDR

Estimation of the proportion or the number of true null hypotheses is an important problem in multiple testing, especially when the number of hypotheses is large. Wu, Guan and Zhao [Biometrics 62…

Nonparametric Analysis of Statistic Images from Functional Mapping Experiments

This work presents a nonparametric approach to significance testing for statistic images from activation studies, replacing formal assumptions with a computationally expensive approach that extends easily to other paradigms, permitting nonparametric analysis of most functional mapping experiments.
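
The permutation machinery such an approach builds on can be sketched for a single site; the paper's contribution, extending it to whole statistic images, is not reproduced here. The `permutation_p_value` helper below is illustrative only:

```python
import itertools

def permutation_p_value(group_a, group_b):
    """Exact two-sample permutation test on the difference in means:
    the p-value is the fraction of relabelings whose absolute mean
    difference is at least as extreme as the observed one."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:
            count += 1
        total += 1
    return count / total

# Two clearly separated groups: only 2 of the 20 relabelings are as
# extreme as the observed split.
print(permutation_p_value([1, 2, 3], [8, 9, 10]))  # → 0.1
```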

Comparing Functional (PET) Images: The Assessment of Significant Change

This report describes an approach that may partially resolve the uncertainty in assessing the significance of statistical parametric maps by modeling the SPM as a stationary stochastic process.

Statistical “Discoveries” and Effect-Size Estimation

Current methods of statistical inference are correct but incomplete. The small probability (α) of wrong null-hypothesis rejections can be misunderstood. Definitive rejections of null hypotheses,…

Step-up multiple testing of parameters with unequally correlated estimates.

It is shown how the step-up multiple test procedure can be extended to include unequally correlated parameter estimates, for example, in experiments involving comparisons among treatment groups with unequal sample sizes.

Controlling the false discovery rate: a practical and powerful approach to multiple testing

The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to…
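
The step-up procedure this paper introduced (Benjamini–Hochberg) can be sketched as:

```python
def benjamini_hochberg(p_values, q=0.05):
    """BH step-up: find the largest k with p_(k) <= k * q / m and
    reject the hypotheses with the k smallest p-values; this controls
    the false discovery rate at q for independent tests."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

# Four of five hypotheses are rejected; a Bonferroni cutoff of
# 0.05 / 5 = 0.01 would have rejected only the first.
print(sum(benjamini_hochberg([0.001, 0.020, 0.030, 0.040, 0.800])))  # → 4
```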

Plots of P-values to evaluate many tests simultaneously

When a large number of tests are made, possibly on the same data, it is proposed to base a simultaneous evaluation of all the tests on a plot of cumulative P-values using the observed…
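
The idea behind such cumulative p-value plots (due to Schweder and Spjøtvoll) is that the count N(t) = #{p_i > t} is roughly linear in 1 − t for the true nulls, with slope equal to the number of true null hypotheses. A rough numerical sketch of that idea, not the paper's exact graphical procedure:

```python
def schweder_spjotvoll_slope(p_values, t_low=0.5):
    """Sketch of the Schweder-Spjotvoll idea: for true nulls the count
    N(t) = #{p_i > t} grows linearly in (1 - t), so a least-squares
    slope fitted over the upper p-value range estimates the number of
    true null hypotheses."""
    ts = [t_low + k * (1.0 - t_low) / 20 for k in range(20)]
    xs = [1.0 - t for t in ts]
    ys = [sum(1 for p in p_values if p > t) for t in ts]
    # Least-squares slope through the origin: sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# 160 uniform nulls mixed with 40 strong signals.
p = [0.001] * 40 + [(i + 0.5) / 160 for i in range(160)]
print(round(schweder_spjotvoll_slope(p)))  # → 160
```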

Statistical Modeling of Positron Emission Tomography Images in Wavelet Space

  • F. Turkheimer, M. Brett, V. Cunningham
  • Mathematics
    Journal of Cerebral Blood Flow and Metabolism
  • 2000
A new method is introduced for the analysis of multiple studies measured with emission tomography that allows the direct estimation of the error for each wavelet coefficient and therefore obtains estimates of the effects of interest under the specified statistical risk.

Methods of correcting for multiple testing: operating characteristics.

The operating characteristics of 17 methods for correcting p-values for multiple testing are examined on synthetic data with known statistical properties; no uniformly best method exists among those examined.

A Three-Dimensional Statistical Analysis for CBF Activation Studies in Human Brain

A simple method is described for determining an approximate p value for the global maximum, based on the theory of Gaussian random fields; it focuses on the Euler characteristic of the set of voxels with values larger than a given threshold.
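
A Monte Carlo sketch of such a global-maximum threshold, under a simplifying assumption of independent sites (the paper's Euler-characteristic result instead handles spatially smooth, correlated Gaussian fields analytically):

```python
import random

def max_stat_threshold(n_sites, alpha=0.05, n_sims=1000, seed=0):
    """Monte Carlo (1 - alpha) threshold for the global maximum of
    n_sites *independent* standard-normal site statistics: simulate
    the null maximum repeatedly and take its empirical quantile."""
    rng = random.Random(seed)
    maxima = sorted(max(rng.gauss(0.0, 1.0) for _ in range(n_sites))
                    for _ in range(n_sims))
    return maxima[int((1 - alpha) * n_sims)]

# The threshold for the global maximum over 1000 sites lies far above
# the single-test 5% cutoff of about 1.64.
threshold = max_stat_threshold(1000)
```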