False discovery rate for functional data

Niels Lundtorp Olsen, Alessia Pini and Simone Vantini
Since Benjamini and Hochberg introduced the false discovery rate (FDR) in their seminal paper, it has become a very popular approach to the multiple comparisons problem. An increasingly popular topic within functional data analysis is local inference, i.e. the continuous statistical testing of a null hypothesis along the domain. The principal issue in this setting is the infinite number of tested hypotheses, which can be seen as an extreme case of the multiple comparisons problem. In this paper, we…

False discovery rate envelopes

The aim of this paper is to develop, based on resampling principles, a graphical envelope that controls FDR and detects the outcomes of all individual hypotheses by a simple rule: the hypothesis is rejected if and only if the empirical test statistic is outside of the envelope.
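The rejection rule stated above is simple to express in code. The sketch below (an illustration of the stated rule only, not the paper's resampling-based construction of the envelope itself; the function name and NumPy usage are this note's own) rejects the hypothesis at each point of the domain iff the observed statistic falls outside the envelope there.

```python
import numpy as np

def envelope_rejections(stat, lower, upper):
    """Pointwise rejection rule for a graphical envelope (illustrative
    sketch): the hypothesis at each domain point is rejected if and only
    if the empirical test statistic lies outside [lower, upper] there."""
    stat, lower, upper = map(np.asarray, (stat, lower, upper))
    return (stat < lower) | (stat > upper)
```

For example, with a constant envelope [-2, 2], statistics of 3.0 or -2.5 are rejected while 0.0 is not.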

Fast and fair simultaneous confidence bands for functional parameters

This work represents a major leap forward in this area by presenting a new methodology for constructing simultaneous confidence bands for functional parameter estimates by integrating and extending tools from Random Field Theory.

Simultaneous inference for functional data in sports biomechanics

IWT, SPM and SnPM appear to have relatively inconsequential differences in terms of domain identification sensitivity, except in cases of extreme signal/noise characteristics, where IWT appears to be superior at identifying a greater portion of the true signal.

The positive false discovery rate: a Bayesian interpretation and the q-value

This work introduces a modified version of the FDR called the “positive false discovery rate” (pFDR), which can be written as a Bayesian posterior probability and can be connected to classification theory.
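For intuition about q-values, the sketch below computes BH-adjusted p-values, which coincide with Storey's q-values under the conservative choice π₀ = 1 (the function name and that simplification are this note's assumptions, not the paper's full estimator): rejecting all hypotheses with q ≤ α reproduces the Benjamini-Hochberg rejections at level α.

```python
import numpy as np

def q_values(pvals):
    """BH-adjusted p-values (q-values with the conservative pi0 = 1):
    q_(i) = min_{j >= i} p_(j) * m / j, clipped to 1."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity, scanning from the largest p-value down
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.minimum(q_sorted, 1.0)
    return q
```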

Testing over a continuum of null hypotheses with False Discovery Rate control

We consider statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses, under the assumption that a suitable single test (and


Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR

Inequalities for the false discovery rate (FDR) under dependence

Inequalities are key tools to prove FDR control of a multiple test. The present paper studies upper and lower bounds for the FDR under various dependence structures of p-values, namely independence,

Controlling the false discovery rate: a practical and powerful approach to multiple testing

The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to
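The Benjamini-Hochberg step-up procedure proposed in this paper is short enough to state as code. The sketch below (function name and NumPy usage are this note's own) sorts the m p-values, finds the largest k with p₍ₖ₎ ≤ (k/m)·α, and rejects the k smallest.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject H_(i) for all
    i <= k, where k is the largest index with p_(k) <= (k/m) * alpha.
    Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest passing index
        reject[order[: k + 1]] = True
    return reject
```

Note the step-up character: once the largest passing index k is found, all k smallest p-values are rejected even if some of them individually exceed their own threshold.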

Interval-wise testing for functional data

It is proved that the unadjusted (adjusted) p-value function controls the probability of type-I error point-wise (interval-wise) and is point-wise (interval-wise) consistent.

False Discovery Control for Random Fields

This article extends false discovery rates to random fields, for which there are uncountably many hypothesis tests. We develop a method for finding regions in the field's domain where there is a

False Discovery Rates for Spatial Signals

A hierarchical testing procedure is developed that first tests clusters, then tests locations within rejected clusters. It is shown formally that this procedure controls the desired location error rate asymptotically, and it is conjectured, supported by extensive simulations, that this also holds in realistic settings.


Simulations show that error levels are maintained under nonasymptotic conditions, and that power is maximized when the smoothing kernel is close in shape and bandwidth to the signal peaks, akin to the matched filter theorem in signal processing.