Controlling the proportion of falsely-rejected hypotheses when
conducting multiple tests with climatological data
Valérie Ventura, Chris Paciorek, James Risbey
Revised April 2004
The analysis of climatological data often involves statistical
significance testing at many locations. While the field significance
approach determines if a field as a whole is significant, a multiple
testing procedure determines which particular tests are significant.
Many such procedures are available, most of which control, for each
individual test, the probability of falsely detecting significance.
The aim of this paper is to introduce the novel 'false
discovery rate' approach, which
controls false rejections in a more meaningful way.
Specifically, it controls a priori the expected proportion of
falsely-rejected tests among all rejected tests; additionally, the
test results are more easily interpretable. The paper also investigates the
best way to apply a false discovery rate (FDR) approach to spatially
correlated data, which are common in climatology. The most
straightforward method for controlling the FDR makes an assumption of
independence between tests, while other FDR-controlling methods make
less stringent assumptions. In a simulation study involving data
with correlation structure similar to that of a real climatological
data set, the simple FDR method does control the
proportion of falsely-rejected hypotheses despite the violation of
assumptions, while a more complicated method involves more
computation with little gain in detecting alternative hypotheses. A
very general method that makes no assumptions controls the proportion
of falsely-rejected hypotheses but at the cost of detecting few
alternative hypotheses. Based on the simulation results, the authors
suggest using the straightforward FDR-controlling method despite its
unrealistic independence assumption, and provide a simple modification
that increases the power to detect alternative hypotheses.
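As a rough sketch of the "most straightforward method" referred to above, the standard Benjamini-Hochberg step-up procedure rejects the k smallest p-values, where k is the largest rank i such that the i-th smallest p-value is at most q*i/n. The function name and the example p-values below are illustrative, not from the paper:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject hypotheses while
    controlling the expected proportion of falsely-rejected tests
    (the FDR) at level q, assuming independent tests."""
    p = np.asarray(p_values, dtype=float)
    n = len(p)
    order = np.argsort(p)                     # ranks p-values, smallest first
    thresholds = q * np.arange(1, n + 1) / n  # q*i/n for i = 1..n
    below = p[order] <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest rank i with p_(i) <= q*i/n
        reject[order[: k + 1]] = True         # reject all tests up to that rank
    return reject

# Illustrative mix of small (likely alternative) and large (likely null) p-values
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.32, 0.54, 0.78, 0.91]
print(benjamini_hochberg(pvals, q=0.05))
```

In this example only the two smallest p-values are rejected: 0.039 exceeds its threshold of 0.05 * 3/10 = 0.015, so the step-up stops at rank 2.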
This tech report has been published. The full citation is:
Ventura, V., C.J. Paciorek, and J.S. Risbey. 2004.
Controlling the proportion of falsely-rejected hypotheses when conducting
multiple tests with climatological data. Journal of Climate 17:4343-4356.