Permutation methods are a class of statistical tests that, under minimal assumptions, can provide exact control of false positives (i.e., type I error). The central assumption is simply that of exchangeability, that is, that swapping data points leaves the data just as likely as the original. With the increasing availability of inexpensive large-scale computational resources and openly shared, large datasets, permutation methods are becoming popular in neuroimaging due to their flexibility and to fewer concerns about yielding nominal error rates than with parametric tests, which rely on assumptions and/or approximations that may be difficult to meet in real data. This becomes even more important in the presence of multiple testing, in that assumptions may not be satisfied for each and every test, and the correlation across tests may be difficult to account for. However, even exchangeability can be violated in the presence of dependence among observations, and it may not always be clear what to permute. The aim of this blog post is to emphasize the relevance of linking the null hypothesis and the dependence structure within the data to what should be shuffled in a permutation test. We provide a few practical examples, and offer some glimpses of the theory along the way.
Example 1: Permutation mechanics

Let's begin by reviewing the mechanics of a permutation test. Consider a comparison between two groups, for example whether hippocampal volume differs between subjects with Alzheimer's disease (AD) and demographically matched cognitively normal controls (that is, a group with similar age, sex, education level, etc.). If we assume that in both groups the hippocampal volumes are independent samples from a Gaussian distribution, a classical parametric two-sample t-test can be used to test for a difference between the means of the two groups. However, this distributional assumption may not hold, and departures from it can lead to incorrect conclusions. In these circumstances, permutation tests offer a valid alternative that requires much weaker assumptions. Specifically, under the null hypothesis that hippocampal volume does not actually differ between AD cases and controls, the group membership (i.e., the case or control label) becomes arbitrary: any subject from one group might as well have been from the other.
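To make these mechanics concrete, here is a minimal sketch of such a two-sample permutation test in Python with NumPy. The hippocampal volume values and sample sizes below are invented purely for illustration, and the difference in group means is used as the test statistic; this is a sketch of the general recipe, not any specific study's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hippocampal volumes (in cm^3), invented for illustration.
volumes_ad = np.array([2.9, 3.1, 2.7, 3.0, 2.8, 2.6])  # AD cases
volumes_cn = np.array([3.4, 3.2, 3.5, 3.1, 3.3, 3.6])  # matched controls

pooled = np.concatenate([volumes_ad, volumes_cn])
n_ad = len(volumes_ad)

def mean_diff(data, n1):
    """Test statistic: difference between the two group means."""
    return data[:n1].mean() - data[n1:].mean()

observed = mean_diff(pooled, n_ad)

# Under the null, the group labels are exchangeable: shuffle the pooled
# data many times, reassign the first n_ad values to the "AD" group, and
# recompute the statistic to build its null distribution.
n_perm = 10000
null_stats = np.empty(n_perm)
for i in range(n_perm):
    null_stats[i] = mean_diff(rng.permutation(pooled), n_ad)

# Two-sided p-value: the proportion of permutations at least as extreme
# as the observed statistic, counting the identity permutation itself.
p = (1 + np.sum(np.abs(null_stats) >= np.abs(observed))) / (n_perm + 1)
print(f"observed difference = {observed:.3f}, p = {p:.4f}")
```

With only 12 subjects, all possible relabelings could in principle be enumerated for an exact test; random shuffling, as above, is the usual approximation when the number of permutations is too large to enumerate.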
At its best, multi-modal imaging offers rich insight into many aspects of brain structure and function. At the same time, its development has been hampered by challenges; for example, simultaneous EEG-fMRI carries additional safety concerns, and the EEG data require extra analysis steps to account for artifacts from the magnetic field and rapidly changing field gradients. Despite these issues, there is increasing attention to the merits of this approach, with high-profile journals dedicating special issues to multi-modal data fusion.
To find out about the promises and pitfalls of multi-modal imaging, we sent a series of questions to members of the OHBM Multi-Modal Imaging Task Force. This team comprises experts in different imaging domains, and aims to promote and develop multi-modal imaging. We learned about the state of the field from Alain Dagher, neurologist and PET/fMRI expert at the Montreal Neurological Institute; Urs Ribary, cognitive neuroscientist and EEG/fMRI expert in British Columbia; Gitte Knudsen, neurologist and translational neurobiologist at Copenhagen University; and Shella Keilholz, physicist and fMRI expert at Emory University and Georgia Tech.