I just love this title: Independence in ROI analysis: where is the voodoo? by Russell Poldrack and Jeanette Mumford. Lately, there has been much fuss about so-called voodoo correlations in social neuroscience, and questions have been raised about the legitimacy of the claims put forward by Vul et al. (PDF of in press manuscript); see also the criticism from Tania Singer and colleagues here (PDF). Central to Vul et al.'s criticism is that the reported correlations between behavioural/social measures and brain activation are implausibly high, or as Vul et al. put it:
Functional Magnetic Resonance Imaging studies of emotion, personality, and social cognition have drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between behavioral and self-report measures of personality or emotion and measures of brain activation. We show that these correlations often exceed what is statistically possible assuming the (evidently rather limited) reliability of both fMRI and personality/emotion measures. The implausibly high correlations are all the more puzzling because method sections rarely contain sufficient detail to ascertain how these correlations were obtained.
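The statistical point behind that claim is the classic attenuation bound: the correlation you can observe between two noisy measures is capped by the square root of the product of their reliabilities. Here is a minimal sketch in Python (the reliability figures are illustrative guesses, not numbers from Vul et al.):

```python
# Spearman's attenuation bound: the observable correlation between two
# noisy measures is capped by the square root of the product of their
# reliabilities. The reliability values below are illustrative assumptions,
# not figures taken from Vul et al.
rel_fmri = 0.7       # assumed test-retest reliability of the fMRI measure
rel_behavior = 0.8   # assumed reliability of the personality/emotion scale

max_observable_r = (rel_fmri * rel_behavior) ** 0.5
print(f"ceiling on observable correlation: {max_observable_r:.2f}")  # ~0.75
```

With those assumed reliabilities the ceiling sits around 0.75, which is the sense in which reported correlations above .8 look implausible.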
In their recent paper in SCAN, Poldrack and Mumford add their own criticism of Vul et al.'s analysis:
We outline the problem of non-independence, and use a previously published dataset to examine the effects of non-independence. These analyses show that very strong correlations (exceeding 0.8) can occur even when the ROI is completely independent of the data being analyzed, suggesting that the claims of Vul et al. regarding the implausibility of these high correlations are incorrect.
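Their argument is easy to get a feel for with a toy simulation (this is not Poldrack and Mumford's own analysis; the sample size and "true" correlation below are made-up illustrations): with only a handful of subjects, sampling variability alone can push an observed correlation past 0.8 a non-trivial fraction of the time, even when the underlying effect is only moderate and the ROI was chosen completely independently of the behavioural data.

```python
# Toy simulation (not Poldrack & Mumford's analysis): with a small sample,
# sampling variability alone can produce observed correlations above 0.8
# even when the underlying correlation is only moderate.
# Sample size and true correlation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 16          # a typical small fMRI group
true_r = 0.5             # assumed moderate population correlation
n_experiments = 10_000

cov = [[1.0, true_r], [true_r, 1.0]]
count_high = 0
for _ in range(n_experiments):
    sample = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)
    r = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]
    if r > 0.8:
        count_high += 1

print(f"P(observed r > 0.8 | true r = {true_r}, n = {n_subjects}): "
      f"{count_high / n_experiments:.3f}")
```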
I am not going to give you the details of the whole voodoo story here, since it has been covered nicely in the blogosphere. But what is interesting is that this discussion demonstrates just how easy it is for researchers criticising brain research to get pre-publication media and blogosphere coverage, much as we saw with Gul and Pesendorfer's neo-conservative critique of neuroeconomics, "The case for mindless economics" (PDF), long before the article actually appeared… I guess that as neuroscience, and neuroimaging in particular, becomes ever more popular, it will always attract extremists on either side: those claiming it shows nothing, or too much, or fake information, and the wholesale version that lets people get away with anything, as long as they point to a brain…
-Thomas
The other thing you would expect as imaging studies become more prolific (if that's possible – >6/day?) is more false positives: findings reported where the true result should be null. Because of small samples, wherever statistical significance is met the correlations will be high. And because so much work is being done, there should be plenty of statistically significant findings, whether the field is rich and full of good results or essentially null. The Vul paper covers a little of this, but Ioannidis, 'Why most published research findings are false' (2005, PLoS Med) is a very good cautionary note for anyone getting too excited about imaging. If an fMRI measure correlates with a psychological measure better than the test-retest reliability of the psychological method allows, this almost has to be an aberration. If not, we live in a strange world with localised Cartesian theaters, each having strange time characteristics.
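To put a rough number on the small-sample point: the minimum correlation that reaches two-tailed p < .05 shrinks only slowly with sample size, so with typical fMRI group sizes only large correlations can be reported as significant at all. A quick sketch (the sample sizes below are arbitrary examples):

```python
# Minimum |r| that reaches two-tailed p < .05 for a given sample size:
# with small n, only large correlations survive the significance filter,
# so published (significant) correlations are necessarily big.
from scipy import stats

for n in (10, 16, 20, 30, 50, 100):
    df = n - 2
    t_crit = stats.t.ppf(0.975, df)                 # two-tailed, alpha = .05
    r_crit = t_crit / (t_crit**2 + df) ** 0.5
    print(f"n = {n:3d}: |r| must exceed {r_crit:.2f} to be significant")
```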