Natural Rationality | decision-making in the economy of nature


Brain and Politics: Anatomy of a Scandal

image source: Wired

I was not alone in finding something odd about the imaging study published in the NYT.

Here I have tried to synthesize the main arguments against the study. This could be of interest to researchers in social epistemology.

And if you are interested in more serious research, see this (less sexy) paper:

Ballew, C. C., & Todorov, A. (2007). Predicting political elections from rapid and unreflective face judgments. Proceedings of the National Academy of Sciences, 104(46), 17948-17953.

1. Unwarranted claims

As cognitive neuroscientists who use the same brain imaging technology, we know that it is not possible to definitively determine whether a person is anxious or feeling connected simply by looking at activity in a particular brain region. (Aron et al.)

2. “Just so” stories, unwarranted interpretation of the results

brain regions are typically engaged by many mental states, and thus a one-to-one mapping between a brain region and a mental state is not possible. (Aron et al.)
The scattered spots of activation in a brain image can be like tea leaves in the bottom of a cup – ambiguous and accommodating of a large number of possible interpretations. The Edwards insula activation might indicate disgust, but it might also indicate thoughts of pain or other bodily sensations or a sense of unfairness, to mention just a few of the mental states associated with insula activation. And of course the possibility remains that the insula activation engendered by Edwards represents some other feeling altogether, yet to be associated with the insula. The Romney amygdala activation might indicate anxiety, or any of a number of other feelings that are associated with the amygdala – anger, happiness, even sexual excitement. (Farah)
Didn’t I just say that the amygdala is involved in positive emotions, too? So what does amygdala activation mean, then? Studies have also shown that mere emotional uncertainty (e.g., a neutral face) may activate the amygdala. (Ramsoy)

3. Unwarranted process

(...) the peer review process is critical to understanding whether the data are sound or based on faulty methodology. Unfortunately, the results reported in the article were apparently not peer-reviewed (...) we are distressed by the publication of research in the press that has not undergone peer review. (Aron et al.)

Their imaging study has not been published in any science journal, nor has it been vetted by experts in the field; it can't rightly be called an "experiment," since the authors weren't testing any particular hypothesis; and the arbitrary conclusions they draw from the data aren't even consistent with their own previous research. (Engber)

we don’t have a scientific reference, and only have to take the authors’ word for it. It’s a violation of every sensible way to report findings from a scientific method in the press. IMO, before you can do such a thing, you should at least (!) have a manuscript that is accepted, let alone published. And if you choose to do a test for the media, just “for fun”, then say so! This article pretends to be scientifically correct. It is not. (Ramsoy)

How can we tell whether the interpretations offered by Iacoboni and colleagues are adequately constrained by the data, or are primarily just-so stories? By testing their methods using images for which we know the “right answer.” If the UCLA group would select a group of individuals for which we can all agree in advance on the likely attitudes of a given set of subjects, they could carry out imaging studies like the ones they reported today and then, blind to the identity of personage and subject for each set of scans, interpret the patterns of activation. (Farah)

And as we reported in 2006, similar nonsense was repeated with the Super Bowl ads, by (guess who) the same team. None of these studies have ever been published in scientific journals, so why does Iacoboni, who does lots of respectable cognitive neuroscience, keep running these essentially meaningless studies? (Bell)

4. Absence of experimental details

problems of interpretation with brain imaging studies can be avoided only by careful experimental design (...) [no] sufficient detail [was] provided to evaluate the conclusions. (Aron et al.)

Iacoboni's team were on the front page of the NYT in 2004 with almost exactly the same stunt - attempting to use brain scans to predict responses when viewing political campaign ads.
The 'study' details have mysteriously gone from the web but are still archived if you want to see history repeating itself. (Bell)

the authors look at brain blobs and try to interpret their meanings in terms of previous knowledge. Is that bad? Yes it is, because it does not even attempt to put up testable hypotheses. And why don’t we get to know what is meant by “more active” or “respond more strongly”? What is this activation compared to? What is the contrast, the baseline? Even further, what is the statistical cutoff and how many other regions light up during conditions X, Y or Z? Where are all the tech specs that validate this study? (Ramsoy)

5. Dubious source and possible conflicts of interest (FKF Applied Research, a neuromarketing firm)

The study comes straight from FKF Applied Research, a D.C.-based "neuromarketing" firm that conducts brain-based focus groups for Fortune 500 companies. For the past two years, FKF has finagled widespread coverage of its business by conducting spurious fMRI analyses of Super Bowl commercials and then announcing the winners and losers. (See, for example, "This Is Your Brain on a Super Bowl Ad.") Business Week, Time, Reuters, and MSNBC have all boosted the company's bottom line with free publicity, but no publication has been nearly as generous as the Times. To date, the paper has published eight articles about the company (including one on the front page) since it was founded three years ago. And now, as of Sunday, the Times has gone so far as to run two op-ed columns by FKF's Josh Freedman with exactly the same title. In neither case did the newspaper disclose his connection to the firm.
As the authors of what is essentially an extended FKF advertorial, Freedman and his colleagues have a strong incentive to tout their services and sex up the findings. (Engber)

All of these stunts are essentially PR for FKF Applied Research, a 'neuromarketing company' who will carry out bespoke brain scan marketing studies for a price. Iacoboni is not listed as a staff member but he's been associated with most of their previous media stunts and four out of five FKF staff are co-authors on the NYT article. We can bet there's some pretty strong connection there. (Bell)

6. Confused presentation of the data

(...) in the brain states of subsets of the subjects, for example just the men or just the most negative voters. Some concern the brain states of the subjects early on in the scan compared with later in the scan. Some concern responses to still photos or to videos specifically. (Farah)

7. Confused conclusion

many of their conclusions seem either haphazard or comically vague. Take their first point: When test subjects were shown the name of a political party—either the words Republican, Democrat, or Independent—they responded with neural activity in the amygdala, the insula, and the striatum. According to the authors, these regions of the brain correspond to feelings of anxiety, disgust, and pleasure. Really, all three? From that meaningless mishmash of emotions, they meekly conclude that "voters sense both peril and promise in party brands." (Engber)

8. Contradictions with their previous study

A look back at the findings from 2004 casts doubt on their other conclusions as well. In 2007, activation of the superior temporal sulcus and the inferior frontal cortex was deemed a good sign for Fred Thompson—he was inspiring empathy from prospective voters. But in the previous study, activation of the same so-called "mirror neuron system" occurred only when voters viewed candidates of the opposing party, whom they despised. Likewise, when brain scans turned up relatively little activity in response to images of Barack Obama and John McCain, the authors concluded that these candidates "have work to do." But similar data from the 2004 experiment suggested just the opposite: Highly partisan voters showed much less brain activity when presented with the candidates they supported.
Across two analogous studies, the FKF team has interpreted the very same patterns of brain activity in very different ways—indeed, in opposite ways. (Engber)

9. Dubious motives

The fact that the UCLA study involved brain imaging will garner it more attention, and possibly more credibility among the general public, than if it had used only behavioral measures like questionnaires or people’s facial expressions as they watched the candidates. Because brain imaging is a more high tech approach, it also seems more “scientific” and perhaps even more “objective.” Of course, these last two terms do not necessarily apply. Depending on the way the output of UCLA’s multimillion dollar 3-Tesla scanner is interpreted, the result may be objective and scientific, or of no more value than tea leaves. (Farah)

This Is Your Brain on Politics

Iacoboni et al, NYT

Politics and the Brain
(Aron et al., a group of 17 neuroscientists), NYT

This is Your Brain on Politics? (Farah Guest Post)
Martha Farah, Neuroethics & Law Blog

Neuropundits Gone Wild! Befuddling brain science on the opinion pages of the New York Times.
Daniel Engber, Slate

The death of critical science journalism in NY Times?
Thomas Ramsoy, Brainethics blog

Election brain scan nonsense
Vaughan Bell, Mind Hacks