
Archive for June, 2006

Today's Science carries a review of what appears to be an interesting book. (I haven't read it myself, so I am relying on the reviewer here.) The book is Campaigning for Hearts and Minds by Ted Brader, an assistant professor at the University of Michigan. It deals with political advertisements and how they work to target voters' emotions. Brader analyses how subtle cues, such as music or brightly coloured images of children, can frame the ads' contents in specific ways. And, following this analysis, he argues that "enthusiastic" and "fear-inducing" ads elicit different mental reactions in those who watch them. In the words of James Druckman, the reviewer:

Enthusiastic ads motivate individuals to participate (e.g., willingness to volunteer, intention to vote), and once participating, these individuals are likely to become even more committed to their prior preferences. The implication is that enthusiasm leads to political polarization by pushing voters to take action on behalf of their prior convictions. Fear ads have less participatory power – although to some extent they motivate sophisticated individuals. But fear can open the gates of persuasion, and these ads tend to cause individuals to consider new information and possibly change their political preferences.

I cannot see from the review whether or not Brader considers the now vast neurobiological literature on preference formation and decision-making, but it would be an obvious move for political scientists to make. In 2003 the journal Political Psychology (Vol. 24, Issue 4) attempted such an integration, but I am not sure it has had much impact on political science yet. For the rest of us, the main lesson is to turn off the tube when those attack ads come on!

References

 

Brader, T. (2006): Campaigning for hearts and minds. University of Chicago Press.

Druckman, J. (2006): Stroking the voters’ passions. Science 312: 1878-1879.

Winkielman, P. & Berridge, K. (2003): Irrational wanting and subrational liking: How rudimentary motivational and affective processes shape preferences and choices. Political Psychology 24: 657-680.

Lieberman, M.D., Schreiber, D. & Ochsner, K.N. (2003): Is political cognition like riding a bicycle? How cognitive neuroscience can inform research on political thinking. Political Psychology 24: 681-704.

-Martin

 

Read Full Post »

The other day we got an invitation from Martha Farah to join the soon-to-be launched Neuroethics Society. Of course we're going to join! Here is part of the email:

Why do we think it is worth forming a society at this time? As you know, people interested in neuroethics have been interacting through the occasional meeting or conference symposium, but have not so far participated in any larger scale or more permanent organization. We believe that such an organization is needed to promote the kind of sustained interaction, learning and critical discussion that will strengthen our field. It is also needed to help draw new people into the field, a critical next step for continued progress in the field.

How the society will develop remains to be seen. Indeed, we are hoping for your active participation in the process. However, for the sake of starting somewhere, we created a provisional system of governance, plans for the next two years’ meetings, and the following Mission Statement:

We are an interdisciplinary group of scholars, scientists and clinicians who share an interest in the social, legal, ethical and policy implications of advances in neuroscience. The late 20th century saw unprecedented progress in the basic sciences of mind and brain and in the treatment of psychiatric and neurologic disorders. Now, in the early 21st century, neuroscience plays an expanding role in human life beyond the research lab and clinic. In classrooms, courtrooms, offices and homes around the world, neuroscience is giving us powerful new tools for achieving our goals and prompting a new understanding of ourselves as social, moral and spiritual beings. The mission of the Neuroethics Society is to promote the development and responsible application of neuroscience through better understanding of its capabilities and its consequences.

Steve Hyman of Harvard has agreed to be our first President. Other information about governance and initial meetings is listed on the society website, neuroethicssociety.org, which should be live any day now!

It seems the site is already up, and content is being added continuously. The society plans to hold satellite meetings at the 2008 and 2009 Cognitive Neuroscience Society conferences in San Francisco and New York, respectively. Hopefully, they'll also hold a meeting in 2006 or 2007. We hope that we'll be able to join in.

See also this brief article (PDF) that the Society links to. It is notable that one of the new areas underscoring the need for neuroethics is the claimed use of neuroimaging in lie detection.

-Thomas

Read Full Post »

The good people at the Dana Press have been kind enough to send us an advance copy of Jonathan Moreno's forthcoming book, Mind Wars. Since it is not scheduled for publication until November (in the US; December in Europe), I will wait a few months before recording my thoughts about it here on the blog, but rest assured that you will hear about this book again. Mind Wars is the first book, as far as I know, to survey the American military's use of, and involvement in, neuroscience research. Besides being a highly interesting topic in itself, this also makes it the first neuroethics tome to give a really in-depth picture of just how seriously government and other parties are considering using neuroscience insights and methods to monitor and alter our brains. I predict it will make quite a splash in the press when November comes.

For now I want to highlight a concept much touted by Moreno: dual use. Briefly, "dual use" refers to the idea that much brain research may serve both medical and military ends. Consider brain-machine interface research. Clever devices that allow brain-damaged patients, for instance patients with locked-in syndrome, to control computers or prostheses can also be used by soldiers to control weapons and other war-related artefacts. When promoting projects or seeking funding, many military researchers therefore make a big deal of stressing how putative results may turn out to benefit health care as much as warfighting. As Moreno points out, a rather big chunk of the US neuroscience effort is either directly or indirectly funded by such dual-use programmes. Moreover, even research with no overt military funding, and with no apparent relevance to warfighting, may end up being exploited for military purposes, as neuroscience journals are routinely perused by military authorities. Perhaps this knowledge ought to give brain researchers pause. On the other hand, it should also be noted that it is often only because of money from military budgets that important research gets done in the first place. So the ethical conundrum of dual use is not quite that simple.

In reality, when it comes to neuroscience, "dual use" should probably be a very general and basic concept. It is hard to imagine any result stemming from our inquiry into the brain – perception, memory, motor control, etc. – which cannot be used for both "good" and "bad". Of course, you could say the same about many other kinds of scientific results. But being the seat of the soul, the brain is really the dual-use organ par excellence, and we ought to think more about the problems dual use raises.

-Martin

Read Full Post »

Our willingness to engage in punitive acts is a key part of our society. So claims a recent article in Science. Through the experiments of Milgram, Asch, Zimbardo, and Sherif, psychologists have studied humans' engagement in costly social relationships with non-kin. But because many of these experiments were done only on students, it has been hard to extrapolate the results to the entire population, and harder still to use them to understand differences between cultures.

In this week's Science report, a team of scientists studied social interaction in different cultures, using three different social psychology experiments. The first was an ultimatum game:

(…) two anonymous players are allotted a sum of real money (the stake) in a one-shot interaction. The first player (player 1) can offer a portion of this sum to a second player, player 2 (offers were restricted to 10% increments of the stake). Player 2, before hearing the actual amount offered by player 1, must decide whether to accept or reject each of the possible offers, and these decisions are binding. If player 2 specified acceptance of the actual offer amount, then he or she receives the amount of the offer and player 1 receives the rest. If player 2 specified a rejection of the amount actually offered, both players receive zero. If people are motivated purely by self-interest, player 2s will always accept any positive offer; knowing this, player 1 should offer the smallest nonzero amount. Because this is a one-shot anonymous interaction, player 2's willingness to reject provides one measure of costly punishment, termed second-party punishment

The second game was a third-party punishment game (PDF):

(…) two players are allotted a sum of real money (the stake), and a third player gets one-half of this amount. Player 1 must decide how much of the stake to give to player 2 (who makes no decisions). Then, before hearing the actual amount player 1 allocated to player 2, player 3 has to decide whether to pay 10% of the stake (20% of his or her allocation) to punish player 1, causing player 1 to suffer a deduction of 30% of the stake from the amount kept. Player 3's punishment strategy is elicited for all possible offers by player 1. For example, suppose the stake is $100: if player 1 gives $10 to player 2 (and keeps $90) and player 3 wants to punish this offer amount, then player 1 takes home $60; player 2, $10; and player 3, $40. If player 3 had instead decided not to punish offers of 10%, then the take-home amounts would have been $90, $10, and $50, respectively. In this anonymous one-shot game, a purely self-interested player 3 would never pay to punish player 1. Knowing this, a self-interested player 1 should always offer zero to player 2. Thus, an individual's willingness to pay to punish provides a direct measure of the person's taste for a second type of costly punishment, third-party punishment.

The third game was a dictator game:

The [dictator game] is the same as the [ultimatum game] except that player 2 cannot reject. Player 1 merely dictates the portions of the stake received by himself or herself and player 2. In this one-shot anonymous game, a purely self-interested individual would offer zero; thus, offers in the [dictator game] provide a measure of a kind of behavioral altruism that is not directly linked to kinship, reciprocity, reputation, or the immediate threat of punishment.
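
To keep the three designs straight, here is a minimal sketch of their payoff rules in Python. The function names and the example stake are mine, chosen purely for illustration; only the payoff logic follows the descriptions quoted above.

```python
# Toy payoff rules for the three one-shot games described above.
# "offer" is the amount of the stake given to player 2.

def ultimatum_game(stake, offer, responder_accepts):
    """Second-party punishment: player 2 can reject, destroying both payoffs."""
    if responder_accepts:
        return {"player1": stake - offer, "player2": offer}
    return {"player1": 0, "player2": 0}

def third_party_punishment_game(stake, offer, punisher_punishes):
    """Third-party punishment: player 3 pays 10% of the stake to dock player 1 by 30%."""
    p1, p2, p3 = stake - offer, offer, stake / 2
    if punisher_punishes:
        p1 -= 0.3 * stake   # sanction suffered by player 1
        p3 -= 0.1 * stake   # cost borne by player 3
    return {"player1": p1, "player2": p2, "player3": p3}

def dictator_game(stake, offer):
    """No punishment possible: player 1 simply dictates the split."""
    return {"player1": stake - offer, "player2": offer}

# Reproducing the worked $100 example from the quote above:
print(third_party_punishment_game(100, 10, punisher_punishes=True))   # player 1: 60, player 2: 10, player 3: 40
print(third_party_punishment_game(100, 10, punisher_punishes=False))  # player 1: 90, player 2: 10, player 3: 50
```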

Regardless of culture, the findings showed that for both measures of costly punishment, the proportion of individuals choosing to punish increased as offers approached zero. But there were also substantial cultural differences, especially in people's overall willingness to punish unequal offers. In some cultures, offers as low as 10% were accepted without punishment, while other cultures were far more inclined to reject such a deal.

How do these cultural differences come about? Is there a relationship between people's willingness to share (altruism) and a culture's level of costly punishment? The researchers plotted the relationship between the minimum offers that cultural groups were willing to accept (x axis) and the mean offer in the dictator game (y axis):

punishment.gif
These results demonstrate a positive relationship between a group's willingness to punish low offers and its willingness to share (i.e. altruism). In other words, in cultures where you are expected to share, you give more, even though others have no way to threaten or punish you.

The researchers conclude:

These three results are consistent with recent evolutionary models of altruistic punishment. In particular, culture-gene coevolutionary models that combine strategies of cooperation and punishment predict that local learning dynamics generate between-group variation as different groups arrive at different "cultural" equilibria. These local learning dynamics create social environments that favor the genetic evolution of psychologies that predispose people to administer, anticipate, and avoid punishment (by learning local norms). Alternative explanations of the costly punishment and altruistic behavior observed in our experiments have not yet been formulated in a manner that can account for stable between-group variation or the positive covariation between altruism and punishment. Whether the co-evolution of cultural norms and genes or some other framework is ultimately correct, these results more sharply delineate the species-level patterns of social behavior that a successful theory of human cooperation must address.

-Thomas

Read Full Post »

It's a long shot, I know. We'll never see a Nature Neuroethics or a Trends in Neuroethics. But this week's issue of Nature caught me by surprise with two articles on ethical aspects of neuroscience. It really demonstrates how hot and important this issue is.

Basically, both articles are about the use of brain scanners to detect lies. The first article is a bit broader in scope, though. Here, the Nature editor looks more generally at the ethical discussions – or lack thereof – in the neuroscience community. While other scientific branches, e.g. genetics, have made ethics part of their curriculum, neuroscience is lagging behind.

From the article:

Neuroscientists have reasons for their reluctance to wade into ethics. The questions raised are likely to be open-ended, and their arrival in the world outside the laboratory may be some way off. Whereas a genetic test can say something definitive about a particular genetic make-up, and therefore about predisposition to disease, for example, an fMRI scan is just an indirect measure of neural activity based on oxygenated blood flow. For now, neuroscientists have only the most basic grasp of what this says about how the brain processes information.

Is neuroscience really lagging behind? Is it not unfair to compare the ethical discussions in neuroscience with those in genetics? While modern genetics has had the better part of a century to mature, neuroscience is basically in its infancy. In fact, do we really know with great certainty what we are looking at in a functional MRI scan? Well, we know it's a mixture of blood oxygenation, vascular response and neural activity, but fully understanding what an activation blob really means is a different matter. Yes, your orbitofrontal cortex lights up when you're lying. But why? And how? What does it signify?

Until we have clear answers, the message will remain muddled and the implications will drown in technicalities.

The second article concerns a specific topic – lie detection – and I'm afraid I'll muddy the waters a bit on this issue. The background is that two companies – Cephos and No Lie MRI – have been founded to use MRI scans to detect lies. Martin has blogged about lie detection studies previously. Here, I'd like to remind you of a previous blog entry I wrote on the problems of group studies. Basically, results found at the group level cannot necessarily be found in individual scans. In group studies, we are looking at a mean effect. Does that mean that a person with very high activation of the orbitofrontal cortex is a pathological liar? No! Mind you: two people can have very different BOLD fMRI signals, our unit of measurement. The signal depends on several factors, such as hours of sleep, the state of the vascular system, and caffeine and nicotine use. Even within the same person, we find day-to-day (and hour-to-hour) changes in the baseline BOLD signal. So it is indeed very hard to move from the group level to the individual. At this stage, I think it's impossible – and it should be avoided.
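To make the group-versus-individual point concrete, here is a toy simulation (purely made-up numbers, not real fMRI data): individual "lie minus truth" effects are drawn with a small true mean but large between-subject variability, so the group test tends to be clearly significant while a sizeable fraction of individuals show no effect, or even the opposite sign.

```python
# Toy illustration: a reliable group-level effect with poor individual-level separation.
# Simulated values only; nothing here is based on an actual lie-detection dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 40
true_effect = 0.5          # modest average "lie > truth" BOLD difference (arbitrary units)
between_subject_sd = 1.0   # large variability from sleep, vasculature, caffeine, etc.

lie_minus_truth = rng.normal(true_effect, between_subject_sd, n_subjects)

# Group level: with these parameters a one-sample t-test is usually significant...
t, p = stats.ttest_1samp(lie_minus_truth, 0.0)
print(f"group effect: t = {t:.2f}, p = {p:.4f}")

# ...yet roughly a third of simulated subjects are expected to show an effect of the
# wrong sign, so a single scan is a weak basis for calling any one person a liar.
n_wrong_sign = int((lie_minus_truth < 0).sum())
print(f"subjects with a negative 'lie' effect: {n_wrong_sign} of {n_subjects}")
```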

In the article, Judy Illes says something similar:

Until we sort out the scientific, technological and ethical issues, we need to proceed with extreme caution.

Better still, Sean Spence of the University of Sheffield, UK, says:

On individual scans it’s really very difficult to judge who’s lying and who’s telling the truth.

Finally, the same problems that plague the polygraph persist: we don't know what a lie really is or why people lie, and we won't catch those who don't believe they are lying. Today, any kind of lie detection is a risky business. And I wouldn't put my money on Cephos or No Lie MRI. Honestly.

-Thomas

Read Full Post »

After a long time in the making, the study by Schaefer et al. on how we perceive familiar car brands is finally out in NeuroImage. Basically, they showed different car brand logos to subjects; some brands were culturally well known to the (European) subjects, while others were less familiar or unknown. Known brands included BMW and Mercedes-Benz; unknown brands included Buick and Acura.

The main analysis compared activity while looking at known brands with activity while looking at unknown brands. This contrast yielded significant activation in the medial prefrontal cortex.
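
For readers who wonder what such a contrast amounts to in practice, here is a minimal, hypothetical sketch: per-subject response estimates for the two conditions (e.g. beta values from a medial prefrontal region of interest) are compared with a paired test. The sample size, values and variable names are mine for illustration, not Schaefer et al.'s.

```python
# Hypothetical "known > unknown brand" contrast on per-subject ROI estimates.
# Simulated numbers only; this does not reproduce the actual study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects = 14                                    # illustrative, not the study's sample size
beta_known = rng.normal(0.6, 0.4, n_subjects)      # mean response to culturally familiar logos
beta_unknown = rng.normal(0.2, 0.4, n_subjects)    # mean response to unfamiliar logos

# Paired t-test across subjects on the known-minus-unknown difference:
t, p = stats.ttest_rel(beta_known, beta_unknown)
print(f"known > unknown: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```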

The strange thing about this study – and IMO something that confuses the interpretation – is the task that the subjects were to perform:

(…) subjects were instructed that they will see logos of familiar and unfamiliar car manufacturers and that they should imagine using and driving a product of the brand they see. If they would see a logo of a car manufacturer they did not know, they should imagine driving and using a generic car.

So here I lie, looking at the Acura logo, knowing that it's a car, and trying to imagine what it is like to drive one. I have no idea what it looks like, and I might well let my imagination run quite a bit. My first impression of Acura is a fringy "made in Taiwan" feeling. Not a good car. So I imagine driving some kind of uncomfortable car that can barely take my 193 cm and 100 kilos, tilting to one side … and so on. Then I get the Porsche logo and think straight away of that particular Porsche my neighbour has. Now I'm driving it, with all its neat features and noiseless interior.

car.jpg

 

My point here is: known brands don't require deliberation per se. You automatically think of one particular car, and it is visualized immediately. An unknown brand at most conjures up some kind of generic automobile, and you have to deliberate more about how you are driving it. So there is a definite difference in visual imagery between the two conditions.

To circumvent this problem, one should rather use a broader range of known brands, e.g. including Fiat, Opel and Renault. This would make it possible to compare the effect of known vs. unknown brands while avoiding the confound between brand familiarity and socioeconomic status.

The researchers furthermore argue that "it seems that the imagination of driving a familiar car had led the subjects to develop self-relevant thoughts". Sorry, what does that mean? Self-relevant thoughts? So my imagining driving an uncomfortable car was not a self-relevant stream of thought? I strongly oppose such speculation. What is even more problematic about the claim is that there are no reports of what the subjects actually thought during the study. So even speculating is very problematic. This brings to mind a recent article by Friston and colleagues (PDF) that criticizes this very kind of reasoning in neuroimaging studies.

The authors speculate that "the way how brands affect our behavior can be described with the idea of somatic markers based on the theory of Damasio". Given the connectivity between the amygdala and the medial prefrontal cortex, this is a plausible suggestion. Bear in mind, however, that the somatic marker hypothesis is itself contested, and researchers such as Edmund Rolls have argued against it with very good evidence.

Here is the abstract:

Brands have a high impact on people's economic decisions. People may prefer products of brands even among almost identical products. Brands can be defined as cultural-based symbols, which promise certain advantages of a product. Recent studies suggest that the prefrontal cortex may be crucial for the processing of brand knowledge. The aim of this study was to examine the neural correlates of culturally based brands. We confronted subjects with logos of car manufacturers during an fMRI session and instructed them to imagine and use a car of these companies. As a control condition, we used graphically comparable logos of car manufacturers that were unfamiliar to the culture of the subjects participating in this study. If they did not know the logo of the brand, they were told to imagine and use a generic car. Results showed activation of a single region in the medial prefrontal cortex related to the logos of the culturally familiar brands. We discuss the results as self-relevant processing induced by the imagined use of cars of familiar brands and suggest that the prefrontal cortex plays a crucial role for processing culturally based brands.

-Thomas

Read Full Post »

The latest issue of the journal Progress in Neuro-Psychopharmacology and Biological Psychiatry is a very interesting special issue on how evolutionary theory and psychology can be integrated into psychiatric thinking about mental disease. This approach is also called "evolutionary psychiatry".

OK, you might ask: we can accept that evolutionary theory can explain why we behave as we do today, i.e. normal cognition and behaviour. But psychiatric disease? Is this an attempt to explain the evolution of a disease? The straightforward answer is a resounding "no". Evolutionary approaches to disease – including mental disease – are attempts to describe and explain the design characteristics that make us susceptible to disease (from Nesse & Williams, 1996). The evolutionary trajectory of humans is far from a journey towards perfection. We are full of somatic and mental shortcomings; the appendix, near-sightedness, a bottlenecked attentional system, and the like are examples of this.

Another important issue is that the border between normal and abnormal psychology is becoming increasingly blurred. That may sound like a problem, but it actually reflects a change in our understanding of how our minds come to be, and especially of how normal variation extends into pathological domains. In this sense, it is hard to draw watertight boundaries between normal and abnormal psychology. We work on a continuum, and modern evolutionary psychiatry makes a good case for such an approach.

Here is the introduction by Dan J. Stein:

Darwin wrote extensively on the implications of evolutionary theory for understanding human psychology, and made a special effort to gain access to observations on the insane. Pioneering ethologists, such as Tinbergen, emphasized the value of their work for understanding psychiatric disorders. Today, evolutionary psychology has become an accepted scientific field, and the area of evolutionary psychiatry has similarly shown increasing maturity. This special issue of “Progress in Neuropsychopharmacology and Biological Psychiatry” provides an opportunity to review some of these developments.

An introductory paper by Stein summarizes some of the history of evolutionary psychiatry in general, and considers its application to psychopharmacology in particular. Panksepp then outlines a number of conceptual issues at the heart of evolutionary psychiatry, arguing that there are flaws in much of the work that has been undertaken in evolutionary psychology and psychiatry, and proposing a novel way forwards. These introductory papers emphasize the possibility that the focus of evolutionary psychiatry on distal mechanisms can be usefully integrated with advances in our understanding of the proximal mechanisms that underpin psychopathology.

Several of the contributions in this issue go on to exemplify the application of the constructs of evolutionary psychiatry to particular mental disorders. Crow and Burns put forward different evolutionary approaches to schizophrenia, Allen and Badcock review evolutionary approaches to depression, Bracha focuses on the anxiety disorders, Feygin and colleagues on obsessive-compulsive disorder, and Baron–Cohen on autism. Although there are important challenges for evolutionary psychiatry going forwards, as the sophistication of its concepts and methods increases, so likely will its influence on the understanding of psychopathology.

A number of authors in the area of evolutionary psychiatry are also interested in more fundamental issues in evolution of brain and behavior. Thus, Crow is crucially concerned with the evolution of language, Burns emphasizes the evolution of social cognition, Baron–Cohen’s work touches on the importance of gender differences in systematizing and empathizing abilities, and Feygin and colleagues’ work addresses the evolutionary bases of religious ritual and loving attachment. Indeed, there is an argument that fundamental progress in evolutionary theory will be reliant on the resolution of key questions about mind and its illnesses.

-Thomas

Read Full Post »

Older Posts »