Archive for the ‘free will’ Category

Just noticed this very attractive title by the Brafman brothers. The book, Sway: The Irresistible Pull of Irrational Behavior, “will challenge your every thought”, according to a NY Times review. It gets similarly good reviews from other prominent people, like Michael Shermer, author of the recent book The Mind of the Market, which I blogged about recently.

I found a couple of good videos on this book that are worth sharing:

A longer version with more nuances can be seen here:

So after this, you get the idea: unconscious, automatic thought patterns act out and cause irrational behaviours, sometimes at the worst possible time and place. The questions raised are, of course, interesting and important. Why do we sometimes make horrific decisions despite having all the information needed to make better ones? Why do prominent people, like George W. Bush, suffer from loss aversion, leading to billions of dollars spent and thousands of lives lost? Because it’s “too late” to pull out? Because the pain of acknowledging defeat, error or insufficiency is greater than the benefit of sparing yet more money and lives?

Other examples can be found on Wall Street, in the military, among aircraft captains, and even in yourself, maybe even on a daily basis. Taken together, the examples presented in these videos and the book demonstrate that we are all susceptible to making these kinds of errors. The next and better step is, of course, to identify these errors in ourselves (and others) and act on them in time. Coaching, anyone?

I guess I should read the book, if the publishers will send me a copy 8)


Read Full Post »

Can antidepressive medicine alter your decision behaviour? A recent paper in Science demonstrates that alterations in subjects’ serotonin levels lead to significant changes in their decision-making behaviour. In the study, subjects played the Ultimatum Game repeatedly. They performed the task twice, on two different days, and on one of the days they underwent acute tryptophan depletion (ATD), i.e., their serotonin levels were lowered for a period of time. The design was double-blind and placebo-controlled.

The Ultimatum Game is an experimental economics game in which two players interact to decide how to divide a sum of money that is given to them. The first player proposes how to divide the sum between themselves, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once, and anonymously, so that reciprocation is not an issue.
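The game’s payoff rule is simple enough to sketch. In this toy version, the responder’s rejection threshold is a hypothetical parameter standing in for fairness sensitivity; it is not a quantity from the study:

```python
# Minimal sketch of one anonymous, one-shot Ultimatum Game round.
# The rejection_threshold is a hypothetical fairness-sensitivity
# parameter, not a value from the Science paper.

def ultimatum_round(total, offer, rejection_threshold):
    """Return (proposer_payoff, responder_payoff) for one round.

    The responder accepts any offer at or above their threshold
    (expressed as a fraction of the total) and rejects anything
    below it, in which case neither player receives anything.
    """
    if offer / total >= rejection_threshold:
        return total - offer, offer   # proposal accepted: split as proposed
    return 0, 0                       # proposal rejected: nobody gets paid

# A fair offer is accepted even by a picky responder...
print(ultimatum_round(10, 5, rejection_threshold=0.3))   # (5, 5)
# ...while an unfair offer is rejected, costing both players.
print(ultimatum_round(10, 2, rejection_threshold=0.3))   # (0, 0)
```

On this picture, the ATD result amounts to saying that lowered serotonin raises the threshold at which unfair offers get rejected.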

What the researchers found was that ATD led subjects to reject more offers, but only unfair ones. That is, ATD did not interact with offer size per se, and there was no change in mood, fairness judgement, basic reward processing or response inhibition. So serotonin seems specifically to modulate aversive reactions to unfair offers.

The study is a nice illustration of how we are now learning to induce alterations in preferences and decision making. Along with other studies using, e.g., oxytocin to increase trust in economic games (see also my previous post about this experiment), it suggests that increasing serotonin levels might actually make subjects less responsive to unfair offers.

This knowledge also raises a wide range of ethical problems. If our preferences and decisions can really be influenced by these substances, can this be abused? It should be mentioned that many of them are not easily detected (oxytocin, for instance, is odourless), so we could be influenced without our consent or knowledge. Possible applications include casinos, stores (e.g. for expensive cars), dating agencies and so on. If we did not accept subliminal messages in ads, how can we accept this?


Read Full Post »

In relation to our previous and well-visited post about oxytocin, we should mention a new study that uses this very substance in a neuroeconomic set-up. In the study, recently published in Neuron and headed by Baumgartner et al., it was found that the administration of oxytocin affected subjects’ behaviour in a trust game. In particular, subjects who received oxytocin were not affected by information about co-players who cheated. Or, as put in the abstract:

(…) subjects in the oxytocin group show no change in their trusting behavior after they learned that their trust had been breached several times while subjects receiving placebo decrease their trust.

That is extremely interesting. It suggests that oxytocin, a mammalian hormone and neurotransmitter known to be related to maternal behaviour and bonding, also modulates social trust. So the brain link is obvious. But what happens in the brain when oxytocin is administered during the trust game?

This difference in trust adaptation is associated with a specific reduction in activation in the amygdala, the midbrain regions, and the dorsal striatum in subjects receiving oxytocin, suggesting that neural systems mediating fear processing (amygdala and midbrain regions) and behavioral adaptations to feedback information (dorsal striatum) modulate oxytocin’s effect on trust.

So oxytocin reduces fear and aversion responses, and this explains the lack of response to cheaters. Excellent: why not use this for treating anxiety, phobia and other fear-related problems? Sounds promising, yet other, more ethically problematic issues remain to be resolved. Think, for example, about whether oxytocin makes us more susceptible to gambling, shopping and marketing effects. Or what if it works as the first scientifically proven aphrodisiac? What if your next pick-up line were “Hi, I’m Thomas, how are you”, followed by a few ‘puff-puff’ sounds?
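The behavioural pattern described in the abstract can be sketched as a minimal trust-updating model. All parameters here are my own illustrative assumptions; the study reports the behavioural difference, not this model:

```python
# Toy trust-updating sketch of the quoted finding. Oxytocin is modelled
# (as an assumption, not the paper's claim) as a learning rate of zero
# for breach feedback, so trust never adapts downward.

def update_trust(trust, breached, learning_rate):
    """Lower trust proportionally after each breach; leave it otherwise."""
    if breached:
        return trust * (1 - learning_rate)
    return trust

def play_rounds(initial_trust, breaches, learning_rate):
    """Run a sequence of rounds and return the final trust level."""
    trust = initial_trust
    for breached in breaches:
        trust = update_trust(trust, breached, learning_rate)
    return trust

breaches = [True, True, True]   # trust breached several times
placebo = play_rounds(1.0, breaches, learning_rate=0.3)
oxytocin = play_rounds(1.0, breaches, learning_rate=0.0)
print(round(placebo, 3))   # 0.343 -- placebo group lowers its trust
print(oxytocin)            # 1.0   -- oxytocin group keeps trusting
```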

Jokes aside, studies like this demonstrate that emotions and decisions are often influenced by factors not consciously available, or at least only partially so. As the marketing industry is increasingly interested in multi-sensory interventions, oxytocin may be the next step in this endeavour.


Read Full Post »

Dan Dennett is interviewed by Robert Wright about his views on evolution and consciousness. Their views on evolution differ, especially on Wright's contention that evolution is goal-oriented in some way, and that history progresses in a predictable direction and points toward a certain end: a world of increasing human cooperation where greed and hatred have outlived their usefulness. All in the name of evolution and game theory, that is. IMHO it's a lot of gibberish: evolution is not teleology, and this is a gross misunderstanding of its principles. I think Dennett does a nice job of pointing this out. Wright is not all ears, though.

On the second topic, consciousness, Wright and Dennett disagree profoundly. I'm not entirely sure whether Wright takes on the job of Devil's advocate, or whether he really means that epiphenomenalism is a logical possibility. I think the latter: Wright seems totally unreceptive to Dennett's thoughts. They simply won't penetrate Wright's mind.

If you listen carefully (with earphones, like me) you'll hear Dennett sigh a dozen times during the talk. I can understand why. It's a Sisyphean task to discuss these topics, and you're bound to run into people whose scientific agnosticism, or outright denial, hinders true progress in our understanding of topics such as evolution and consciousness.

You can find the interview here.


Read Full Post »

You know you are going to receive an electric shock, but you can choose whether to receive it immediately or a bit later. What will you choose? Given the choice, many people choose to get it over with immediately. Theories of decision making assume that the choice of an early shock reflects a higher cost of waiting: the dread of the coming shock.
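The “cost of waiting” idea can be written as a toy utility comparison. The functional form and numbers below are my own illustration, not the model from the paper:

```python
# Toy model of the "cost of waiting" account of shock-timing choices.
# Functional form and parameter values are illustrative assumptions.

def total_disutility(pain, delay, dread_rate):
    """Disutility of a shock = the pain itself + dread accrued while waiting."""
    return pain + dread_rate * delay

def prefers_more_voltage_now(mild_pain, strong_pain, delay, dread_rate):
    """An extreme dreader takes a stronger immediate shock when the dread
    accumulated while waiting for the milder one outweighs the pain saved."""
    return (total_disutility(strong_pain, 0, dread_rate)
            < total_disutility(mild_pain, delay, dread_rate))

# A mild dreader waits for the weaker shock; an extreme dreader does not.
print(prefers_more_voltage_now(10, 13, delay=5, dread_rate=0.2))  # False
print(prefers_more_voltage_now(10, 13, delay=5, dread_rate=1.0))  # True
```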

This week in Science, a team of researchers led by Greg Berns used fMRI to study how the brain works when people dread an upcoming shock (see abstract). They compared mild and extreme dreaders as they made the shock choice. Adding a twist to the standard choice, subjects chose between an immediate but higher-voltage shock and a later, milder one. Some individuals dreaded the outcome so much that, given the choice, they preferred to receive more voltage rather than wait.

So what about the mild dreaders versus the extreme dreaders? In their study, Berns et al. found that even when there was no choice, extreme dreaders showed an increased rate of neural activity in the posterior elements of what has previously been identified as a cortical pain matrix. This is illustrated in the following figure from the article:

Caption: Effect of voltage and delay on the brain response to the shock itself.

Pain is known to involve a widespread network of brain regions, including the cingulate cortex. This is shown in a meta-analysis of neuroimaging studies of pain and the cingulate cortex using the very useful Brede database by Finn Årup Nielsen.

Berns et al. conclude in their article:

In addition to suggesting a neurobiological substrate for the utility of dread, our results have implications for another assumption of utility theory: the origin of preferences. It seems likely that an individual's relative preference for waiting for something unpleasant derives from previous experience. In our experiment, participants presumably had well-established preferences for waiting, although it is unlikely that they had previous experience with foot shocks. We thus observed the construction of waiting preference in the specific context of foot shocks without any choices being offered. That the activity patterns in the brain regions associated with the pain experience correlate with subsequent choices offers strong evidence for the existence of intrinsic preferences. Although it is not clear how malleable these preferences are, their existence may have health implications for the way in which individuals deal with events that are known to be unpleasant—for example, going to the doctor for painful procedures. The neurobiological mechanisms governing dreading behavior may hold clues for both better pain management and improvements in public health.


Read Full Post »

Neuroscience affects the way we think about ourselves. It affects how we think of normal and abnormal minds. It influences how people are judged under the law: whether they acted willfully or under the influence of psychoactive drugs, sleep disturbance, brain injury or psychiatric disease. But how do our scientific models relate to the way neuroscience is used in the courts?

In a new article in Nature Reviews Neuroscience, Nigel Eastman and Colin Campbell offer a critical view of how the legal system makes use of neuroscience. They claim that:

(…) there is a profound mismatch of legal and scientific constructs, as well as methods, arising from their expression of different social purposes. More specifically, in terms of the stage that brain science is currently at, the law is unlikely, at least if it fully understands the science it is being offered, to prefer population based evidence of association of violence with biological variables, be they genetic or neuroimaging in nature, to psychological evidence that can suggest, even if not prove, mental mechanisms underlying commission of a particular offence.

Law and science do not speak the same language. More to the point, judging a person on the basis of evidence derived from groups is highly problematic, to put it mildly. This echoes the point of one of my earlier blog posts: group studies say little about our ability to assign a single subject’s scan to one group or another.

So far so good; I completely agree. But at the very end, Eastman and Campbell draw a strange conclusion:

Only if science were to achieve a very high level explanation of offending in terms of genetics or brain function might the position be altered. Perhaps fortunately, it seems likely that such explanation is a long way off. Indeed, some might say that, were we to achieve such a level of biological understanding of ourselves, we would have ‘biologically explained away personhood’, and have subsumed both legal and moral responsibility into biology.

Why would an explanation of the biological basis of personhood leave us with any less personhood? Would a biological explanation of consciousness take our feelings away? I think this is a most strange assumption, and it is academic BS! Let’s put it simply: did the explanation of the stars make them in any way less star-like? No.

So what are they really claiming? If you are a naturalist, like me, you think that the mind is a direct result of what happens in the brain, and nothing else. On this view, the current science of the brain is incomplete because we have not yet fully understood how this works. So if we find the biological solution to personhood, we will have described what happens in the brain when we are conscious, acting individuals. And it will give us the means to explain what goes wrong when someone kills another person “unmotivated”, rapes a woman, steals and lies. We already know a bit about how this works and how it can go wrong. But this knowledge does not make us one single grain less human, does it? Actually, I’d say these insights provide us with the tools to intervene when something goes wrong, and to give the best possible treatment. In that sense, neuroscience is actually humanizing.

See also this transcript from “All in the mind” at Radio National.


Read Full Post »

In the wake of my post yesterday about US government attempts to build a workable lie detector for use in the war on terror, here is an article about Jonathan Moreno, bioethics adviser for the Howard Hughes Medical Institute, who has a book coming out later this year entitled Mind Wars: National Security and the Brain. A little teaser from the article:

One of the leaders in neuroscience development is the government agency DARPA, which is currently in the process of developing a “head web,” a helmet that conducts non-invasive brain monitoring that could be used to measure brain waves while soldiers are in combat. Moreno said the government is also working on developing a “war fighter”: a human manipulated by drugs to be a more efficient soldier. The “war fighter” would require less sleep, less protein and could heal itself with the aid of drugs and technology. The war fighters would eventually be replaced by robots, which would be controlled by human soldiers in a bunker somewhere out of harm’s way. “We are probably moving to a cyborg technology,” Moreno said, and one of the first steps toward a more robotic world is the use of neurologically manipulative drugs, like the “anti-conscience pill,” which can treat stress, reduce guilt and potentially eliminate entire memories, preventing psychological conditions like post-traumatic stress disorder.

Says Moreno:

“I don’t think the government will control our brains in the old-fashioned, ‘Manchurian Candidate’ sense, but we will eventually be able to change our brains.”

I found the link to the article at the excellent Neuromarketing Blog.

Read Full Post »

Lying seems to be the topic of the day. In the last month alone, two popular articles have appeared covering recent attempts to unveil the brain signatures of lying. The first came out in the January issue of Wired (you may find the electronic version here), and today the NY Times Magazine follows up with its take on the story. Go read it here.

Both articles basically report the same story. After 9/11, the American government has become highly interested in procuring a sure-fire method of spotting liars. The American military has a whole department, the Department of Defense Polygraph Institute (DoDPI), working exclusively on inventing an easy-to-use device that in the future will be able to tell lies from truth. Clearly, such a device will have to be based on the ability to identify physical tell-tale signs that a person is lying, and to do so, DoDPI will have to know the neural basis of lying. Reportedly, more than 50 American labs are currently working on identifying these brain processes.

Not much, however, is known about the neurocognitive mechanisms underlying lying. Chances are that lying cannot be tied to just one “lie module”. When lying you must be able to distinguish the lie from the truth; you probably have to activate your theory-of-mind (ToM) system to organize your lie in accordance with what you think the other person knows and wants to hear; in some situations you must remember what you have previously told other people; you certainly have to plan ahead; and most probably you have to control your emotional system. These cognitive mechanisms all rely on numerous neural processes.

So, what should we make of the structures that show up on statistical parametric maps in fMRI experiments? Dan Langleben, the first researcher to study lying with fMRI, has demonstrated that telling a lie is associated with elevated activity in the anterior cingulate cortex (ACC). Yet, as I have previously noted in a post on this blog, the precise function of the ACC is still unclear. Thus, even though lying is associated with ACC activity, not all ACC activity is associated with lying! This opacity of the brain raises serious ethical questions: would a DoDPI device made to detect ACC activity label as liars some people who are not really lying? And would it, conversely, miss liars whose lying engages neurocognitive mechanisms other than the ACC? These questions pose a serious challenge to the race for a neuroscience-based lie detector.
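The false-positive worry is essentially a base-rate problem, which Bayes’ rule makes concrete. All the numbers below are hypothetical, chosen only for illustration; none come from the studies discussed:

```python
# Why "lying activates the ACC" does not mean "ACC activity implies lying":
# a Bayes' rule sketch. All probabilities are hypothetical illustrations.

def p_lying_given_acc(p_lie, p_acc_given_lie, p_acc_given_truth):
    """Posterior probability that someone is lying, given ACC activity."""
    p_acc = p_acc_given_lie * p_lie + p_acc_given_truth * (1 - p_lie)
    return p_acc_given_lie * p_lie / p_acc

# Even a detector that catches 90% of lies is mostly wrong when lies are
# rare and the ACC also fires for unrelated reasons, such as ordinary
# cognitive conflict.
posterior = p_lying_given_acc(p_lie=0.05, p_acc_given_lie=0.9,
                              p_acc_given_truth=0.3)
print(round(posterior, 2))  # 0.14
```

With these assumed numbers, only about one in seven “detected liars” would actually be lying.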

Another, more profound, ethical question that this research raises is the following: do we really want to live in a world without lying? Generally, lying is frowned upon. Yet imagine if you had to tell the truth all the time. It is not only lawyers, such as the character Jim Carrey plays in Liar Liar, who benefit from our ability to conceal our innermost thoughts and deceive. Lying plays an enormous role in human social life, some of it for bad, but some also for good. If lie-detection devices become successful, we will have to discuss when and where to use them. In the classroom? At a job interview? In the minister’s office when we get married?


Langleben, D. et al. (2002): Brain activity during simulated deception: An event-related functional magnetic resonance study (PDF file). Neuroimage 15: 727-732.

Silberman, S. (2006): Don’t even think about lying. Wired 14.01.

Henig, R.M. (2006): Looking for the lie. New York Times Magazine. February 5, 2006.

Read Full Post »

One of the most exciting new fields of neuroscience is neuroeconomics. As the name indicates, this field investigates the decision-making processes that underlie economic behaviour. As was to be expected, neuroeconomics is now spawning an offspring called neurofinance. Why are some investors better at making money than others? The first neurofinance study using brain imaging was published in the September 1 issue of Neuron. Here’s the authors’ abstract:

Investors systematically deviate from rationality when making financial decisions, yet the mechanisms responsible for these deviations have not been identified. Using event-related fMRI, we examined whether anticipatory neural activity would predict optimal and suboptimal choices in a financial decision-making task. We characterized two types of deviations from the optimal investment strategy of a rational risk-neutral agent as risk-seeking mistakes and risk-aversion mistakes. Nucleus accumbens activation preceded risky choices as well as risk-seeking mistakes, while anterior insula activation preceded riskless choices as well as risk-aversion mistakes. These findings suggest that distinct neural circuits linked to anticipatory affect promote different types of financial choices and indicate that excessive activation of these circuits may lead to investing mistakes. Thus, consideration of anticipatory neural mechanisms may add predictive power to the rational actor model of economic decision making.
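The two mistake types defined in the abstract fall out of a simple benchmark: a risk-neutral agent always picks the option with the higher expected value. The following sketch uses made-up payoffs, not the task from the paper:

```python
# Sketch of the abstract's two mistake types, measured against a
# risk-neutral benchmark (maximize expected value). Gamble values
# are illustrative, not from the Neuron study.

def expected_value(outcomes_and_probs):
    """Expected value of a gamble given as (payoff, probability) pairs."""
    return sum(p * x for x, p in outcomes_and_probs)

def classify_choice(risky, riskless_payoff, chose_risky):
    """Label a choice relative to the risk-neutral optimum."""
    optimal_is_risky = expected_value(risky) > riskless_payoff
    if chose_risky and not optimal_is_risky:
        return "risk-seeking mistake"
    if not chose_risky and optimal_is_risky:
        return "risk-aversion mistake"
    return "optimal"

gamble = [(10, 0.5), (-2, 0.5)]   # expected value = 4
print(classify_choice(gamble, 5, chose_risky=True))    # risk-seeking mistake
print(classify_choice(gamble, 3, chose_risky=False))   # risk-aversion mistake
print(classify_choice(gamble, 3, chose_risky=True))    # optimal
```

The paper’s finding, in these terms, is that nucleus accumbens activation tends to precede the first kind of deviation and anterior insula activation the second.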

If you wish to prepare yourself for reading Camelia Kuhnen and Brian Knutson’s paper, go to Bloomberg.com here, where you will find a nice journalistic take on the whole neurofinance phenomenon. It can more or less be summed up in this statement from Daniel Kahneman, quoted in the article:

“The brain scientists are the wave of the future in the financial world”.



Kuhnen, C. & Knutson, B. (2005): The neural basis of financial risk taking. Neuron 47: 763-770.

Levy, A. (2006): Brain scans show link between lust for sex and money. Bloomberg.com. February 1.

Read Full Post »

The insights from brain science have the potential to alter the making and practice of law. But how and why? What is so special about brain science that makes it such a potent source of change?

Let’s reverse that question and ask: what is so good about our current models of human thought, motivation and behaviour that makes us certain our laws reflect the most correct view of human behaviour? I thought so; I don’t feel the slightest bit confident that our current models of the mind are even good enough (by our own scientific standards).

Luckily, our models are improving, from day to day, some would say. It’s definitely not linear progress, i.e., not every new publication adds an improvement to our understanding. Battles between theories still dominate the field, so whether you go with Damasio or Rolls on the issue of decision making will influence the laws you make. But whatever use we make of such models, be it legal systems, educational practices or child rearing, we should use the most up-to-date and best-supported models.

This is suggested in a thorough and comprehensive (and very long) article by Owen Jones and Timothy Goldsmith, who argue that a better understanding of the biology of behaviour makes for better laws. I won’t claim to have read the entire document yet, but I will. If I stumble across anything especially important (which is likely), I’ll drop a note.

Here is the abstract. Get the full article here (PDF).
See also a story in Medical News Today

Owen D. Jones & Timothy H. Goldsmith

Society uses law to encourage people to behave differently than they would behave in the absence of law. This fundamental purpose makes law highly dependent on sound understandings of the multiple causes of human behavior. The better those understandings, the better law can achieve social goals with legal tools.

In this Article, Professors Jones and Goldsmith argue that many long-held understandings about where behavior comes from are rapidly obsolescing as a consequence of developments in the various fields constituting behavioral biology. By helping to refine law’s understandings of behavior’s causes, they argue, behavioral biology can help to improve law’s effectiveness and efficiency.

Part I examines how and why law and behavioral biology are connected.
Part II provides an introduction to key concepts in behavioral biology.
Part III identifies, explores, and illustrates a wide variety of contexts in which behavioral biology can be useful to law.
Part IV addresses concerns that sometimes arise when considering biological influences on human behavior.

Read Full Post »