Archive for the ‘free will’ Category

Just noticed this very attractive title by the Brafman brothers. The book, Sway: The Irresistible Pull of Irrational Behavior, “will challenge your every thought”, according to a NY Times review. It gets similarly good reviews from other prominent people, like Michael Shermer, author of the recent book The Mind of the Market, which I blogged about recently.

I found a couple of good videos on this book that are worth sharing:

A longer version with more nuances can be seen here:

So after this, you get the idea: unconscious, automatic thought patterns act out and cause irrational behaviours, sometimes at the worst possible time and place. The questions raised are, of course, interesting and important. Why do we sometimes make horrific decisions, despite having all the information available to make better ones? Why do prominent people, like George W. Bush, suffer from loss aversion, leading to billions of dollars spent and thousands of lives lost? Because it’s “too late” to pull out? Because the pain of acknowledging defeat, error or insufficiency is greater than the benefit of saving yet more money and lives?

Other examples can be found on Wall Street, in the military, among aircraft captains, and even in yourself, maybe even on a daily basis. Taken together, the examples presented in these videos and the book demonstrate that we are all susceptible to making these kinds of errors. The next and better step is, of course, to identify these errors in ourselves (and others) and act upon them in time. Coaching, anyone?

I guess I should read the book, if the publishers will send me a copy 8)


Read Full Post »

Can antidepressive medicine alter your decision behaviour? A recent paper in Science demonstrates that alterations in subjects’ serotonin levels lead to significant changes in their decision-making behaviour. In the study, subjects played the Ultimatum Game repeatedly. They did the task twice, on two different days, and on one of the days they were administered an acute tryptophan depletion (ATD), i.e., their serotonin levels dropped for a period of time. The design was double-blind and placebo-controlled.

The Ultimatum Game is an experimental economics game in which two players interact to decide how to divide a sum of money that is given to them. The first player proposes how to divide the sum between themselves, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once, and anonymously, so that reciprocation is not an issue.
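The logic of the game can be sketched in a few lines of code. This is only a toy illustration of the rules described above, not the paper's actual protocol; the pot size, offer fractions and the responder's rejection threshold are made-up numbers.

```python
def ultimatum_round(offer_fraction, rejection_threshold, pot=10.0):
    """One anonymous, one-shot Ultimatum Game round.

    The proposer offers `offer_fraction` of the pot to the responder.
    The responder accepts any offer at or above their personal
    rejection threshold (as a fraction of the pot); otherwise both
    players get nothing. Returns (proposer_payoff, responder_payoff).
    """
    offer = offer_fraction * pot
    if offer >= rejection_threshold * pot:
        return pot - offer, offer   # accepted: split as proposed
    return 0.0, 0.0                 # rejected: neither player gets anything

# A purely payoff-maximizing responder would accept any positive offer,
# yet human responders typically reject offers they perceive as unfair.
print(ultimatum_round(0.4, 0.3))  # fair-ish offer, accepted -> (6.0, 4.0)
print(ultimatum_round(0.1, 0.3))  # unfair offer, rejected -> (0.0, 0.0)
```

In the study, the interesting variable is exactly that rejection threshold: ATD raised it for unfair offers, without changing how subjects judged fairness or processed reward.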

What the researchers found was that the ATD led subjects to reject more offers, but only unfair offers. That is, ATD did not interact with offer size per se, and there was no change in mood, fairness judgement, basic reward processing or response inhibition. So serotonin seemed to affect aversive reactions to unfair offers.

The study is a nice illustration of how we are now learning to induce alterations in preferences and decision making. Along with other studies using, e.g., oxytocin to increase trust in economic games (see also my previous post about this experiment), it suggests that raising serotonin levels might actually make subjects less averse to unfair offers.

It is also important to learn more about this, as it poses a wide range of ethical problems. If our preferences and decisions can really be influenced by these stimuli, can this be abused? It should be mentioned that many of these substances are not easily detected (oxytocin is odourless), so we may be influenced without our consent or knowledge. The possible applications could include casinos, stores (e.g. for expensive cars), dating agencies and so on. If we did not accept subliminal messages in ads, how can we accept this?


Read Full Post »

In relation to our previous and well-visited post about oxytocin, we should mention a new study that uses this very substance in a neuroeconomic set-up. In the study, recently published in Neuron by Baumgartner et al., it was found that administration of oxytocin affected subjects’ behaviour in a trust game. In particular, subjects who received oxytocin were not affected by information about co-players who cheated. Or, as put in the abstract:

(…) subjects in the oxytocin group show no change in their trusting behavior after they learned that their trust had been breached several times while subjects receiving placebo decrease their trust.

That is extremely interesting. It suggests that oxytocin, a mammalian hormone and neurotransmitter known to be related to maternal behaviour and bonding, also modulates social trust. So the brain link is obvious. But what happens in the brain when oxytocin is administered during the trust game?

This difference in trust adaptation is associated with a specific reduction in activation in the amygdala, the midbrain regions, and the dorsal striatum in subjects receiving oxytocin, suggesting that neural systems mediating fear processing (amygdala and midbrain regions) and behavioral adaptations to feedback information (dorsal striatum) modulate oxytocin’s effect on trust.

So oxytocin reduces fear and aversion responses, and this explains the lack of reaction to cheaters. Excellent, so why not use this for treating anxiety, phobias and other fear-related problems? It sounds promising, yet other, more ethically problematic issues remain to be resolved. Think, for example, about whether oxytocin makes us more susceptible to gambling, shopping and marketing effects. Or what if it works as the first scientifically proven aphrodisiac? What if your next pick-up line were “Hi, I’m Thomas, how are you?”, followed by a few ‘puff-puff’ sounds.

Jokes aside, studies like this demonstrate that emotions and decisions are often influenced by factors not consciously available, or at least only partially so. As the marketing industry is increasingly interested in multi-sensory interventions, oxytocin may be the next step in this endeavour.


Read Full Post »

Dan Dennett is interviewed by Robert Wright about his views on evolution and consciousness. Their views on evolution differ, especially with Wright's contention that evolution is goal-oriented in some way, and that history progresses in a predictable direction and points toward a certain end: a world of increasing human cooperation where greed and hatred have outlived their usefulness. All in the name of evolution — and game theory, that is. IMHO it's a lot of gibberish. Evolution is not teleology. It's a gross misunderstanding of the principles of evolution. I think Dennett does a nice job at pointing this out. Wright is not all ears, though.

On the second topic, consciousness, Wright and Dennett disagree profoundly. I'm not entirely sure whether Wright takes on the job as a Devil's advocate, or if he really means that epiphenomenalism is a logical possibility. I think the latter: Wright seems totally agnostic towards Dennett's thoughts. They simply won't penetrate Wright's mind.

If you listen carefully (with earphones, like me) you'll hear Dennett sigh a dozen times during the talk. I can understand why. It's a Sisyphean task to discuss these topics, and you're bound to run into people whose scientific agnosticism, or even atheism, hinders true progress in our understanding of topics such as evolution and consciousness.

You can find the interview here.


Read Full Post »

You know you are going to receive an electric shock, but you have the opportunity to choose whether to receive the shock immediately or a bit later. What will you choose? Given the choice, many people choose to get it over with immediately. Theories of decision making assume that the choice of an early shock is due to a higher cost of waiting: the dread of the coming shock.

This week in Science, a team of researchers led by Greg Berns used fMRI to study how the brain works when people dread an upcoming shock (see abstract). They compared mild and extreme dreaders as they made the shock choice. Adding a twist to the standard choice, subjects chose between an immediate but higher-voltage shock and a later, milder shock. Some individuals dreaded the outcome so much that, when given a choice, they preferred to receive more voltage rather than wait.
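The standard account can be put in toy utility terms. In the sketch below, all numbers are illustrative and not taken from the study: waiting adds a per-period dread cost on top of the shock's own disutility, so for a sufficiently strong dreader an immediate, stronger shock can be the less painful option overall.

```python
def total_disutility(shock_cost, delay, dread_per_period):
    """Total disutility of a shock received after `delay` periods:
    the shock itself plus dread accumulated while waiting for it."""
    return shock_cost + dread_per_period * delay

# An extreme dreader (high dread rate) prefers a stronger immediate
# shock over a milder delayed one; a mild dreader does not.
extreme_now   = total_disutility(shock_cost=12, delay=0, dread_per_period=5)    # 12
extreme_later = total_disutility(shock_cost=10, delay=3, dread_per_period=5)    # 25
mild_now      = total_disutility(shock_cost=12, delay=0, dread_per_period=0.5)  # 12
mild_later    = total_disutility(shock_cost=10, delay=3, dread_per_period=0.5)  # 11.5

print(extreme_now < extreme_later)  # True: take the bigger shock now
print(mild_now < mild_later)        # False: wait for the milder shock
```

The interesting empirical question, which the fMRI data speak to, is what the "dread rate" corresponds to in the brain.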

So what about the mild dreaders versus the extreme dreaders? In their study, Berns et al. found that even when there was no choice, extreme dreaders had an increased rate of neural activity in the posterior elements of what has previously been identified as a cortical pain matrix. This is illustrated in the following figure from the article:

Caption: Effect of voltage and delay on the brain response to the shock itself.

Pain is known to invoke a widespread network of brain regions, including the cingulate cortex. This is shown in a meta-analysis of neuroimaging studies of pain and the cingulate cortex using the very useful Brede database by Finn Årup Nielsen.

Berns et al. conclude in their article:

In addition to suggesting a neurobiological substrate for the utility of dread, our results have implications for another assumption of utility theory: the origin of preferences. It seems likely that an individual's relative preference for waiting for something unpleasant derives from previous experience. In our experiment, participants presumably had well-established preferences for waiting, although it is unlikely that they had previous experience with foot shocks. We thus observed the construction of waiting preference in the specific context of foot shocks without any choices being offered. That the activity patterns in the brain regions associated with the pain experience correlate with subsequent choices offers strong evidence for the existence of intrinsic preferences. Although it is not clear how malleable these preferences are, their existence may have health implications for the way in which individuals deal with events that are known to be unpleasant—for example, going to the doctor for painful procedures. The neurobiological mechanisms governing dreading behavior may hold clues for both better pain management and improvements in public health.


Read Full Post »

Neuroscience affects the way we think about ourselves. It affects how we think of normal and abnormal minds. It influences how people are judged according to law, whether they have been acting willfully or under the effect of psychoactive drugs, sleep disturbance, brain injury or psychiatric disease. But how do our scientific models relate to the way neuroscience is used in the courts?

In a new article in Nature Reviews Neuroscience, Nigel Eastman and Colin Campbell offer a critical view of how the legal system makes use of neuroscience. They claim that:

(…) there is a profound mismatch of legal and scientific constructs, as well as methods, arising from their expression of different social purposes. More specifically, in terms of the stage that brain science is currently at, the law is unlikely, at least if it fully understands the science it is being offered, to prefer population based evidence of association of violence with biological variables, be they genetic or neuroimaging in nature, to psychological evidence that can suggest, even if not prove, mental mechanisms underlying commission of a particular offence.

Law and science do not speak the same language. More to the point, judging a person on the basis of evidence derived from groups is highly problematic, to put it mildly. This echoes one of my earlier blog remarks: group studies say little about our ability to place a single subject’s scan in one group or another.

So far so good. I completely agree. But at the very end, Eastman and Campbell draw a strange conclusion:

Only if science were to achieve a very high level explanation of offending in terms of genetics or brain function might the position be altered. Perhaps fortunately, it seems likely that such explanation is a long way off. Indeed, some might say that, were we to achieve such a level of biological understanding of ourselves, we would have ‘biologically explained away personhood’, and have subsumed both legal and moral responsibility into biology.

Why would an explanation of the biological basis of personality leave us with less personhood? Would a biological explanation of consciousness take our feelings away? I think this is a most strange assumption, and it is academic BS! Let’s put it simply: did the explanation of stars make them in any way less star-like? No.

So what are they really claiming? If you are a naturalist, like me, you actually do think that the mind is a direct result of what happens in the brain, and nothing else. On this view, the current science of the brain is incomplete because we have not fully understood how this works. So if we find the biological basis of personhood, we will have been able to describe what happens in the brain when we are conscious, acting individuals. And it will give us the means to explain what goes wrong when someone kills another person “unmotivated”, rapes a woman, steals and lies. We already know a bit about how this works and how it can go wrong. But this knowledge does not make us one single grain less human, does it? Actually, I’d say that these insights provide us with the tools to intervene when something goes wrong, and to give the best possible treatment. In that sense, neuroscience is actually humanizing.

See also this transcript from “All in the mind” at Radio National.


Read Full Post »

In the wake of my post yesterday about US government attempts to build a workable lie detector for use in the war on terror, here is an article about Jonathan Moreno, bioethics adviser for the Howard Hughes Medical Institute, who has a book coming out later this year entitled Mind Wars: National Security and the Brain. A little teaser from the article:

One of the leaders in neuroscience development is the corporation DARPA, which is currently in the process of developing a “head web,” a helmet that conducts non-invasive brain monitoring that could be used to measure brain waves while soldiers are in combat. Moreno said the government is also working on developing a “war fighter”-a human manipulated by drugs to be a more efficient soldier. The “war fighter” would require less sleep, less protein and could heal itself with the aid of drugs and technology. The war fighters would eventually be replaced by robots, which would be controlled by human soldiers in a bunker somewhere out of harm’s way. “We are probably moving to a cyborg technology,” Moreno said, and one of the first steps toward a more robotic world is the use of neurologically manipulative drugs, like the “anti-conscience pill,” which can treat stress, reduce guilt and potentially eliminate entire memories, preventing psychological conditions like post-traumatic stress disorder.

Says Moreno:

“I don’t think the government will control our brains in the old-fashioned, ‘Manchurian Candidate’ sense, but we will eventually be able to change our brains.”

I found the link to the article at the excellent Neuromarketing Blog.

Read Full Post »
