Archive for November, 2005

Why are some types of behaviour deemed wrong or good? Much of the philosophical work dedicated to this question has focused on what could be called its metaphysical dimension: how can we determine whether some act is good or bad by necessity, and should therefore be considered good or bad by all people? Recently, however, a growing number of researchers have begun to look into its neurocognitive dimension: how does the human brain decide whether a behavioural act is good or bad? Two researchers more than anyone have pioneered this approach: Joshua Greene, a philosopher-cum-psychologist, and Jorge Moll, a neuroscientist. Both have conducted a number of imaging experiments trying to illuminate which processes take place when we make an ethical decision.

Now Moll, together with renowned neuroscientist Jordan Grafman, has published an interesting review of this research so far, which can be found in the October issue of Nature Reviews Neuroscience. Their basic proposal is that ethical decision-making results from the integration of processing in three different brain systems: the prefrontal cortex, the temporal lobe, and the limbic system (and/or the reward system). They call this the “event-feature-emotion” complex. In this scheme, the PFC computes event-structures (a Grafman term) and social values; the temporal lobe computes perceptual and functional features relevant for social reasoning; and the emotional system computes motive states. An example from the paper illustrates their reasoning. If you come across an orphan child, the “feature” system will inform the brain of the child’s display of sadness and supply knowledge of what it means to be helpless. The “event-structure” system will predict the sad future of a child living without parental support, and the “motive” system will activate an emotional response to this cognitive processing. The end result will be something like a complex conceptual and emotional integration: this child is in a state of distress; it will not survive without its parents; this situation makes me sad or angry, and I should do something to help alleviate it. It is the right thing to do.

Moll and Grafman’s model is hardly the last word on ethical decision-making. But it is exciting to see that some progress is being made in understanding how the moral brain works, seeing as the first neuroethics experiment was only published in 2001.


Read Full Post »

‘The Ethical Brain’: Mind Over Gray Matter – New York Times



TOM WOLFE was so taken with Michael S. Gazzaniga’s “Social Brain” that not only did he send Gazzaniga a note calling it the best book on the brain ever written, he had Charlotte Simmons’s Nobel Prize-winning neuroscience professor recommend it in class. In “The Ethical Brain,” Gazzaniga tries to make the leap from neuroscience to neuroethics and address moral predicaments raised by developments in brain science. The result is stimulating, very readable and at its most edifying when it sticks to science.

As director of the Center for Cognitive Neuroscience at Dartmouth College and indefatigable author of five previous books on the brain for the general reader alone, Gazzaniga is less interested in delivering verdicts on bioethical quandaries — should we clone? tinker with our babies’ I.Q.? — than in untangling how we arrive at moral and ethical judgments in the first place.

Take the issue of raising intelligence by manipulating genes in test-tube embryos. Gazzaniga asks three questions. Is it technically possible to pick out “intelligence genes”? If so, do those genes alone determine intelligence? And finally, is this kind of manipulation ethical? “Most people jump to debate the final question,” he rightly laments, “without considering the implications of the answers to the first two.” Gazzaniga’s view is that someday it will be possible to tweak personality and intelligence through genetic manipulation. But because personhood is so significantly affected by factors like peer influence and chance, which scientists can’t control, we won’t be able to make “designer babies,” nor, he believes, will we want to.

Or consider what a “smart pill” might do to old-fashioned sweat and toil. Gazzaniga isn’t especially worried. Neither a smart pill nor genetic manipulation will get you off the hook: enhancement might enable you to grasp connections more easily; still, the fact remains that “becoming an expert athlete or musician takes hours of practice no matter what else you bring to the task.”

But there are “public, social” implications. Imagine basketball stars whose shoes bear the logo not of Nike or Adidas but of Wyeth or Hoffmann-La Roche, “touting the benefits of their neuroenhancing drugs.” “If we allow physical enhancements,” Gazzaniga argues, “some kind of pharmaceutical arms race would ensue and the whole logic of competition would be neutralized.” Gazzaniga has no doubt that “neuroscience will figure out how to tamper” with neurochemical and genetic processes. But, he says, “I remain convinced that enhancers that improve motor skills are cheating, while those that help you remember where you put your car keys are fine.”

So where, as Gazzaniga asks, “do the hard-and-fast facts of neuroscience end, and where does ethics begin?” In a chapter aptly called “My Brain Made Me Do It,” Gazzaniga puts the reader in the jury box in the case of a hypothetical Harry and “a horrible event.” This reader confesses impatience with illuminated brain scans routinely used to show that people “addicted” to drugs — or food, sex, the Internet, gambling — have no control over their behavior. Refreshingly, Gazzaniga declares “the view of human behavior offered by neuroscience is simply at odds with this idea.”

“Just as optometrists can tell us how much vision a person has (20/20 or 20/40 or 20/200) but cannot tell us when someone is legally blind,” he continues, “brain scientists might be able to tell us what someone’s mental state or brain condition is but cannot tell us (without being arbitrary) when someone has too little control to be held responsible.”

Last year, when the United States Supreme Court heard arguments against the death penalty for juveniles, the American Medical Association and other health groups, including psychiatrists and psychologists, filed briefs arguing that children should not be treated as adults under the law because in normal brain development the frontal lobe — the region of the brain that helps curb impulses and conduct moral reasoning — of an adolescent is still immature. “Neuroscientists should stay in the lab and let lawyers stay in the courtroom,” Gazzaniga writes.

Moving on to the provocative concept of “brain privacy,” Gazzaniga describes brain fingerprinting — identifying brain patterns associated with lying — and cautions that just like conventional polygraph tests, these “much more complex tests . . . are fraught with uncertainties.” He also provides perspective on the so-called bias tests increasingly used in social science and the law, like one recently described in a Washington Post Magazine article. Subjects were asked to pair images of black faces with positive or negative words (“wonderful,” “nasty”); if they pressed a computer key to pair the black face with a positive word several milliseconds more slowly than they paired it with a negative word, bias was supposed. The unfortunate headline: “See No Bias: Many Americans believe they are not prejudiced. Now a new test provides powerful evidence that a majority of us really are. Assuming we accept the results, what can we do about it?”

Nonsense, Gazzaniga would say. Human brains make categories based on prior experience or cultural assumptions. This is not sinister, it is normal brain function — and when experience or assumptions change, response patterns change. “It appears that a process in the brain makes it likely that people will categorize others on the basis of race,” he writes. “Yet this is not the same thing as being racist.” Nor have split-second reactions like these been convincingly linked to discrimination in the real world. “Brains are automatic, rule-governed, determined devices, while people are personally responsible agents,” Gazzaniga says. “Just as traffic is what happens when physically determined cars interact, responsibility is what happens when people interact.”

Clearly, Gazzaniga is not a member of the handwringer school, like some of his fellow members of the President’s Council on Bioethics. At the same time, his faith in our ability to regulate ourselves is touching. He notes that sex selection appears to be producing alarmingly unbalanced ratios of men to women in many countries. “Tampering with the evolved human fabric is playing with fire,” he writes. “Yet I also firmly believe we can handle it. . . . We humans are good at adapting to what works, what is good and beneficial, and, in the end, jettisoning the unwise.”

Gazzaniga looks to the day when neuroethics can derive “a brain-based philosophy of life.” But “The Ethical Brain” does not always make clear how understanding brain mechanisms can help us deal with hard questions like the status of the embryo or the virtues of prolonging life well over 100 years. And occasionally the book reads as if technical detail has been sacrificed for brevity.

A final, speculative section, “The Nature of Moral Beliefs and the Concept of Universal Ethics,” explores whether there is “an innate human moral sense.” The theories of evolutionary psychology point out, Gazzaniga notes, that “moral reasoning is good for human survival,” and social science has concluded that human societies almost universally share rules against incest and murder while valuing family loyalty and truth telling. “We must commit ourselves to the view that a universal ethics is possible,” he concludes. But is such a commitment important if, as his discussion suggests, we are guided by a universal moral compass?

Still, “The Ethical Brain” provides us with cautions — prominent among them that “neuroscience will never find the brain correlate of responsibility, because that is something we ascribe to humans — to people — not to brains. It is a moral value we demand of our fellow, rule-following human beings.” This statement — coming as it does from so eminent a neuroscientist — is a cultural contribution in itself.

Sally Satel is a psychiatrist and resident scholar at the American Enterprise Institute and a co-author of “One Nation Under Therapy: How the Helping Culture Is Eroding Self-Reliance.”

Taken from NYTimes

Read Full Post »

Finally, there is a blog on neuroethics. And it seems that it is not merely trading on a buzzword: it was started by Professor Adam Kolber of the University of San Diego School of Law.

As Kolber writes about the blog: “The Neuroethics and Law Blog is an interdisciplinary forum for legal and ethical issues related to the brain and cognition. It is meant to be of interest to bioethicists, legal academics, lawyers, neuroscientists, neurologists, cognitive scientists, psychologists, psychiatrists, philosophers, criminologists, behavioral economists, and others.”

So, does that not include most of us academics dealing with humans? I would think the top stories from this blog would get “regular” people talking as well.

So what are the latest developments in neuroethics? Among formal publications, Martha Farah has just published an article called “Neuroethics: a guide for the perplexed”. In it, she touches on one of the most interesting views, in my opinion: if a “naturalist” account of the mind, i.e. of conscious and unconscious processes, is correct, it would have a tremendous impact on our self-awareness, with consequences for ethics and law.

Another recent development, although many years in the making, is the combination of genetics and neuroimaging techniques. This is indeed a hot topic in human brain mapping science. Of course, it has been known for a long time that genes are the “building blocks” of proteins that, for example, regulate the uptake of a certain neurotransmitter. But the new idea is to demonstrate that certain polymorphic genes, i.e. genes with a “natural variation” among healthy individuals, have a significant impact on neural function. Several recent studies by Weinberger, Hariri and colleagues demonstrate that even among normal subjects, genes can explain different responses of the brain. For example, they have shown that individual differences in the response of the amygdala to emotional pictures are explained by “genetic makeup”.

If we take these findings a step further, we might very well end up with people being gene-tested for their potential to become cynical soldiers, effective business executives, empathic caregivers … (fill in your favourite).

Read Full Post »

A story in NewsWise tells of new research showing that the brain does more than we are aware of. Well, is THAT so surprising? No. We know from priming studies that words flashed too briefly to be detected nevertheless influence people’s subsequent choices. For example, if the word “king” was flashed below the conscious threshold, you would likely choose the word “queen” instead of “farmer” afterwards, even though you were not aware of why you did so.

Anyway, the study cited here is still important: it pinpoints the brain mechanisms that underlie subliminal perception.



We often make unwise choices although we should know better. Thunderstorm clouds ominously darken the horizon. We nonetheless go out without an umbrella because we are distracted and forget. But do we? Neurobiologists at the Salk Institute for Biological Studies carried out experiments that prove for the first time that the brain remembers, even if we don’t and the umbrella stays behind. They report their findings in the Oct. 20th issue of Neuron.

“For the first time, we can look at the brain activity of a rhesus monkey and infer what the animal knows,” says lead investigator Thomas D. Albright, director of the Vision Center Laboratory.

First author Adam Messinger, a former graduate student in Albright’s lab and now a post-doctoral researcher at the National Institute of Mental Health in Bethesda, Md., compares it to subliminal knowledge. It is there, even if it doesn’t enter our consciousness.

“You know you’ve met the wife of your work colleague but you can’t recall her face,” he gives as an example.

Human memory relies mostly on association; when we try to retrieve information, one thing reminds us of another, which reminds us of yet another, and so on. Naturally, neurobiologists are putting a lot of effort into trying to understand how associative memory works.

One way to study associative memory is to train rhesus monkeys to remember arbitrary pairs of symbols. After being shown the first symbol (e.g. dark clouds), they are presented with two symbols, from which they have to pick the one that has been associated with the initial cue (e.g. an umbrella). The reward is a sip of their favorite fruit juice.

“We want the monkeys to behave perfectly on these tests, but one of them made a lot of errors,” recalls Albright. “We wondered what happened in the brain when the monkeys made the wrong choice, although they had apparently learned the right pairing of the symbols.”

So, while the monkeys tried to remember the associations and made their error-prone choices, the scientists observed signals from the nerve cells in a special area of the brain called the “inferior temporal cortex” (ITC). This area is known to be critical for visual pattern recognition and for storage of this type of memory.

When Albright and his team analyzed the activity patterns of brain cells in the ITC, they could trace about a quarter of the activity to the monkey’s behavioral choice. But more than 50 percent of active nerve cells belonged to a novel class of nerve cells or neurons, which the researchers believe represents the memory of the correct pairing of cue and associated symbol. Surprisingly, these brain cells kept firing even when the monkeys picked the wrong symbol.

“In this sense, the cells ‘knew’ more than the monkeys let on in their behavior,” says Albright.

And although behavioral performance is generally accepted to reliably reflect knowledge, in fact, behavior is heavily influenced – in the laboratory and in the real world – by other factors, such as motivation, attention and environmental distractions.

“Thus behavior may vary, but knowledge endures,” concluded Albright, Messinger and their co-authors in their Neuron paper. The other co-authors are Larry R. Squire, a professor in the Department of Psychiatry at the UCSD School of Medicine, and Stuart M. Zola, director of the Yerkes National Primate Research Center in Atlanta.

The Salk Institute for Biological Studies in La Jolla, California, is an independent nonprofit organization dedicated to fundamental discoveries in the life sciences, the improvement of human health and the training of future generations of researchers. Jonas Salk, M.D., whose polio vaccine all but eradicated the crippling disease poliomyelitis in 1955, opened the Institute in 1965 with a gift of land from the City of San Diego and the financial support of the March of Dimes.

© 2005 Newswise. All Rights Reserved.

Read Full Post »

This story dropped in through my Google Alerts the other day (not Dilbert). Are we really going to get any further in explaining free will, whatever that is?

When teaching students or giving talks that touch on this topic, I often have to ask: what is free will free from? We know that as biological beings we are bound by the physical forces of nature; as social beings our behaviours are constrained by our social milieu. Our choices are influenced by moods and unconscious processes (such as priming). So what do we really mean by “free”?

This article also discusses some of the ethical implications of neuroscience — yes, neuroethics yet again. Take this as a hint that brain science is beginning to have an impact on human thought and introspection, even everyday thought and talk.


Does Neuroscience Refute Free Will?

This is the excellent foppery of the world, that, when we are sick in fortune, — often the surfeit of our own behavior, — we make guilty of our disasters the sun, the moon, and the stars; as if we were villains on necessity, fools by heavenly compulsion, knaves, thieves, and treachers by spherical predominance, drunkards, liars, and adulterers by an enforc’d obedience of planetary influence, and all that we are evil in, by a divine thrusting on. –William Shakespeare

In the above quote from King Lear we find a description of those who, throughout human history, deny free will and personal responsibility, instead blaming their wrongdoings on interventions divine and planetary. In a recent article, Joshua Greene and Jonathan Cohen join the believers in the “divine thrusting on.”[1] This being the scientific age, and our authors being card-carrying neuroscientists, the divine thrusting on becomes a neuroscientific thrusting on, the brain playing the role of the stars above.

The divine thrust of their argument is that we have no free will because there is neuroscience, though our laws have yet to take this into account:

… the law’s intuitive support is ultimately grounded in a metaphysically overambitious, libertarian notion of free will that is threatened by determinism and, more pointedly, by forthcoming cognitive neuroscience…. The net effect of this influx of scientific information will be a rejection of free will as it is ordinarily conceived, with important ramifications for the law.[2]

What are these ramifications? To begin with, the concept of personal responsibility is obsolete. Since all actions are determined by the “preexisting state of the universe,” we have no choice in the matter. As they put it: “Given a set of prior conditions in the universe and a set of physical laws that completely govern the way the universe evolves, there is only one way that things can actually proceed.” Thus we can logically trace everything back to the Big Bang that blasted the universe into existence. Should you ask why I had bagels rather than bananas for breakfast this morning, for example, I can refer you to the Big Bang theory of human action.

But if there is already the Big Bang, why do we need neuroscience to reveal our lack of free will? According to Greene and Cohen, for ages “scientific” philosophers, i.e. philosophers of their determinist camp, had argued against free will, but because the mind was then a black box, it was easy for the deluded religious people, the soft humanists, and other dim-witted souls to cling to the illusion of free will.

Now that we have neuroscience, however, the mind is a black box no more — it is high time for the rest of us to wake up from our dogmatic slumber and smell the deterministic universe. In short, while the Big Bang provides the big picture, neuroscience supplies the details, which will make it abundantly clear, even to the lay public, that we are literally puppets in a deterministic universe after all.

Blame it on the brain
Greene and Cohen argue that our brains are responsible for all our behaviors. Our “brains” commit crimes. “We” are innocent. Thus, in their words, “demonstrating that there is a brain basis for adolescents’ misdeeds allows us to blame adolescents’ brains instead of the adolescents themselves.” It is fortunate that the boys in the neighborhood have not read their article, for here is their new defense after damaging your property: I didn’t do it, it was my brain!

Although it has been known since before Plato that the brain plays a central role in behavior, this particular argument is rather novel. One reason others have not been bold enough to advance it (despite a perennially strong demand for determinism) is that it contains a glaring category error. Greene and Cohen compare two opposing sources of agency — either your brain or you — as if they were mutually exclusive, as if without your brain you would still be a moral agent.

As a result of this error, Greene and Cohen conclude, “the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstances will, we submit, seem pointless.”

But the moral agent in the legal sense is the whole package — you consisting of your brain and the rest. To say that we are victims of neuronal circumstances is to say that we are victims of ourselves. The underlying assumption is that we have no control over “neuronal circumstances,” just as we have no control over “external circumstances.” But this assumption (a newly bottled behaviorist assumption) entirely contradicts our knowledge that the brain is a self-organizing and self-regulating biological system, not merely a step in the transformation of some external stimulus to behavioral output.

It is, however, not necessary to discuss in any detail the brain as a control system in order to refute Greene and Cohen, for their argument is not based on any understanding of the brain at all. It boils down to the primitive logic that, for example, if I stole your wallet then my hand is to be chopped off.

Mr. Puppet
To their credit, Greene and Cohen sensed that blaming everything on the brain is not enough. They have another weapon in store for free will, yet another “thought experiment.” For their strategy is to generate as many arguments as they can against free will, hoping that some of them will have done the damage, even if these arguments contradict each other.

In their second strike, they urge us to imagine the case of a “Mr. Puppet,” a criminal designed by a group of scientists through tight genetic and environmental control. During Mr. Puppet’s trial, the lead scientist is called to the stand by the defense. And here is what Greene and Cohen had him say:

I designed him. I carefully selected every gene in his body and carefully scripted every significant event in his life so that he would become precisely what he is today. I selected his mother knowing that she would let him cry for hours and hours before picking him up. I carefully selected each of his relatives, teachers, friends, enemies, etc., and told them exactly what to say to him and how to treat him. Things generally went as planned, but not always. For example, the angry letters written to his dead father were not supposed to appear until he was fourteen, but by the end of his thirteenth year he had already written four of them. In retrospect I think this was because of a handful of substitutions I made to his eighth chromosome.

Of course, a change in the chromosome cannot determine when a nasty letter is written, since the genome does not contain information that specifies any of our actions. The environmental regulation, too, is impossible, except in science fiction. But plausibility or knowledge of basic biology is not to be expected from our authors. Greene and Cohen believe that Mr. Puppet is not responsible for his actions because “forces beyond his control played a dominant role in causing him to commit the crimes, it is hard to think of him as anything more than a pawn.”

But even if we assume, for the sake of argument, that such a person could be so designed, we might conclude that he is indeed a puppet of the scientist-designer, while we are not puppets of this sort. Our genes are not selected, nor our environment scripted, by anyone.

Not surprisingly, however, Greene and Cohen reach a rather different conclusion:

What is the difference between Mr. Puppet and anyone else accused of a crime? After all, we have little reason to doubt that (i) the state of the universe 10,000 years ago, (ii) the laws of physics, and (iii) the outcomes of random quantum mechanical events are together sufficient to determine everything that happens nowadays, including our own actions. These things are all clearly beyond our control. So what is the real difference between us and Mr. Puppet? … in a very real sense, we are all puppets. The combined effects of genes and environment determine all of our actions. Mr. Puppet is exceptional only in that the intentions of other humans lie behind his genes and environment. But, so long as his genes and environment are intrinsically comparable to those of ordinary people, this does not really matter. We are no more free than he is.

In an apparent slip, they acknowledged that the scientists had intentions, that they deliberately acted in designing Mr. Puppet. Their actions apparently differ from causes that are not human actions. Greene and Cohen never bothered to ask whether these scientists ought to be punished for specifically designing someone to commit crimes, whether they are responsible at all. But if we are forced to accept this scenario, then the responsibility for the crimes appears to lie with the scientists — for designing puppet criminals.

According to Greene and Cohen, however, Mr. Puppet’s genes and environment are “intrinsically comparable” to those of ordinary people — as if we all lived in a designed environment, in which people deliberately abuse us and lie to us; as if our genes, rather than being the results of natural selection, were picked by a team of evil scientists. Intrinsically comparable? By that they presumably mean that the environment is still an earthly environment like ours, the same house with furniture and TV and parents, and so on, and the genes are still stretches of DNA made up of garden-variety nucleotides.

But clearly these “intrinsic” features are irrelevant in Mr. Puppet’s case. His genes and environment, after all, are designed to make him a criminal. But note, in particular, Greene and Cohen’s peculiar emphasis on the combination of genes and environment. Biology, of course, tells us there are additional factors which are neither genetic nor environmental, but we can safely assume that these authors, possessing no particular interest in the science of biology, are not aware of these.

Being metaphorical scientists, by “genes and environment” they mean everything that makes us who we are, everything that determines our actions. We are now ready to translate their claim into plain English: Everything that determines who we are determines who we are; everything that determines our actions determines our actions. Surely we do not have control over everything — Greene and Cohen correctly assume. And surely all possible factors combined determine our actions. But while reaching such a brilliant conclusion they have spun their minds out of control, ignoring the circularity in the process. We are compelled by the laws of logic to agree with them: Yes, a banana is a banana.

Illusion of free will
Having thus disposed of free will, Greene and Cohen are ready to explain why we think we have such a thing. If we think we have something that doesn’t exist, then that something must be an illusion. Hence their claim that the brain generates the illusion of free will to fool us into thinking we are in control.

With becoming modesty, our authors compare themselves to Copernicus, Darwin, and Freud in overthrowing human narcissism. Copernicus removes the earth as the center of the universe, Darwin removes human beings as lords of the earth, and Freud removes consciousness as the sole determinant of human behavior. Here comes another blow below the belt: even what little conscious control you have over your actions is an illusion.

It seems to me, however, that this is a case of sadomasochism. Greene and Cohen appear to derive keen delight from wounding human narcissism, as represented by free-will folk psychology. You thought you decided to read this article because it seemed interesting. But no, you have no clue, and that thought was really just an illusion generated by your brain to mask its cluelessness.

How much insult to your narcissism can you take? That is the question, on which your scientistic manhood depends. Only tough scientists like Greene and Cohen are brave enough to take determinism straight, without illusions. And if you don’t think you are a puppet yet, they will beat you into submission with their thought experiments and imagined data until you give up your selfhood. And so the game goes on.

Although I do not wish to deny the multitudinous pleasures derived by Greene and Cohen from becoming puppets of metaphysical fiction and mouthpieces of pseudoscientific rant, I do wish to examine the evidence they present for their claims.

For such evidence Greene and Cohen rely on the work of Daniel Wegner,[3] a Harvard psychologist and a fellow metaphorical scientist. According to Wegner, our actions are not caused by our willing. In support of this claim he cites evidence that hypnosis or brain damage can impair our sense of free will, that various experimental manipulations can create in us the illusion of control.

Our immediate response is: So what? We would not have free will if our heads were cut off. We do not have free will when we are asleep. Sometimes we erroneously think we caused something to occur when, in fact, we did not. From this, however, it does not follow that free will is an illusion.

Under hypnosis, for example, we might feel that our arm was raised even though we did not will it. Likewise, when our motor cortex or our muscle is stimulated, various movements might be induced which are not willed. For Wegner, however, this sense of “it just happens” is a more accurate description of what really happens when we act. It never occurred to him that there is no experience of will because these are not instances of voluntary actions under our control.

Wegner very much prefers this sense of passivity, for only then do we feel like inanimate objects. When my arm uncontrollably does something, it is acting as a “scientific object” should, like a brick. Our free will must be an illusion because it does not fit into Wegner’s scientific understanding of the world.

The philosopher Daniel Dennett believes that, for the sake of convenience, we adopt the “intentional stance” when interpreting the behavior of other human beings. Wegner’s position can be described as the “passivity stance.” He prefers to feel like the hypnotic subject, the brain damaged patient, or a zombie in general, because, according to his scientific Weltanschauung, the passivity stance is a more accurate reflection of reality. But the question remains as to whether Wegner, or the average man on the street, is actually delusional.

Agreeing with Wegner’s claim that our sense of free will is an illusion, Greene and Cohen go one step further, and argue that our attribution of free will in others is also an illusion.

They cite a study by Heberlein et al., who presented the following film to human subjects: a big triangle chases a little circle around the screen, bumping into it. The little circle repeatedly moves away, and a little triangle repeatedly moves in between the circle and the big triangle. When normal people watch this movie they see these interactions in social and intentional terms. The big triangle tries to harm the little circle, and the little triangle tries to protect the little circle.

However, a patient with damage to the amygdala, an almond-shaped group of nuclei deep in the temporal lobe, fails to see these shapes in such intentional terms.[4] Consequently, for Greene and Cohen, because this attribution of free will is generated by a brain area, it is also an illusion.

Readers of my earlier article will be familiar with Greene and Cohen’s penchant for evolutionary speculation. Here they go again. According to their new so-so story, parts of the brain, in the course of evolution, become specialized modules for folk psychology, e.g. attributing free will to others; other parts, for folk physics, e.g. believing in the sort of motion typically seen in a Disney cartoon. We know that folk physics is wrong, but folk psychology is just as wrong, according to Greene and Cohen. Because of our folk psychology system, we think of other animate objects as uncaused causers. But after learning neuroscience, “when we look at people as physical systems, we cannot see them as any more blameworthy or praiseworthy than bricks.”

Perhaps we could posit, in addition to the folk physics system and the folk psychology system, a third system of masochistic scientism, which fools one into believing one is a brick being acted on by the forces of nature, rather than an acting agent responsible for his actions. The neural basis of this third system, I submit, remains to be established.

To summarize, then: our brick-minded theorists accuse the folk psychology behind the law of viewing moral agents as uncaused causers. Since we are not uncaused causers, we cannot be moral agents, and we cannot be responsible for our actions.

Now if I am an uncaused causer, my actions are insulated from any external influence. Suppose a man is given a life sentence for killing a guard while robbing a bank; such a punishment cannot possibly deter me from doing the same. Deterrence is indeed impossible if I am the uncaused causer of my actions.

However, this cannot possibly be the assumption behind our law, because it cannot possibly explain the law’s focus on intentionality. According to the folk psychology that Greene and Cohen attack relentlessly, it is characteristically human that we deliberately choose appropriate means to reach desired ends. This capacity enables us to become moral agents, the targets of praise or blame. For example, an act does not make one guilty unless the mind is guilty (Actus non facit reum nisi mens sit rea).

As Mises repeatedly pointed out, the very concept of human action, of means and ends, presupposes the category of causality. Responsibility does not imply that we are unmoved movers in the Aristotelian sense, standing outside of the chain of cause and effect, but that we, as agents of intentional actions, are in a peculiar position in a long chain of causes stretching back to the Big Bang. We are agents capable of controlling our actions, not reflex-arcs translating stimulus into response.

The law, then, punishes crimes that are the result of deliberation and willing, and is lenient toward accidents or toward agents incapable of rational action (e.g. children). This selectivity can only be based on the idea of deterrence. For it would be absurd to tell someone not to murder, if he could not help it, just as it is absurd to tell someone to stop beating his heart.

The law, instead, punishes crimes that result from actions that we can control, and could thus prevent such actions in the future.

If the law were in fact based on the assumption of uncaused causers, it would have no reason to distinguish between deliberate murder and accidental killing. Strict liability would apply to all crimes. It is of course beyond the scope of this article to discuss the history of the law, though it should be pointed out that the concept of personal responsibility in violent crimes is in fact a relatively recent development. Strict liability, extending to relatives and lords, is common in many primitive societies (I refer the interested reader to Pollock and Maitland’s masterpiece or Zane’s book on the history of law).

Law and liberty
Free will, in the sense discussed here, means that humans control what they do. Neuroscience will not change this fact, though science fiction of the variety favored by Greene and Cohen can always imagine a day when it will. In this respect, their vision cannot be distinguished from any teleological religion.

Indeed, determinism of this type, which claims that human beings do not choose, do not act, but are always acted upon, has been revived innumerable times in history, in various guises. It is a historical fact that primitive savages, religious fanatics, and believers in inexorable laws of history have always advocated some version of it.

In the development of the law also, the concept of personal responsibility evolved, partly because some human beings, after struggling free from superstition and the “passivity stance,” began to understand the nature of their own actions and their effects on the world. Enlightened individualism, we should remember, was a late development, and remains unpopular in many parts of the world today. The intuitive folk psychology of human action we possess is the product of such enlightenment.

On the other hand, in attacking the concept of free will and personal responsibility, Greene and Cohen merely revive the cult of irrational thought that has long prevailed in human societies. It should not therefore surprise us to find in their article the following sentence: “rationality is just a presumed correlate of what most people really care about.” Indeed, what is left of rationality when you are not responsible for your actions?

In place of reason, these authors substitute aggregate welfare. The law, reformed in light of neuroscientific knowledge, should, according to them, aim to promote future welfare rather than punish those responsible for their crimes. In an earlier article I discussed the attempt by these authors to abolish universal moral norms using brain imaging data, in the name of aggregate welfare. We should at least applaud their consistency. Of course, a universal moral norm such as the Categorical Imperative would have no meaning if there is no free will. Why tell someone not to steal if he could not help it, if his brain were to blame?

All considered, then, their arguments boil down to this: (1) the criminal is not responsible for his crime because everything that determines who he is determines who he is; (2) instead of punishing criminals for what they deserve, the law should maximize future welfare.

Ethically, it seems preposterous to argue against the total welfare of mankind, just as logically, it is impossible to refute a tautology. The take-home lesson here is that you should always watch out for someone who argues for something that cannot possibly be contradicted, for there is often a hidden agenda, attached to the can’t-possibly-be-wrong package, that triggers the self-destruction of the whole thing once uncovered.

As I pointed out in the earlier article, their concept of aggregate welfare is a vacuous concept, made up for the sake of convenience. We cannot possibly calculate what this welfare is, though we can indirectly observe, by studying history, the long-term effects of certain rules and practices on groups that follow them. In the latter, somewhat more concrete sense of welfare, our current legal framework appears to have been one of the chief promoters of human welfare, judging by the remarkable spread of the relevant ideas from the West, against often strong resistance from local customs and primitive practices.
Finally, throughout their essay, Greene and Cohen emphasize that the “libertarian” conception of free will which they attack has no connection to the political philosophy of the same name. This disclaimer, however, betrays ignorance of that philosophy. Free will and responsibility provide the necessary foundation of libertarian political philosophy. Laws protect liberty, and liberty entails responsibility.

Their arguments for determinism are yet another attempt to abolish laws as abstract rules that apply to everyone equally. Instead, the State and its “scientific experts” will get to decide, case by case, whether a person will be harmful to society, in order to maximize future welfare (i.e. to do whatever those in power wish to do). The law itself becomes meaningless. And instead of being a body of general rules that protect individual liberty, the law, in the hands of Greene and Cohen, and in the name of neuroscience, will be used as a tool for state intervention and arbitrary judgment, to destroy liberty.

Lucretius is a neurobiologist living in Maryland. He will read the blog and answer comments there. Read his first article: Does Neuroscience Refute Ethics?

1. Greene, J. & Cohen, J. “For the law, neuroscience changes nothing and everything.” Philos Trans R Soc Lond B Biol Sci 359, 1775-85 (2004).

2. von Mises, L. Theory and History (Mises Institute, 1957).

3. Wegner, D. M. Precis of the illusion of conscious will. Behav Brain Sci 27, 649-59; discussion 659-92 (2004).

4. Heberlein, A. S. & Adolphs, R. Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. Proc Natl Acad Sci U S A 101, 7487-91 (2004).
