Archive for August, 2006

A core question in the study of personality and intelligence concerns the relative contributions of inherited traits and learned behaviours. How much does nature, and how much does nurture, shape the mind? This problem can be approached in several ways, including studies of monozygotic twins reared apart, and adoption studies (e.g. comparing a child's IQ with that of the biological vs. the adoptive parents to assess genetic vs. environmental effects, respectively). On rare occasions, however, we get the opportunity to study extreme cases of how nature and nurture influence the human mind. Such occasions include children who have grown up with little or no influence from other people.
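The twin-study logic mentioned above can be made concrete with Falconer's classic variance decomposition: monozygotic twins share roughly all their genes, dizygotic twins roughly half, so the gap between their trait correlations bounds the genetic contribution. A minimal sketch (the correlation values below are invented for illustration, not real IQ data):

```python
def falconer_ace(r_mz, r_dz):
    """Decompose trait variance into additive genetic (a2), shared
    environment (c2) and unique environment (e2) components from
    MZ and DZ twin correlations, using Falconer's formulas."""
    a2 = 2 * (r_mz - r_dz)   # heritability estimate
    c2 = 2 * r_dz - r_mz     # shared-environment estimate
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return a2, c2, e2

# Illustrative (made-up) twin correlations:
a2, c2, e2 = falconer_ace(r_mz=0.86, r_dz=0.60)
print(a2, c2, e2)  # roughly 0.52, 0.34, 0.14
```

The three components sum to one by construction, which is why the method is only a first-pass bound: it assumes purely additive genetics and equal environments for both twin types.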

Such stories are at once shockingly disturbing and truly amazing, and they provide us with insights into the development of the human mind. They have demonstrated, for example, that there is a critical period for language acquisition.

What happens to a mind that is deprived of cultural influence during childhood? This is the ultimate test of the nature-nurture debate. But it is also the ultimate unethical experiment. Nevertheless, on rare occasions science has been presented with cases where children have been abandoned or severely socially deprived. We don't have to go farther back than last week's story about Natascha Kampusch, the now 18-year-old woman who was kidnapped eight years ago and kept in a cramped, windowless underground room throughout her captivity, isolated from contact with anyone but her kidnapper. Kampusch thus spent her entire adolescence in social isolation, a period crucial to the development of social skills and personality.

In Boston Review, Rebecca Saxe, an MIT psychologist, has an excellent review of Encounters with Wild Children by Adriana S. Benzaquén. The book is about the history of scientific studies of “wild children”, or as the publisher writes:

Since the early seventeenth century, stories of encounters with strange children in unusual circumstances have been recorded, circulated, and reproduced in Europe and North America not simply as myths, legends, or good tabloid copy but as occurrences deserving serious scrutiny by philosophers and scientists. “Wild children” were seen as privileged objects of knowledge, believed to hold answers to fundamental questions about the boundaries of the human, the character and significance of civilization, and the relation between nature and culture, heredity and environment.

The study of these “wild children” has thus been thought of as a genuine path to studying the influence of culture (or, rather, the lack thereof) on the developing mind. But as Benzaquén argues, we should not grant these studies such high rank. As Saxe writes:

But here’s the catch: the forbidden experiment may belong to a smaller group of experimental problems that persistently seem meaningful but are not. Intuitively, we expect that while human nature interacts with human society in a typical child’s development, the natural and the social are in principle independent and distinguishable. If this intuition is wrong, the forbidden experiment is incoherent. In fact, the social and the natural may be irretrievably entangled in development. In part this is because a social environment that includes other human beings is inevitably more natural for a human infant than any wholly artificial environment that could be constructed to replace it. Even the unfolding of innately determined human traits relies on a social environment. For example, virtually every human infant is exposed to a language and learns it; an infant who was never exposed to any language could not possibly speak one. Yet it is the children who do learn a language—through social interactions—who illustrate the natural human capacity.

So although we might be interested in the psychological profile that may eventually emerge from the Natascha Kampusch case, it should definitely be taken with more than a grain of salt.


Read Full Post »

Exercising your mind

Many of us do crosswords, sudoku or some other kind of training to keep mentally in shape. However, it seems that at least some of the time we would be better off putting on our running shoes and going for a walk or a run. According to a number of studies, mental health is strengthened by physical training. There are also indications today that running can delay the onset of degenerative diseases such as Alzheimer’s (see here).

So although you can’t train your brain like a muscle — it doesn’t work that way — what you normally think of as physical exercise actually helps your brain, too. A number of studies have focused on the effects on cognition and other behavioural measures, while more recent studies also focus on directly measurable effects on the brain’s structure and function. For example, in a study by Colcombe et al. (2003), physical exercise was found to reduce age-related brain decline. This can be shown for both gray and white matter:


LEFT: regions that are affected — i.e. shrink — during ageing. RIGHT: regions that show preservation as a function of better cardiovascular health (i.e. training).

So it should be a matter of just going out there and start running, right? Not so, according to a recent review by Kiraly and Kiraly. Reviewing the literature on mammalian and human research on this topic, the authors first note factors that have negative effects on the brain:

The cascade of cellular damages from oxidative stress, nitrosative stress and glucocorticoid effects are cumulative and age related. (…) Lack of exercise and motility restrictions are associated with increased vulnerability from oxidative stress, nitrosative stress and glucocorticoid excesses, all of which precede amyloid deposition and are fundamental in the cascade of events resulting in neuronal degradation, especially in the hippocampi.

Contrary to this, exercise has a positive influence on the brain:

Exercise training reduces oxidative stress, nitrosative stress and improves neuroendocrine autoregulation which counteracts damages from stress- and age-related neuronal degeneration, brain ischemia and traumatic brain injury. People prone to chronic distress, brain ischemia, brain trauma, and the aged are at increased risk for neurodegenerative diseases such as Alzheimer’s. Exercise training may be a major protective factor but without clinical guidelines, its prescription and success with treatment adherence remain elusive.

I find that last part particularly interesting. It’s obviously not just a matter of going out and running. You’d better do your training properly, possibly with the help of a professional trainer.


Read Full Post »

Take any textbook on cognitive neuroscience. Going through the book, you will see that there are chapters on perception (e.g. vision), memory, and language. Each chapter has its own vocabulary, theories and experimental evidence. Each chapter may even have been written by a different author (i.e. authority).

Having read such a book, you will know how visual input is processed from the initial steps in the retina, through the thalamic nuclei, to the visual cortex, and you will have learned that perceiving something as an object engages areas in the temporal lobe, including the fusiform gyrus. You will have learned that memory — especially episodic and semantic memory — results from activity in the medial temporal lobe, and especially the hippocampus. You will know that theories of language and semantics point to the temporal lobe as important for their functioning.

All in all, you get a nice impression of how the brain is responsible for different perceptual and cognitive functions. But now consider the three examples: they all implicate the temporal lobe as important for their functioning. So does this mean that visual perception, memory and language reside in different, non-overlapping parts of the temporal lobe? If so, how do these areas or modules communicate with each other? What is the lingua franca of neurons communicating information from the visual senses to memory and semantics? Add to this that parts of the temporal lobe have been implicated in many other functions, including hearing (e.g. the planum temporale and Heschl’s gyrus) and odour processing (e.g. the entorhinal cortex). How does this fit with the other functions? Should we see the temporal lobe as a patchwork of distinct and neatly segregated functions?

For a long time the predominant view of the temporal lobe has been a strictly modular one: one part of the lobe processes visual input, while other parts house language and memory modules. Non-overlapping parts of a lobe, each tuned to process one kind – but not other kinds – of information.

But this view is changing dramatically. Following researchers such as Elisabeth Murray, David Gaffan and others (especially at the universities of Cambridge and Oxford, UK), the standard view of temporal lobe function is being revised. Instead of a functionally segregated model of the temporal lobe, these researchers now suggest that the lobe works in an entirely different way. In this area, often referred to as the medial temporal lobe, researchers have now documented not only multiple cognitive functions in a region once thought to be dedicated to memory, but also redundancy between its structures. Some examples:

  1. There is a functional specialization within the rhinal cortices beyond the involvement in memory: the entorhinal cortex is involved in odour perception as well as multi-modal conjunct perception, i.e. the perception of the entirety of a scene, including sights, sounds and more. The perirhinal cortex is involved in novelty processing, higher-order visual conjunct perception and discrimination, as well as high-specificity semantic processing.
  2. Specific and small anatomical regions are involved in different cognitive functions. For example, the perirhinal cortex has been shown to be involved in memory processes (particularly visual object encoding, but also other forms), novelty processing, semantic processing and higher-order visual perception and discrimination.

While point 1 does not conflict with a modular view of the brain-mind, point 2 poses a serious problem for any modularist view of the human mind and brain. In many respects, findings now converge on a view of the brain that stresses functional redundancy and degeneracy. In other words: A) one structure can participate in many different functions; and B) many structures are necessary parts of any given cognitive function. A 1:1 mapping between a cognitive function and its wetware is thus unsupported by today’s knowledge.
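The many-to-many claim can be stated as a toy data structure. The structure-to-function assignments below are simplified from the examples in this post, not an authoritative anatomical mapping:

```python
# Simplified structure→function map, based loosely on the examples above.
structure_functions = {
    "perirhinal cortex": {"memory", "novelty", "semantics", "visual discrimination"},
    "entorhinal cortex": {"memory", "odour perception", "conjunct perception"},
    "hippocampus": {"memory"},
}

def is_one_to_one(mapping):
    """True only if every structure serves exactly one function and
    no function is served by more than one structure."""
    seen = set()
    for funcs in mapping.values():
        if len(funcs) != 1 or funcs & seen:
            return False
        seen |= funcs
    return True

print(is_one_to_one(structure_functions))  # False: the mapping is many-to-many
```

Even in this caricature, "memory" appears under three structures and the perirhinal cortex serves four functions, which is exactly the redundancy-plus-degeneracy pattern the text describes.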

So take that cog-neurosci textbook again, leaf through its pages and ask yourself: how are these cognitive functions connected? Better still, take a chapter to your supervisor, lecturer or whomever you like and ask: “how does the temporal lobe deal with memory, language, visual perception and other multi-modal operations, and how are these processes tied together?” It would be interesting to hear the replies you get.


Read Full Post »

Neuroethics is an inherently historical field, although this is not always acknowledged: the cases and experiments discussed in the neuroethics literature took place at some point in the past, and in many cases it may be beneficial to consider the historical context when examining them. What were the theoretical assumptions that informed the experiments? What social and political factors surrounded a specific case story? An interesting example of this strategy is three new papers on “Neurosciences and the Third Reich” published in the September issue of the Journal of the History of the Neurosciences. As Axel Karenberg notes in an accompanying editorial: “What and how much can we learn from the past to help us better plan for the future? Every interaction with Nazism forces us to confront this question.”

The first of the three papers, Florian Steger’s essay “Neuropathological research at the ‘Deutsche Forschungsanstalt für Psychiatrie’ in Munich…”, examines how German scientists took advantage of the Nazi regime’s euthanasia policies to conduct unethical experiments on people, often children, who were later killed.

The second, Florian Schmaltz’s paper “Neurosciences and Research on Chemical Weapons of Mass Destruction in Nazi Germany”, looks at the neurochemical weapons development program, which was discontinued in 1942. While the resulting nerve gases were never widely used during World War II, the German researchers responsible for developing them were recruited by the Allied forces after the war.

And the third, an essay by Jürgen Pfeiffer entitled “Phases in the postwar German reception of the ‘Euthanasia Program’ (1939-1945)…”, explores how Germany after the war refused to deal with this dark chapter of German neuroscience and medical research.

At the end of his editorial, Axel Karenberg ponders what ethical lessons can be extracted from the history of neuroscience in the Third Reich and draws three conclusions, which I quote here:

1: Our view of the natural sciences has changed since the development of atomic weapons and the Nuremberg Trials. “Pure science” — the search for truth in ivory towers — no longer exists. Scientists today bear a moral responsibility for consequences and aberrations. The history of medicine in the Third Reich made abundantly clear how dependent medical research is on political pressure, social expectations, and financial considerations.
This is as true today as it was then, even when the circumstances have changed. The task of medical historians is to make this clear to current and future students and scientists.

2: Nazi medicine makes equally clear that it is not a question of good versus bad research — but that medical knowledge is always in conflict with ethical values. Ever since medicine became a true natural science, this union has had an inherently destructive potential. Where and whenever the desire for scientific progress dominates and is made superior to all other moral values, that is where and when the “dark side” of medicine will be found: in the Third Reich, in other totalitarian regimes, and even in democratic states lacking strong ethics. Inhuman and inhumane progress is possible in any kind of research when the only goal is the acquisition of knowledge. This is and shall remain true.

3: Finally, writing history is not the same as personal memory. History is constructed memory based on dates and facts. Personal memory is subjective and emotional. Society and above all science needs both forms of cultural remembrance in order to fully understand itself. This is especially the case for German medicine when recalling those crimes committed in the name of science. I would thus like to conclude this introduction with two citations dealing with the power of self-criticism and memory. The first is from the First Book of Kings (19:4): “I am no better than my ancestors.” The second is from a speech given by former German President Richard von Weizsäcker: “The secret of reconciliation is remembering.”

It would certainly be interesting, in the future, to see a more direct integration of historical research with neuroethics discussions.


Read Full Post »

Marc Hauser has a new book out (Moral Minds, HarperCollins, 2006) in which he argues that we should analyze moral cognition as a “universal grammar” along the lines of Chomsky’s programme for linguistics. I haven’t had the opportunity to read Hauser’s book yet, so I will refrain from commenting on it until I have (hint, hint, HarperCollins!). However, Richard Rorty has read it and reviews it in today’s New York Times Sunday Book Review. He raises a number of obvious questions – what qualifies as moral cognition? what can we learn about moral behaviour from studying the brain? etc. – which it would be interesting to see Hauser answer.


Read Full Post »

It’s not every day that we see a new journal emerge. However, Springer is now launching a new journal called Brain Imaging and Behavior. According to the mission statement, the goal of the journal is to

publish innovative, clinically-relevant research using neuroimaging approaches to enhance the understanding of neural mechanisms underlying disorders of cognition, affect and motivation, and their treatment or prevention.

In this sense, the journal seems to have the ultimate goal of understanding and treating disease. However, as they write, research on individual differences in the representation of normal functions is important as well. What I find particularly interesting is that “brain imaging” is taken to cover a whole range of imaging methods for studying the brain, from imaging of higher cognitive functions to molecular imaging. This opens the door to approaches that combine genetics, behaviour and neuroimaging, a.k.a. imaging genetics.

Brain Imaging and Behavior sounds like a very interesting initiative, although I had problems finding the first online articles (this link). And would it not be better if the journal, like any science journal, were free? It would be good if they followed the example of PLoS.


Read Full Post »

The race is on to pinpoint how the human genome has changed since the last common ancestor of chimps and hominids. With more and more genomes being sequenced, it has become possible to compare species and locate the regions where DNA has remained static over the last ~5 million years, and where it has evolved rapidly. An extremely exciting paper reporting such a genome-wide scan across a range of animals, including humans, was put online today at the Nature website. The paper identifies 49 so-called Human Accelerated Regions (HARs), where sequences are evolutionarily conserved among many mammals but have diverged rapidly in humans since the last chimp-human ancestor. The fastest-evolving of these, dubbed HAR1, has accrued 18 changes in this time. As it turns out, HAR1 is not a protein-coding sequence but part of a non-coding RNA gene. This is rather interesting, since geneticists (and evolutionary psychologists) generally assume that adaptation works on protein-coding genes. This new result may indicate that many of the adaptations setting the human genome apart from the chimp genome have taken place outside the genome’s protein-coding sequences.
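The logic of such a scan can be sketched in a few lines: flag a region when the non-human species agree with each other (conservation) while the human sequence carries many changes (acceleration). The toy sequences and thresholds below are invented for illustration; the actual analysis fits phylogenetic substitution models rather than counting raw mismatches:

```python
def substitutions(seq_a, seq_b):
    """Count positions where two aligned sequences differ."""
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

def is_accelerated(human, chimp, others, max_background=1, min_human=3):
    """Flag a region as 'accelerated' in humans: the non-human species
    are near-identical to each other (conserved), yet the human
    sequence has accrued many changes relative to chimp."""
    conserved = all(substitutions(chimp, o) <= max_background for o in others)
    diverged = substitutions(human, chimp) >= min_human
    return conserved and diverged

# Toy aligned region (invented): mouse/rat/chimp nearly agree, human differs.
chimp = "ACGTACGTACGT"
mouse = "ACGTACGTACGT"
rat   = "ACGTACGAACGT"   # one background change
human = "AGGTACCTACTT"   # several human-specific changes
print(is_accelerated(human, chimp, [mouse, rat]))  # True
```

The key asymmetry is that divergence is measured only on the human branch: a region that is sloppy in every species never qualifies, no matter how different the human copy is.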

Equally interesting is the fact that one of the two RNA genes containing HAR1, HAR1F, is expressed in human Cajal-Retzius cells, which are thought to play a crucial role in directing migrating neurons during development. HAR1F may thus be linked to the expansion of the human cortex that began some 2 million years ago. In a short news article in today’s Nature, Gerton Lunter speculates on what HAR1F‘s functional role might be:

What the gene does is a mystery, but there are some guesses. “Given that it’s changed so dramatically only for humans, it might be involved in human-specific brain wiring,” says Gerton Lunter at the University of Oxford, UK. One thing is becoming clear: protein-coding genes may not be the movers and shakers of human evolution scientists once thought. “We should stop looking at proteins and start looking at non-coding DNA,” says Lunter. “Everything points in that direction.”

Now, if we could just manage to extract DNA samples from the various hominid fossils linked with major evolutionary changes (Homo erectus and so on), we might some day end up with a clear picture of how the hominid brain has evolved. What an amazing triumph that would be!

By the way, in last week’s Science, Paul Mellars has a nice review showing how genetic analysis also illuminates our understanding of the dispersal of modern human populations out of Africa some 60,000 years ago. You might want to read that as well.


Pollard, K. et al. (2006): An RNA gene expressed during cortical development evolved rapidly in humans. Nature, in press.

Mellars, P. (2006): Going east: New genetic and archaeological perspectives on the modern human colonization of Eurasia. Science 313: 796-800.


UPDATE: You can find nice comments on this truly pathbreaking paper at Carl Zimmer’s, John Hawks’ and the Gene Expression weblogs.

Read Full Post »

Recently Thomas wrote about a paper by Yulia Kovas and Robert Plomin in the May issue of TICS discussing the implications of the fact that a great number of genes – dubbed “generalist” genes – affect not one, but most cognitive abilities. One obvious implication is that, if most genes expressed in the brain affect several areas of the brain, the massive modularity hypothesis (MMH) might not hold true. As Kovas and Plomin wrote in the conclusion of their paper:

Our opinion outlined in this article is that the generalist genes hypothesis is correct and that genetic input into brain structure and function is general (distributed) not specific (modular). The key genetic concepts of pleiotropy and polygenicity increase the plausibility of this opinion. Generalist genes have far-reaching implications for cognitive neuroscience because their pleiotropic and polygenic effects perfuse the transcriptome, the proteome and the brain. This is more than a ‘life-is-complicated’ message. DNA and RNA microarrays provide powerful tools that will ultimately make it possible for cognitive neuroscience to incorporate the trait-specific genome and transcriptome even if hundreds of genes affect individual differences in a particular brain or cognitive trait. The more immediate impact of generalist genes will be to change the way in which we think about the relationship among the genome, the transcriptome and the ‘phenome’ of the brain and cognition.

As Thomas was quick to remark, this idea is of course sure to infuriate proponents of the MMH. It therefore comes as no surprise that Gary Marcus and Hugh Rabagliati have a letter in next month’s TICS criticizing Kovas and Plomin’s article. Here is their argument for upholding the MMH:

Genes are in essence instructions for fabricating biological structure. In the construction of a house, one finds both some repeated motifs and some specializations for particular rooms. Every room has doors, electrical wiring, insulation and walls built upon a frame of wooden studs. However, the washroom and kitchen vary in the particulars of how they use plumbing array fixtures, and only a garage is likely to be equipped with electric doors (using a novel combination of electrical wiring and ‘doorness’). Constructing a home requires both domain-general and domain-specific techniques. The specialization of a given room principally derives from the ways in which high-level directives guide the precise implementation of low-level domain-general techniques. When it comes to neural function, the real question is how ‘generalist genes’ fit into the larger picture. Continuing the analogy, one might ask whether different ‘rooms’ of the brain are all built according to exactly the same plan, or whether they differ in important ways, while depending on common infrastructure. Kovas and Plomin presume that the sheer preponderance of domain-general genes implies a single common blueprint for the mind, but it is possible that the generalist genes are responsible only for infrastructure (e.g. the construction of receptors, neurotransmitters, dendritic spines, synaptic vesicles and axonal filaments), with a smaller number of specialist genes supervising in a way that still yields a substantial amount of modular structure.

The interesting thing about this discussion between Plomin and Marcus is the fact that the question that they raise can be investigated empirically, as Kovas and Plomin note in a reply to Marcus and Rabagliati:

Finding high genetic correlations means that genes must be generalists at the psychometric level at which these traits have been assessed. Therefore, a genetic polymorphism that is associated with individual differences in a particular cognitive ability will also be associated with other abilities. The question is how these generalist genes work in the brain. Does a genetic polymorphism affect just one brain structure or function, which then affects many cognitive processes, as suggested by a modular view of brain structure and function (mechanism 1 in [Kovas and Plomin’s original article])? This model assumes that brain structures and functions are not genetically correlated – genetic correlations arise only at the level of cognition. Another possibility, which we think is more probable, is that the origin of the general effect of a genetic polymorphism is in the brain because the polymorphism affects many brain structures and functions (mechanisms 2 and 3 in [Kovas and Plomin’s original article]). Of course, some polymorphisms might have general effects via mechanism 1 and other polymorphisms might have general effects via mechanisms 2 and 3, as Marcus and Rabagliati suggest. Fortunately, this is an empirical issue about DNA polymorphisms that does not require resorting to metaphors such as house-building. We did not say that the case for mechanism 3 was proven, which is what Marcus and Rabagliati imply with their partial quote. The full quote from our article is: ‘In our opinion, these two key genetic concepts of pleiotropy and polygenicity suggest that the genetic input into brain structure and function is general not modular’. Pleiotropy (in which a gene affects many traits) is a general rule of genetics. Polygenicity (in which many genes affect a trait) is becoming another rule of genetics for complex traits and common disorders. As we point out, polygenicity greatly multiplies and magnifies the pleiotropic effects of generalist genes.

A more empirical reason for suggesting that the origin of generalist genes is in the brain is that gene-expression maps of the brain generally indicate widespread expression of cognition-related genes throughout the brain.

I second that sentiment. It would be a big step forward if the massive modularity discussion moved beyond mere speculation and became grounded in empirical data.
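The pleiotropy half of the argument can even be caricatured in a toy simulation: if each "generalist" locus feeds a genetic value shared by two traits, the traits come out strongly correlated even though each has its own independent environmental noise. Everything here (effect sizes, sample size, noise levels) is invented purely to illustrate the concept:

```python
import random

random.seed(1)

N_PEOPLE, N_LOCI = 2000, 100
# One effect size per locus, applied to BOTH traits (pleiotropy).
effects = [random.gauss(0, 1) for _ in range(N_LOCI)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

trait1, trait2 = [], []
for _ in range(N_PEOPLE):
    genotype = [random.randint(0, 2) for _ in range(N_LOCI)]  # 0/1/2 allele counts
    g = sum(b * x for b, x in zip(effects, genotype))  # shared genetic value
    trait1.append(g + random.gauss(0, 3))  # trait-specific environment/noise
    trait2.append(g + random.gauss(0, 3))

print(round(pearson(trait1, trait2), 2))  # high: shared loci induce trait correlation
```

This is mechanism 2/3 in miniature; a mechanism-1 version would route each locus through a single intermediate "brain structure" variable first, and distinguishing the two empirically is exactly the project Kovas and Plomin describe.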


Kovas, Y. & Plomin, R. (2006): Generalist genes: implications for the cognitive sciences. Trends in Cognitive Sciences 10: 198-203.

Marcus, G. & Rabagliati, H. (2006): Genes and domain specificity. Trends in Cognitive Sciences, in press.

Kovas, Y. & Plomin, R. (2006): Response to Marcus and Rabagliati. Trends in Cognitive Sciences, in press.


Read Full Post »

I just saw the TV documentary “The Root of All Evil?”, hosted by Richard Dawkins, one of today’s most ardent defenders of evolutionary theory, and of science in general. Over two episodes, Dawkins argues that the world would be better off without religion, since it distorts our view of ourselves as human beings and our place in the universe, of society, and of morals and law.

Dawkins seeks out confrontation with religious leaders such as Pastor Ted Haggard of the New Life Church in Colorado Springs, and Joseph Cohen, now Yousef al-Khattab, in Jerusalem, an American-born Jew who came to Israel as a settler before converting to Islam. On both occasions, Dawkins attempts to discuss the differences in their respective understandings of humanity, the world, morals and so on. The problem is that neither Haggard nor al-Khattab agrees that scientific evidence must be regarded as true. Basically, they think that science is “just another religion”, and that you have to believe in it, just as you do in God or Allah.

So what does Dawkins do, this ardent proponent of evolution? He goes numb! No doubt he is utterly surprised by the backwards logic that the religious representatives display. But why doesn’t he do as he says in the beginning of his documentary:

The time has come for people of reason to say: enough is enough. Religious faith discourages independent thought, it’s divisive, and it’s dangerous.

No, Dawkins goes numb and has little else to say, it seems. Although Wikipedia gives you the details of this documentary, it does not fully convey how Dawkins fails to apply the otherwise crisp and clear logic you can read in any of his books.

I think what we need is a new Darwin’s bulldog, or maybe a Dawkins’s bulldog? Or, even better, evolution needs a butcher. We need someone who can pick apart every single piece of misinformation, every failure of logic and every stupidity that can be found in religion. We need it to happen online, in front of our eyes, on the telly, on the radio. We need someone to ridicule religious belief, to show how it relies on nothing but anecdotes presented as axioms; to show that religions are cultural memes, potentially lethal mental viruses. Someone who asks “why should I believe in your religion rather than this one over here?” and “why can’t my self-made, armchair-philosophy religion be just as good as yours?”. Someone who gets rabbis, muftis and bishops together to tell us what’s really right, and why we should listen to only some parts of the Bible, or the Koran, and dismiss other parts as culturally twisted tales.

Science has played the nice guy all along, because science is not a movement. It’s a method. It’s a way of asking “is this true, and can you support or reject it?”. How you choose to study a phenomenon is your own choice, but you open yourself up to testing whether theories A, B and C are correct, whether only one is correct, or whether all are false.

So we need a butcher, probably a whole association of evolutionary butchers. Let’s call them Dawkins’ Butchers. Who wants to join?


Read Full Post »

As noted in my previous post, the impact of alcohol on brain maturation in adolescence is still considered an open question, although studies indicate that alcohol exposure is even more damaging in adolescence than in adulthood. This is not surprising at all. Alcohol crosses the blood-brain barrier to influence the function of neurons. Actually, “influence” is not the right term: “intoxicates” or “poisons” the brain is more correct. After all, the effect of alcohol on your state of mind is due to a state of intoxication.

Alcohol is of course only one of many substances that affect brain function and are used for recreational purposes. Other psychoactive substances include nicotine/tobacco, cannabis, cocaine and LSD. Adolescents vary in their use of such drugs: in the age of debut, the regularity of use, and combined – or polydrug – use. The question is, then, what causes this variation?

In a special issue of Behavior Genetics, the genetic and environmental causes of substance use are explored and reviewed. For example, Jason Pagan and colleagues study the causes of alcohol use in adolescence, and conclude that

(…) there was no significant evidence of shared environmental influences on alcohol problems in early adulthood. Problems were largely influenced by genetic factors that overlapped with genetic influences on frequency of use. Unique environmental factors were largely specific to each stage, with some overlap between alcohol problems and frequency of use at age 25.
Danielle Dick and her colleagues, on the other hand, find that a specific gene, GABRA2, shows two specific patterns in relation to adolescent and adult alcohol abuse. First, they found that a consistent elevation in risk for alcohol dependence associated with GABRA2 is not evident until the mid-20s, after which it remains throughout adulthood. Second, GABRA2 was also associated with other drug dependence in their sample, both in adolescence and in adulthood. So this gene may indeed be a causative factor in the development of drug use in general, as some findings seem to indicate. GABRA2 has repeatedly been linked to alcoholism. For example, Danielle Dick herself has published data showing a complex relationship between marital status, alcohol dependence and GABRA2, concluding in another publication this year:

These analyses provide evidence of both gene-environment correlation and gene-environment interaction associated with GABRA2, marital status, and alcohol dependence. They illustrate the complex pathways by which genotype and environmental risk factors act and interact to influence alcohol dependence and challenge traditional conceptualizations of “environmental” risk factors.
Anyway, the special issue of Behavior Genetics has several good articles on gene-environment interaction effects on the development of substance use disorders. It’s a must-read for anyone interested in genes, brain and behaviour.


Read Full Post »

Older Posts »