Archive for the ‘cognitive science’ Category

I am a sucker for lists, so please bear with me: In a forthcoming editorial, Shbana Rahman, the editor of the great journal Trends in Cognitive Sciences, celebrates the ten-year anniversary of TICS by printing short reflections from a number of fat cats in the cognitive neurosciences – John Anderson, Nick Chater, Jon Driver, Jerry Fodor, Marc Hauser, Phil Johnson-Laird, Steven Kosslyn, Jay McClelland, George A. Miller, Lynn Nadel, Steven Pinker, Zenon Pylyshyn, Trevor Robbins, and Vincent Walsh – on what has been the most “exciting discovery or theory of the past ten years”. Naturally, there are as many different answers as people asked: the shift from computational models to probabilistic models (Chater), Gergely and Csibra’s experiments on rational imitation in infants (Hauser), research on how intuitions determine judgments (Johnson-Laird), mirror neurons (Nadel), etc., etc. The funniest entry by far is Fodor’s:

What with brain imaging and neural nets, it will be a hard ten years to forget. But I’m working on it. Hopes for the future: (i) the further erosion of attempts to apply the adaptationist paradigm to the evolution of cognitive and linguistic phenotypes; concurrently, its replacement by an account that stresses the “hidden” constraints on phylogeny imposed by neurology, genetics, biochemistry, ontogeny and so forth; (ii) the development of a serious referential/causal semantics for mental representations.

My own suggestions, just off the top of my head, would be:

(1) The rapidly growing understanding of the role played by emotions in various forms of “higher” cognition.

(2) Research on the interplay between genes, brain processes and the environment in producing behaviour – especially during development. Hopefully, outdated words such as “innate” will soon disappear from the vocabulary of cognitive neuroscientists.

(3) Decision-making. By this I mean research on making judgments, forming preferences among possible choices, neuroeconomics, neuromarketing, and neuroethics.

What would be your choices?


Read Full Post »

Is binding the single most important concept in neuroscience? I think it is, even without making the concept too general or vague. On the contrary, binding seems to be a general concept for understanding the workings of the brain. No more need for separate modules of perception, cognition, memory and action. Binding is the solution.

More specifically, what is binding? Or, to reframe the question completely: what happens when the brain works? To many, the brain binds information together at all levels. If you perceive an object, that particular object is a mixture of colour, form, position, movement, etc., that is bound together. If you look at the early sensory processes in the brain, we know that the features of an object are treated by separate processes. Accordingly, they can be lesioned separately, leading, for example, to acquired colour blindness with intact movement perception.


Read Full Post »

In an interesting paper in the latest issue of Progress in Neurobiology, Yuri I. Arshavsky from UCSD writes about the epistemological dualism that exists in modern neuroscience. Basically, Arshavsky claims that there is a covert dualism in the way neuroscientists treat mind-related topics, especially the study of “consciousness”. Indeed, as he claims:

This covert dualism seems to be rooted in the main paradigm of neuroscience that suggests that cognitive functions, such as language production and comprehension, face recognition, declarative memory, emotions, etc., are performed by neural networks consisting of simple elements.

This might initially sound a bit strange. Are not cognitive functions such as face perception due to the operation of simple elements? Face perception as such is a combination of many simple processes that operate in unison. So what is Arshavsky proposing? He suggests the existence of a certain kind of brain cell:

(The) performance of cognitive functions is based on complex cooperative activity of “complex” neurons that are carriers of “elementary cognition.” The uniqueness of human cognitive functions, which has a genetic basis, is determined by the specificity of genes expressed by these “complex” neurons. The main goal of the review is to show that the identification of the genes implicated in cognitive functions and the understanding of a functional role of their products is a possible way to overcome covert dualism in neuroscience.

So there should exist a subset of neurons that integrate information from a variety of inputs. This sounds strange, since all neurons integrate thousands of inputs, often from a wide variety of sources. So what are complex neurons? Here, we are told that:

(…) neural networks involved in performing cognitive functions are formed not by simple neurons whose function is limited to the generation of electrical potentials and transmission of signals to other neurons, but by complex neurons that can be regarded as carriers of “elementary” cognition. The performance of cognitive functions is based on the cooperative activity of this type of complex neurons.

In this way, complex neurons seem to be integrative neurons, i.e. cells that integrate information from a variety of processes. This could include the multi-modal neurons found in the functional sub-structures of the medial temporal lobe, such as the hippocampus and the perirhinal, entorhinal and temporopolar cortices. But would it not also include the colour-processing nodes in the visual cortex? Which, in my opinion, leads us back to a basic question: what is a functional unit in the brain? Yes, the neuron is a basic building block of information processing in the brain. But what is special about language, memory and so forth in the brain?

It is possible that Arshavsky is not radical enough: we should avoid using generalistic and folk-psychological concepts in the first place. We should possibly not study “language”, “memory” or “consciousness”, since these concepts will always allude to fundamental assumptions of “language-ness”, “memory-ness” and “consciousness-ness” – in other words, that there is something more to explain after we have found out how the brain produces what we recognize and label a cognitive function.

Maybe neuroscientists are not using a poor strategy after all? Maybe ignoring the history of philosophy of mind is the best solution. I’m not sure (nor am I sure that I represent Arshavsky’s view properly). But how we choose to label a cognitive function depends on our historical influences and learning, as well as our current approach.


Read Full Post »

Take any textbook on cognitive neuroscience. If you go through the book you will see that there are chapters on perception (e.g. vision), memory, and language. Each chapter has its own vocabulary, theories and experimental evidence. Each chapter may even have been written by a different author (i.e. authority).

Having read such a book, you will know how visual input is processed from the initial steps in the retina, through the thalamic nuclei, to the visual cortex, and that when you perceive something as an object you make use of areas in the temporal lobe, including the fusiform gyrus. You will have learned that memory – especially episodic and semantic memory – is a result of activity occurring in the medial temporal lobe, especially the hippocampus. You will know that theories of language and semantics point to the temporal lobe as important for their functioning.

All in all, you have a nice impression of how the brain is responsible for different perceptual and cognitive functions. But think now of the three examples: they all seem to implicate the temporal lobe as important for their functioning. So does this mean that visual perception, memory and language reside in different, non-overlapping parts of the temporal lobe? If so, how do these areas or modules communicate with each other? What is the lingua franca of neurons communicating information from the visual senses to memory and semantics? Add on top of this that parts of the temporal lobe have been implicated in many other functions, including hearing (e.g. the planum temporale and Heschl’s gyrus) and odour processing (e.g. the entorhinal cortex). How do these combine with the other functions? Should we see the temporal lobe as a patchwork of distinct and neatly segregated functions?

For a long time the predominant view of the temporal lobe has been a strictly modular one: one part of the lobe processes visual input, other parts house language and memory modules – non-overlapping parts of a lobe, each tuned to process one kind, but not other kinds, of information.

But this view is changing dramatically. Today, following researchers such as Elisabeth Murray, David Gaffan and others (especially from the universities of Cambridge and Oxford, UK), the standard view of temporal lobe function is shifting. Instead of a functionally segregated model of the temporal lobe, these researchers suggest that the lobe works in an entirely different way. In this area, often referred to as the medial temporal lobe, researchers have now documented not only multiple cognitive functions in a region once thought to be dedicated to memory, but also redundancy between its structures. Some examples:

  1. There is a functional specialization within the rhinal cortices beyond the involvement in memory: the entorhinal cortex is involved in odour perception as well as multi-modal conjunct perception, i.e. the perception of the entirety of a scene, including sights, sounds and more. The perirhinal cortex is involved in novelty processing, higher-order visual conjunct perception and discrimination, as well as high-specificity semantic processing.
  2. Specific and small anatomical regions are involved in different cognitive functions. For example, the perirhinal cortex has been shown to be involved in memory processes (particularly visual object encoding, but also other forms), novelty processing, semantic processing, and high-order visual perception and discrimination.

While point 1 does not conflict with a modular view of the brain-mind, point 2 poses a serious problem to any modularist view of the human mind and brain. In many respects, findings now converge on a view of the brain that stresses functional redundancy and degeneracy. In other words: A) one structure can participate in many different functions; and B) many structures are necessary parts of any given cognitive function. A mapping of a 1:1 relationship between a cognitive function and its wetware is thus unsupported by today’s knowledge.
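The many-to-many mapping this implies can be made concrete with a small sketch (the structure/function pairings below are simplified illustrations drawn from the examples above, not an authoritative atlas):

```python
# Toy illustration of functional redundancy and degeneracy in the
# medial temporal lobe. The pairings are simplified examples from
# the text above, not a complete or authoritative mapping.
structure_functions = {
    "perirhinal cortex": {"object encoding", "novelty processing",
                          "semantic processing", "visual discrimination"},
    "entorhinal cortex": {"odour perception", "conjunct perception",
                          "object encoding"},
    "hippocampus":       {"episodic memory", "object encoding"},
}

# (A) one structure participates in many different functions
for structure, functions in structure_functions.items():
    print(f"{structure}: {len(functions)} functions")

# (B) one function recruits many structures (invert the mapping)
function_structures = {}
for structure, functions in structure_functions.items():
    for f in functions:
        function_structures.setdefault(f, set()).add(structure)

print(sorted(function_structures["object encoding"]))
```

Inverting the mapping makes the degeneracy explicit: "object encoding" turns up under all three structures, so no 1:1 structure-to-function assignment can be read off the table in either direction.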

So take that cog-neurosci textbook again, flip through its pages and ask yourself: how are these cognitive functions connected? Better still, take a chapter to your supervisor, lecturer or whomever you want and ask: “how does the temporal lobe deal with memory, language, visual perception and other multi-modal operations, and how are these processes tied together?” It would be interesting to hear the replies you get.


Read Full Post »

Recently Thomas wrote about a paper by Yulia Kovas and Robert Plomin in the May issue of TICS discussing the implications of the fact that a great number of genes – dubbed “generalist” genes – affect not one, but most cognitive abilities. One obvious implication is that, if most genes being expressed in the brain affect several areas of the brain, the massive modularity hypothesis (MMH) might not hold true. As Kovas and Plomin wrote in the conclusion to their paper:

Our opinion outlined in this article is that the generalist genes hypothesis is correct and that genetic input into brain structure and function is general (distributed) not specific (modular). The key genetic concepts of pleiotropy and polygenicity increase the plausibility of this opinion. Generalist genes have far-reaching implications for cognitive neuroscience because their pleiotropic and polygenic effects perfuse the transcriptome, the proteome and the brain. This is more than a ‘life-is-complicated’ message. DNA and RNA microarrays provide powerful tools that will ultimately make it possible for cognitive neuroscience to incorporate the trait-specific genome and transcriptome even if hundreds of genes affect individual differences in a particular brain or cognitive trait. The more immediate impact of generalist genes will be to change the way in which we think about the relationship among the genome, the transcriptome and the ‘phenome’ of the brain and cognition.

As Thomas was quick to remark, this idea is of course sure to infuriate proponents of the MMH. Therefore, it comes as no surprise that Gary Marcus and Hugh Rabagliati have a letter in next month’s TICS criticizing Kovas and Plomin’s article. Here is their argument for upholding the MMH:

Genes are in essence instructions for fabricating biological structure. In the construction of a house, one finds both some repeated motifs and some specializations for particular rooms. Every room has doors, electrical wiring, insulation and walls built upon a frame of wooden studs. However, the washroom and kitchen vary in the particulars of how they use plumbing array fixtures, and only a garage is likely to be equipped with electric doors (using a novel combination of electrical wiring and ‘doorness’). Constructing a home requires both domain-general and domain-specific techniques. The specialization of a given room principally derives from the ways in which high-level directives guide the precise implementation of low-level domain-general techniques. When it comes to neural function, the real question is how ‘generalist genes’ fit into the larger picture. Continuing the analogy, one might ask whether different ‘rooms’ of the brain are all built according to exactly the same plan, or whether they differ in important ways, while depending on common infrastructure. Kovas and Plomin presume that the sheer preponderance of domain-general genes implies a single common blueprint for the mind, but it is possible that the generalist genes are responsible only for infrastructure (e.g. the construction of receptors, neurotransmitters, dendritic spines, synaptic vesicles and axonal filaments), with a smaller number of specialist genes supervising in a way that still yields a substantial amount of modular structure.

The interesting thing about this discussion between Plomin and Marcus is the fact that the question that they raise can be investigated empirically, as Kovas and Plomin note in a reply to Marcus and Rabagliati:

Finding high genetic correlations means that genes must be generalists at the psychometric level at which these traits have been assessed. Therefore, a genetic polymorphism that is associated with individual differences in a particular cognitive ability will also be associated with other abilities. The question is how these generalist genes work in the brain. Does a genetic polymorphism affect just one brain structure or function, which then affects many cognitive processes, as suggested by a modular view of brain structure and function (mechanism 1 in [Kovas and Plomin’s original article])? This model assumes that brain structures and functions are not genetically correlated – genetic correlations arise only at the level of cognition. Another possibility, which we think is more probable, is that the origin of the general effect of a genetic polymorphism is in the brain because the polymorphism affects many brain structures and functions (mechanisms 2 and 3 in [Kovas and Plomin’s original article]). Of course, some polymorphisms might have general effects via mechanism 1 and other polymorphisms might have general effects via mechanisms 2 and 3, as Marcus and Rabagliati suggest. Fortunately, this is an empirical issue about DNA polymorphisms that does not require resorting to metaphors such as house-building. We did not say that the case for mechanism 3 was proven, which is what Marcus and Rabagliati imply with their partial quote. The full quote from our article is: ‘In our opinion, these two key genetic concepts of pleiotropy and polygenicity suggest that the genetic input into brain structure and function is general not modular’. Pleiotropy (in which a gene affects many traits) is a general rule of genetics. Polygenicity (in which many genes affect a trait) is becoming another rule of genetics for complex traits and common disorders. As we point out, polygenicity greatly multiplies and magnifies the pleiotropic effects of generalist genes.

A more empirical reason for suggesting that the origin of generalist genes is in the brain is that gene-expression maps of the brain generally indicate widespread expression of cognition-related genes throughout the brain.

I second that sentiment. It would be a big step forward if the massive modularity discussion moved beyond mere speculation and became grounded in empirical data.
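As a toy illustration of why pleiotropy plus polygenicity makes generalist effects almost inevitable, here is a small simulation in the spirit of Kovas and Plomin’s mechanisms 2 and 3 (all gene counts, effect sizes and noise levels are my arbitrary assumptions, chosen only for illustration): each of many loci contributes additively to both of two traits, and a substantial correlation between the traits emerges even though each trait also has its own independent influences.

```python
import random
import statistics

random.seed(1)

# Each of n_genes "generalist" loci contributes additively to BOTH
# traits (pleiotropy); many loci per trait (polygenicity); Gaussian
# noise stands in for trait-specific influences. All parameter values
# are arbitrary illustration choices.
n_people, n_genes = 2000, 100

def person_traits():
    genotype = [random.choice((0, 1, 2)) for _ in range(n_genes)]
    shared = sum(genotype)                 # pleiotropic contribution
    trait_a = shared + random.gauss(0, 8)  # trait-specific noise
    trait_b = shared + random.gauss(0, 8)
    return trait_a, trait_b

a, b = zip(*(person_traits() for _ in range(n_people)))

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

# Pleiotropy alone induces a sizeable correlation between the two
# otherwise independently disturbed traits.
print(round(pearson(a, b), 2))
```

Making the polymorphisms observable in data like this is exactly what lets the mechanism-1 versus mechanism-2/3 question be settled empirically rather than by metaphor.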


Kovas, Y. & Plomin, R. (2006): Generalist genes: implications for the cognitive sciences. Trends in Cognitive Sciences 10: 198-203.

Marcus, G. & Rabagliati, H. (2006): Genes and domain specificity. Trends in Cognitive Sciences, in press.

Kovas, Y. & Plomin, R. (2006): Response to Marcus and Rabagliati. Trends in Cognitive Sciences, in press.


Read Full Post »

Does talking on your mobile phone influence the workings of your brain? Yes, claims a new study of healthy volunteers in Neuropsychologia. But it’s not all bad, it seems; some cognitive functions actually improve during mobile phone radiation exposure.

Mobile phone radiation and health concerns have been raised since the 1990s, especially following the enormous increase in the use of wireless mobile telephony throughout the world. This is because mobile phones use electromagnetic waves in the microwave range. These concerns have induced a large body of research (both epidemiological and experimental, in non-human animals as well as in humans). Concerns about effects on health have also been raised regarding other digital wireless systems, such as data communication networks.

Although previous studies have shown mixed and often conflicting results, these studies have been criticized for low statistical power. The current study included 120 subjects to improve the statistical power of the analyses: 58 males and 62 females, aged 18 to 70 years. Mind you, such a spread in age could actually be a confounding variable, and age should have been included as a factor in the analysis. Moreover, the current study does not seem to have controlled for age and gender effects in the sample.
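For a sense of scale on the power issue, here is a back-of-the-envelope sample-size calculation using the usual normal approximation for a two-group comparison (the effect sizes and the 0.05/0.80 conventions are my illustrative assumptions, not figures from the study):

```python
from statistics import NormalDist

# Normal-approximation sample size per group for detecting a
# standardized mean difference d at two-sided alpha with given power.
# The d values below are the conventional "medium" and "small"
# benchmarks, used here only for illustration.
def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return 2 * ((z_a + z_b) / d) ** 2

print(round(n_per_group(0.5)))  # medium effect: ~63 per group
print(round(n_per_group(0.3)))  # small effect: ~174 per group
```

So 120 subjects is comfortable for medium-sized effects but marginal for small ones, which is presumably the range mobile phone radiation effects would fall in.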

The researchers gave the subjects a number of different cognitive tasks tapping into functions such as reaction time, encoding, verbal comprehension, and working memory. Radiofrequency radiation was delivered through a regular Nokia mobile phone mounted on a helmet that subjects wore during cognitive testing. The study used a double-blind setup, so that neither the researchers nor the subjects knew whether the phone was transmitting (i.e. emitting radiofrequency radiation). Measures were taken to ensure that neither sound nor heating would let subjects detect when the phone was transmitting. Subjects performed the tests (a different version each time) under both sham and real radiofrequency stimulation.

What was found was that performance during radiofrequency exposure, compared to the sham condition, changed on several tests. While reaction time performance decreased during exposure, performance on the Trail Making test B, which loads working memory, increased. As the researchers write:

The results of this study provide statistical evidence of a cognitive difference in performance between the real and sham field [mobile phone] exposure conditions. The negative effects of [radiofrequency] exposure on [reaction time, RT] performance indicate that the more basic functions were adversely affected by exposure. In contrast, the improved RT for the working memory task suggests that [radiofrequency] exposure has a positive effect on tasks requiring higher level cortical functioning, such as working memory.

The results are also very interesting because several reports now support the view that using a mobile phone while driving leads to reaction times comparable to those after several alcoholic drinks. From this study it seems that it’s not only the talking on the phone that pulls your reaction time down; it’s also the mere radiation itself.

But are the conclusions warranted? For one thing, the cognitive measures are rather crude. The reaction time measures should be less problematic, but concluding that working memory improves on the basis of a Trail Making B score alone is really invalid. The Trail B is essentially a measure of how well people can switch from one serial counting mode to another (numbers or alphabet). Working memory is much more than keeping track of only one number or letter back. What should be applied is a so-called parametric n-back task (PDF). In addition, a better test battery should be used. Although the current focus has been on reaction time, it now seems likely that effects on higher-order cognitive functions may occur too. However, in order to draw firm conclusions, we need better cognitive test batteries.
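For readers unfamiliar with the task, here is a minimal sketch of how a parametric n-back sequence might be generated and scored – the point is that working-memory load is varied through the parameter n, which is exactly what Trail Making B cannot do (the stimulus set, sequence length and scoring are placeholder choices of mine, not the actual task from the linked paper):

```python
import random

# Minimal sketch of a parametric n-back task: a trial is a "target"
# when the stimulus matches the one presented n steps earlier, so
# raising n raises the working-memory load. Stimulus set and length
# are arbitrary placeholders.
def make_nback_trials(n, length=20, stimuli="ABCD", seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(stimuli) for _ in range(length)]
    targets = [i >= n and seq[i] == seq[i - n] for i in range(length)]
    return seq, targets

def score(responses, targets):
    """Proportion of trials where the subject's yes/no response
    matches the trial's target status."""
    correct = sum(r == t for r, t in zip(responses, targets))
    return correct / len(targets)

seq, targets = make_nback_trials(n=2)
print(score(targets, targets))  # a perfect responder scores 1.0
```

Running subjects at n = 1, 2, 3, … would give a load-response curve for each exposure condition, a far more defensible basis for claims about working memory than a single Trail B score.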

What, then, happens to our cognitive brains after prolonged exposure to radiofrequency waves? Here, we can only speculate, or as the authors point out:

The implications of this study can only be directed towards the effects of short-term exposure of [mobile phone radiofrequency radiation] on cognition. Further longitudinal research is required to determine the effects that long-term use of [mobile phones] (years) may have on health and cognition.

It’s still early days for studies on mobile phones, radiation and brain functioning. Many studies have used less than optimal sample sizes and problematic experimental designs. Nevertheless, these results seem to indicate that merely being exposed to radiofrequency radiation from a mobile phone induces changes in the brain’s workings. And now I start wondering whether my subjects in the MR scanner also change during scanning…


Read Full Post »

Today we received this nice email from Paul Watson at Psychology Press. They are launching a new site for cognitive neuroscience news. I’ll let the email speak for itself:

Hi Martin & Thomas

Just a quick note to say we’ve recently launched a new Cognitive Neuroscience Arena which I think might be of interest to you two.

(We = Psychology Press, publishers of the journal Social Neuroscience, which you commented on in your blog post on July 4th)

We’ve included a link to the Brain Ethics blog on our blogs page.

As well as all our relevant books and journals, we’ve included a few other features that may be of interest to you and your readers:

1. The whole of the first chapter of our textbook “The Student’s Guide to Cognitive Neuroscience” is available to read free online (we think it’s a great introduction to the subject)

2. In a similar vein, we’ve also got the introductory article from our journal Social Neuroscience, also available to read free online (this is the same one which is on the Social Neuroscience journal website which you posted about).

3. There’s also a page of links to the latest Cognitive Neuroscience blog posts (courtesy of Technorati)

4. And a nifty GoogleMap showing forthcoming Cognitive Neuroscience conferences (only 3 we know of at time of writing) at http://www.cognitiveneurosciencearena.com/resources/conferences.asp

And numerous other features including an RSS feed of our latest Cognitive Neuroscience books.

I’ve sent the link to your blog to Rose Allet, who runs the marketing for the Social Neuroscience journal here at Psychology Press, so she may also email you, and will probably send the URL of your blog to the editors of Social Neuroscience so they can see your comments.

If you’ve got any questions, feel free to drop me a line.


Paul Watson

Paul Watson, Senior E-Marketing Executive
Psychology Press



Read Full Post »

Older Posts »