Early-life Trauma Compromises Children's Ability to Process Emotions and May Predict Neuropsychiatric Illness Later in Life

A substantial body of research demonstrates that early-life trauma can profoundly compromise the way emotional information is processed, and can predict neuropsychiatric illness in adulthood.

In a recent fMRI study published in Neuropsychopharmacology, Marusak et al. (2015) hypothesized that children who had experienced trauma would have a more difficult time ignoring emotional cues that contradicted the emotional expressions in photographs. In addition, they postulated that such children would display a decreased ability to regulate emotional conflict when presented with mood-incongruent stimuli.

To test this hypothesis, Marusak et al. (2015) utilized the Emotional Conflict Task, a test that uses emotional distractors that directly contradict emotional information within a task. All participants in the study, those who had experienced trauma and those who had not, viewed a series of photographs that paired an emotional word with a facial expression. For example, a photograph of an angry face might be displayed with the word ‘HAPPY’ written across it. The participants were timed on their ability to indicate the emotion displayed by the person in the photograph.
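Behaviourally, tasks of this kind are typically scored as a congruency (conflict) effect: the slowdown on incongruent trials relative to congruent ones. Here is a minimal sketch of that scoring, using invented reaction times rather than the study's data:

```python
from statistics import mean

# Each trial pairs a facial expression with an overlaid emotion word.
# A trial is congruent when the word matches the expression.
# All numbers below are invented for illustration.
trials = [
    {"face": "happy", "word": "HAPPY", "rt_ms": 640},
    {"face": "angry", "word": "ANGRY", "rt_ms": 655},
    {"face": "happy", "word": "ANGRY", "rt_ms": 780},
    {"face": "angry", "word": "HAPPY", "rt_ms": 810},
]

def conflict_effect(trials):
    """Mean RT on incongruent trials minus mean RT on congruent trials (ms)."""
    congruent = [t["rt_ms"] for t in trials if t["face"].upper() == t["word"]]
    incongruent = [t["rt_ms"] for t in trials if t["face"].upper() != t["word"]]
    return mean(incongruent) - mean(congruent)

print(conflict_effect(trials))  # 147.5
```

A larger conflict effect means the emotional distractor interfered more with identifying the face, which is the behavioural pattern reported for the trauma-exposed group.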

In their fMRI analysis, Marusak et al. (2015) focused chiefly on the amygdala-pregenual anterior cingulate cortex (pgACC) network because the amygdala plays a critical role in the processing of emotional information, and the pgACC has previously been identified as a pathway involved in ‘emotional conflict regulation’ via the suppression of amygdalar activity.

There were three significant findings. First, children and adolescents who experienced trauma displayed increased amygdalar reactivity when presented with incongruent emotional stimuli—what the researchers defined as ‘emotional conflict.’ Second, children and adolescents exposed to trauma had difficulty regulating emotional conflict, meaning it took them longer to identify the depicted facial expressions and they did so with less accuracy. Related to this finding, Marusak et al. (2015) also observed an inability of traumatized participants to regulate the dorsolateral prefrontal cortex (DLPFC), an area known for its involvement in effortful attentional control. Third, activation of the amygdala in response to emotional conflict was linked to dampened reward sensitivity in children and adolescents with trauma exposure.

According to Marusak et al. (2015), trauma-exposed youth who show exaggerated amygdala reactivity when presented with emotional conflict likely have altered neural networks responsible for attending to salient emotional cues—meaning that their heightened amygdalar activity results in hypervigilance toward emotional cues.

Adults with depression and anxiety disorders have also been shown to have similar deficits in the amygdala-pgACC pathway, as well as elevated DLPFC activity. While such alterations could be the result of illness, the researchers suggest that they may also reflect lasting changes in circuitry in response to early trauma.

 
Marusak, H. A., Martin, K. R., Etkin, A., & Thomason, M. E. (2015). Childhood trauma exposure disrupts the automatic regulation of emotional processing. Neuropsychopharmacology, 40(5), 1250. doi:10.1038/npp.2014.311

High Motivation Linked to Creativity in Bipolar Disorder

A considerable amount of research suggests that bipolar disorder (BD) is linked to creativity. However, little is known about the underlying mechanisms that may facilitate heightened creativity among those with BD or those at risk of developing it. Many have speculated that the creative abilities associated with BD may be influenced by cognitive advantages that occur during mania, such as an influx of ideas and increased stamina. In a recent paper in the Journal of Affective Disorders, Ruiter & Johnson (2015) suggest that creativity may be linked to the distinctive motivational profiles that are common among those with BD. They propose that inherent differences in motivation in individuals with BD may act as an impetus for creative accomplishment. To explore this link between creativity and BD, Ruiter & Johnson administered a large battery of questionnaires that measured participants’ lifetime creative achievements, personality traits related to creativity, and their levels of intrinsic and extrinsic motivation.

Intrinsic motivation involves the desire to partake in a task or activity because one finds it personally rewarding and stimulating. While engaged in activities that produce intrinsic rewards, people often experience greater insight and what is sometimes referred to as ‘flow.’ Flow is described as a feeling of pleasure associated with being fully focused and absorbed in a task (e.g., the person may not notice the passage of time). Many creative pursuits present the opportunity to be rewarded intrinsically. Extrinsic motivation, on the other hand, describes motivation that is influenced by forces outside the individual, whereby a person is driven to perform a given behaviour because of external rewards such as money or social recognition.

Based on the self-reports of participants in their study, Ruiter & Johnson suggest that creativity in individuals with BD may be partly attributed to their having higher levels of both intrinsic and extrinsic motivation. Participants with BD, and those considered vulnerable to developing BD, set markedly higher goals, exhibited prolonged engagement when executing tasks, were more highly motivated to pursue rewards, and endorsed considerably higher lifetime aspirations than those without BD. The researchers also found that BD was related to social dominance, in that individuals with BD demonstrated a longing to have their creative achievements positively regarded by others, and this desire appeared to help facilitate success.

Ruiter & Johnson also cleverly point out that drawing attention to the creativity that is commonly associated with BD may help reduce the stigma associated with this illness.

 
Ruiter, M., & Johnson, S. L. (2015). Mania risk and creativity: A multi-method study of the role of motivation. Journal of Affective Disorders, 170, 52-58. doi:10.1016/j.jad.2014.08.049

Psychedelic Drugs: A Catalyst for Mental Illness or Urban Myth?

 


For nearly 6000 years, people have experimented with psychedelic drugs (e.g., LSD, psilocybin, peyote) as a means of enhancing mental and spiritual experiences, and to achieve greater self-insight and growth. More than 30 million adults in the United States have reported trying a psychedelic drug at least once in their lifetime. Psychedelics can have a wide range of psychological effects, including an overwhelming feeling of enlightenment, feelings of euphoria, intense laughter, and vivid visual and auditory hallucinations.

Recently, researchers have shown a renewed interest in the use of psychedelics for the treatment of several mental health problems, including depression, alcoholism, and nicotine addiction. However, research on the therapeutic effects of psychedelic drugs has largely been stifled because many governments, despite the existence of strong empirical evidence for their therapeutic efficacy, classify psychedelics as Schedule I drugs. Under national and international laws, psychedelics are controlled substances because of their presumed addictive properties and potential for great harm. These laws have been partly influenced by the misconception that psychedelics can precipitate mental illness, or even suicide. Recently, some professionals have even claimed that using psychedelics can lead to ‘hallucinogen persisting perception disorder’ (HPPD), wherein a person experiences extreme distress related to recurring hallucinations long after having used a psychedelic drug. Nonetheless, almost all claims regarding the dangers of psychedelic drug use have been based largely on speculation and isolated case studies and should therefore be examined with caution.

A study recently published in the Journal of Psychopharmacology by Johansen and Krebs (2015) sought to examine the risk of mental health problems that has been presumed to be associated with psychedelic drug use. Using the National Survey on Drug Use and Health, Johansen and Krebs collated data from participants 18 years and older taken from surveys administered from 2008 to 2011. In total, the sample consisted of 135,095 respondents, 19,299 of whom reported lifetime psychedelic substance use, defined as having used mescaline or peyote, LSD, or psilocybin at least once. Johansen and Krebs used four different self-reported criteria as indicators of mental health concerns over the previous year: severe psychological distress, access to mental health and addictions treatment, suicidal ideation, and depression or anxiety. In their analysis, they controlled for an extensive number of variables related to socioeconomic status, psychological functioning, and drug taking.

Among the 19,299 participants who reported psychedelic use, Johansen and Krebs were unable to link psychedelic drug use to severe psychological distress, access to mental health and addictions treatment, suicidal ideation, or depression and anxiety. Nor did their findings support the incidence of HPPD that had previously been reported in case studies. Interestingly, those who reported lifetime psychedelic drug use actually tended to be less likely to have been admitted to psychiatric care over the previous year. In addition, Johansen and Krebs reported that psychedelic drug users were more likely to be younger, white, unmarried males with slightly higher levels of education and income, to have personality styles that endorsed risky behaviour and other drug use, and to have experienced a period of depression before 18 years of age.

Johansen and Krebs do acknowledge that it is impossible to make causal inferences from a study of this kind. However, the lack of association between mental health problems and psychedelic drug use across a sample of this size does seem to contradict common beliefs that these drugs are dangerous. Hopefully, future experimental studies can expand on these findings.

 
Johansen, P., & Krebs, T. S. (2015). Psychedelics not linked to mental health problems or suicidal behavior: A population study. Journal of Psychopharmacology, 29(3), 270-279. doi:10.1177/0269881114568039

What Neural Systems are Involved in Lucid Dreaming?


 

For most people, voluntary action, critical thinking, and an awareness of the mind’s present state are either very limited or completely absent while dreaming. Interestingly, however, those engaged in a lucid dream are able to access these metacognitive skills, which can then be used to influence the content of their dreams. In a recent study published in the Journal of Neuroscience, Filevich et al. (2015) identified some specific neural mechanisms that may underlie the experience of lucid dreaming. They suggest that lucid dreaming may actually be a particular form of metacognition that utilizes the same neural systems engaged during thought monitoring while awake.

In the first part of the study by Filevich et al. (2015), participants completed a questionnaire that evaluated their ability to engage in lucid dreaming. Next, the participants were divided into two groups based on their questionnaire scores: a high-lucidity group and a low-lucidity group. The researchers then used structural magnetic resonance imaging (MRI) and functional MRI (fMRI) to assess the neural mechanisms implicated in lucidity. All participants performed both a thought-monitoring task and a non-monitoring task while inside the MRI scanner. During both tasks, participants viewed an analogue scale displayed on an overhead screen. In the thought-monitoring task, participants were asked to indicate the degree to which they felt their thoughts were externally or internally focused by manipulating a cursor on the scale. In the non-monitoring task, participants were simply asked to align the cursor with a circle that appeared on the screen.

Filevich et al. (2015) found that the high-lucidity group showed greater volumes of grey matter in areas BA9/10 of the prefrontal cortex, along with heightened blood-oxygen-level-dependent (BOLD) activity in these areas during the fMRI tasks. Areas BA9/10 are known to be involved in metacognitive processes, including working memory, interpreting abstract information, construing others’ mental states, and multitasking. Of particular interest to the study of lucid dreaming, BA9/10 has also been shown to facilitate self-observation, and it has been postulated to play a role in shifting between states of consciousness that involve internally- vs. externally-directed cognitive processes.

Filevich et al. (2015) also noted that the high-lucidity group showed increased grey matter within the left hippocampus. Because the hippocampus is known to play a role in one’s ability to recall dreams, the authors suggest that it may be implicated in dream lucidity. That is, perhaps a greater hippocampal volume accounts for a dreamer’s ability to know with certainty that they are indeed dreaming.

As this study appears to be the first of its kind to assess lucidity and metacognitive ability by evaluating neural mechanisms, future research will help validate and expand upon the findings described here.

 

Filevich, E., Dresler, M., Brick, T. R., & Kühn, S. (2015). Metacognitive mechanisms underlying lucid dreaming. The Journal of Neuroscience, 35(3), 1082-1088. doi:10.1523/JNEUROSCI.3342-14.2015

What is Synesthesia?

Jennifer Aniston Neurons: What’s in a Name?

It is not often that I encounter a scientific article that excites me, but the article published by Rodrigo Quian Quiroga in Nature Reviews Neuroscience (August 2012, Volume 13, 587-597) did just that. In his article, Quiroga describes his discovery of Jennifer Aniston neurons in the medial temporal lobes of human patients. These neurons are interesting to biopsychologists because they likely play a major role in certain kinds of human memory, but I think that just about anybody would find them interesting: Few things are more fascinating to humans than the human brain, and Jennifer Aniston neurons are particularly cool.

Quiroga got the opportunity to record neural activity from neurons in the medial temporal lobes (hippocampus, amygdala, medial temporal cortex) of patients who were suffering from severe epilepsy. Prior to the surgical removal of their epileptic foci, electrodes were implanted in their medial temporal lobes to precisely locate the foci. This provided Quiroga and his colleagues with the opportunity to study the response patterns of these neurons.

Remarkably, these neurons responded to concepts rather than to the particulars of stimuli. For example, one of the first neurons to be investigated responded to 7 different photos of Jennifer Aniston, but did not respond to photos of 80 other people or objects. Many more neurons of this type have now been identified: for example, neurons have been identified that selectively respond to Halle Berry, the Sydney Opera House, Diego Maradona, and Mother Teresa. Remarkably, these neurons also responded to the printed and spoken names of the particular concepts that they encoded, not just photographs of them. These various human temporal lobe concept neurons have been termed “Jennifer Aniston neurons” after the first such neuron to be discovered.

Quiroga emphasized two points about the selectivity of various Jennifer Aniston neurons. First, it is clear that there is more than one neuron in each brain encoding a particular concept. It has been estimated that humans have concepts for 10,000 to 30,000 things, and if only one neuron responded to each, it is unlikely that the particular neuron that responded to a concept could be identified during the time allowed for testing. Second, each Jennifer Aniston neuron is not totally selective. For example, it was subsequently discovered that the neuron that responded to Jennifer Aniston also responded to another person: Lisa Kudrow, Jennifer Aniston’s co-star in the well-known television series, “Friends.”

The discovery of Jennifer Aniston neurons clearly ranks as an important neuroscientific discovery: it is a striking example of how experience influences brain function, and it provides important clues about how the human brain retains concepts. Also, the idea that a single neuron can respond reliably to the image of a particular person or to the sound or sight of her name is thought provoking–a good topic of conversation among friends.

Be that as it may, I must admit that the name itself played an important role in attracting my interest in Jennifer Aniston neurons: Not many neuroscientific phenomena are named after television or movie personalities. Using Jennifer Aniston’s name for human medial temporal lobe concept neurons is good fun—and I have never found fun and good science to be mutually exclusive. More importantly, this name is easy to remember and immediately reminds everyone of the observations that led to the discovery. Thus, generations of students and scientists will benefit from the name.

I wonder whether Jennifer Aniston knows that an important class of human neurons is named after her. If she does, does she fully appreciate their significance?

Non-Obvious Contributors to the Obesity Epidemic

In the US, the prevalence of obesity (the World Health Organization defines ‘obesity’ as a body mass index, or BMI, of greater than 30) has almost tripled in the last fifty years.  In 1960, 13% of the US population was obese.  By 2010, that figure was 36%.  It is estimated that by 2015, there will be 700 million obese people worldwide (Drummond & Gibney, 2013; but note that the rate of increase in obesity rates is a matter of dispute—see Rokholm, Baker, & Sørensen, 2010).  There is great concern over this rise in obesity rates because of the known deleterious effects of obesity on health and well-being.  For example, a general finding is that individuals who are obese are at a higher risk of mortality (even if their blood pressure and blood cholesterol are normal; see Kramer, Zinman, & Retnakaran, 2013).
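As a side note on the WHO definition mentioned above: BMI is simply weight in kilograms divided by height in metres squared. A minimal sketch, using the standard WHO adult cutoffs and an invented example person:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """Standard WHO adult BMI categories."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    return "obese"

# Invented example: 95 kg at 1.75 m tall.
print(round(bmi(95, 1.75), 1))      # 31.0
print(who_category(bmi(95, 1.75)))  # obese
```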

As you read in Chapter 12 of Biopsychology, researchers have been pondering the reasons for the obesity epidemic.  The ‘big two’ used to be held up as the major reasons for the epidemic: (1) a decrease in physical activity, and (2) an increase in consumption of unhealthy foods.  However, there has been growing dissatisfaction with these two explanations, in large part because the evidence that they play a significant role in the rise in obesity rates is weak (see McAllister et al., 2009).  Accordingly, researchers have been looking elsewhere for explanations, and a long list of potential contributors to the epidemic has now developed.  The purpose of the present post is to provide a short list of some of the more theoretically interesting alternate explanations for the obesity epidemic: non-obvious contributors.  When considering the following putative contributors, it is important to emphasize that no single contributor is likely to account for the entirety of the obesity epidemic.  Rather, it is more likely that many contributors have collectively created the ‘perfect storm’ that is the obesity epidemic.

Decreases in Tobacco Smoking

The consumption of tobacco products has been in steady decline since the early 1980s (see Baum, 2009), and this decline closely aligns with the rise in obesity rates (Baum, 2009).  That is, there seems to be a fairly strong inverse relationship between cigarette smoking rates and obesity rates.  What remains to be seen is whether that alignment holds for different countries.  That is, some countries have had declines in smoking rates much more recently: Does the rise in obesity in those countries follow a similar timeline?

Pharmaceutical Iatrogenesis

Iatrogenesis means physician-induced illness.  Pharmaceutical iatrogenesis, in the context of the obesity epidemic, means obesity that is caused by prescription medications.  For many drugs, there has been a rise in the numbers of prescriptions since the late 1970s and early 1980s, and many of those drugs have weight gain as a common significant side effect.  For example, beta blockers, which are commonly prescribed for the treatment of hypertension, have weight gain as a side effect.  Some antidepressant medications (see Serretti & Mandelli, 2010), and many antipsychotic medications, also have weight gain as one of their significant side effects.  What is of particular note with this theory is that the timeline for the rise of many weight-gain-inducing prescription medications closely follows the timeline for the rise in obesity rates (see McAllister et al., 2009).

Epigenetics and Transgenerational Epigenetic Effects

Cloned mice, although usually born with a normal body weight, often develop adult-onset obesity.  This finding is relevant to the present discussion because the process of cloning results in epigenetic abnormalities (see McAllister et al., 2009), and thus highlights the fact that epigenetic mechanisms can have an impact on body weight.

Obesity research has actually been a major contributor to our knowledge base surrounding epigenetic mechanisms, and has been an especially rich source of information regarding transgenerational epigenetic effects (see Youngson & Morris, 2012).

Some of the data supporting the contribution of transgenerational epigenetic effects are epidemiological.  For example, data from Sweden indicate that food availability during the grandfathers’ early life is positively correlated with the risk of diabetes and cardiovascular disease, as well as mortality, in grandsons (see Karatsoreos et al., 2013).

A recent review by Karatsoreos et al. (2013) highlighted several manipulations that were associated with a transgenerational epigenetic obesity effect in rodents.  For example, Karatsoreos et al. reported on the work of Jimenez-Chillaron et al. (2009), which showed that, in mice, maternal consumption of a high fat diet (from the time of preconception to the weaning period) leads to an increased body length and reduced insulin sensitivity in offspring and grand-offspring.  Although this is not in itself evidence of the transmission of obesity to the grand-offspring, it does highlight the fact that what an organism consumes during its lifespan can have a direct effect on the physiology of its children and grandchildren, and so on.  It seems what we choose to eat has much grander implications than we once thought.

Increase in Marijuana Consumption

We have all heard of one side effect of marijuana consumption: the craving for high-calorie foods (the munchies).  Cannabis sativa is known to increase appetite in animals (see Kirkham, 2009).  Could increases in the consumption of marijuana account for the rise in obesity rates?  Relating to the previous topics of epigenetics and transgenerational epigenetic effects, there is now preliminary evidence of a transgenerational epigenetic effect of THC consumption: Hurd and colleagues have provided preliminary data showing that the offspring of parents exposed to THC (raised by drug-naive surrogate parents) had impaired motivation, increased anxiety/compulsive behaviours, and increased body weight (see Karatsoreos et al., 2013).

Other Contributors and Some Potential Solutions

A number of other contributors have been noted besides the ones listed here.  For a relatively comprehensive review, see the reviews by Zinn (2010) and McAllister et al. (2009).

For an excellent recent review article that discusses potential interventions for the obesity crisis, see the paper by Freedman (2011).

References

Baum, C. L.  (2009).  The effects of cigarette costs on BMI and obesity.  Health Economics, 18, 3-19.

Drummond, E. M., & Gibney, E. R.  (2013).  Epigenetic regulation in obesity.  Current Opinion in Clinical Nutrition and Metabolic Care, 16, 392-397.

Flegal, K. M., Kit, B. K., Orpana, H., & Graubard, B. L.  (2013).  Association of all-cause mortality with overweight and obesity using standard body mass index categories.  JAMA, 309, 71-82.

Freedman, D. H.  (2011).  How to fix the obesity crisis.  Scientific American, 304, 40-47.

Jimenez-Chillaron, J. C., …, Patti, M. E. (2009).  Intergenerational transmission of glucose intolerance and obesity by in utero undernutrition in mice.  Diabetes, 58, 460-468.

Karatsoreos, I. N., Thaler, J. P., Borgland, S. L., Champagne, F. A., Hurd, Y. L., & Hill, M. N.  (2013).  Food for thought: Hormonal, experiential, and neural influences on feeding and obesity.  Journal of Neuroscience, 33, 17610-17616.

Kirkham, T. C. (2009).  Cannabinoids and appetite: food craving and food pleasure.  International Review of Psychiatry, 21, 163-171.  doi: 10.1080/09540260902782810.

Kramer, C. K., Zinman, B., & Retnakaran, R. (2013).  Are metabolically healthy overweight and obesity benign conditions?  Annals of Internal Medicine, 159, 758-769.  doi: 10.7326/0003-4819-159-11-201312030-00008.

McAllister, E. J., … Allison, D. B.  (2009).  Ten putative contributors to the obesity epidemic.  Critical Reviews in Food and Nutrition, 49, 868-913.

Rokholm, B., Baker, J. L., & Sørensen, T. I. A.  (2010).  The levelling off of the obesity epidemic since the year 1999—a review of evidence and perspectives.  Obesity Reviews, 11, 835-846.

Serretti, A., & Mandelli, L.  (2010).  Antidepressants and body weight: A comprehensive review and meta-analysis.  Journal of Clinical Psychiatry, 71, 1259-1272.

Youngson, N. A., & Morris, M. J.  (2012).  What obesity research tells us about epigenetic mechanisms.  Philosophical Transactions of the Royal Society B Biological Sciences, 20110337. http://dx.doi.org/10.1098/rstb.2011.0337

Zinn, A. R. (2010).  Unconventional wisdom about the obesity epidemic.  American Journal of the Medical Sciences, 340, 481-491.

The Wondergame

There has been a fair amount of hype surrounding the potential for video games to enhance cognitive performance. However, much of the previous work has focused on how these games affect simpler forms of cognition, such as visual abilities, and considerable debate still exists about whether playing video games can actually produce meaningful benefits.

This past fall, researchers from the Gazzaley Lab at UCSF reported in Nature that playing a simple, multi-task video game (Neuroracer) for just twelve hours over the course of a month could help elderly adults to selectively enhance their multi-tasking abilities. If replicated, these results could be extremely useful, as they would suggest a readily available strategy for combating cognitive decline in the elderly. Likely for this reason, the September cover of Nature suggests, tongue-in-cheek, that this finding is a “game changer”, and when the study was reported in the New York Times, one MIT neuroscientist apparently stated that playing the game was powerful enough to make older individuals “cognitively younger”. In contrast, one author of the study, Gazzaley, provides a cautionary note: “Video games shouldn’t now be seen as a guaranteed panacea”.

What exactly is this wondergame?

Neuroracer is a relatively simple 3D, first-person driving game, which requires participants to stay on the road while they respond to signs flashed up on a computer screen. Participants might be asked to complete a ‘sign discrimination task’ without driving, a ‘drive only’ task, or a combined ‘multi-task’ condition. While Neuroracer lacks the graphics and acceleration of popular games like Forza or Gran Turismo, driving along the track at high speed still looks fairly challenging (see here for a high-speed demo; a WSJ interview displays clips of lead author Joaquin Anguera playing the game from 1:16-1:59).

Using this setup, Anguera et al. had participants complete each of the individual and combined task conditions, which they used to calculate a measure of performance decline, or ‘multi-tasking cost,’ associated with completing both individual tasks at once. In other words, how much did participants’ ability to correctly identify signs flashed up on the screen decline when they had to drive the Neuroracer car at the same time?
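A cost of this sort is typically expressed as a percentage decline relative to single-task performance. The sketch below illustrates the general idea with invented scores; it is not necessarily the paper's exact formula:

```python
def multitask_cost(single_task_score: float, multi_task_score: float) -> float:
    """Percentage decline in performance when the second task is added,
    relative to single-task performance."""
    return (single_task_score - multi_task_score) / single_task_score * 100

# Invented sign-discrimination scores (e.g., correct responses per block):
# a participant scores 64 alone but only 48 while also driving.
print(multitask_cost(64, 48))  # 25.0
```

On this metric, a 0% cost means adding the driving task had no effect, and larger values mean greater interference, which is the quantity reported to rise linearly with age.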

Multi-task performance declines with age

As a first proof-of-concept study, the researchers recruited 174 healthy participants (aged 20-79) for a single day of trials. There were about 30 participants per decade of life, and (elderly) participants were screened for cognitive, psychiatric, and motor deficits. Participants completed both individual versions of the game during a 30 min training session, and the difficulty of each task was varied to match each individual’s performance to approximately 80% accuracy.
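Holding every participant near 80% accuracy implies some adaptive difficulty procedure. The study's exact algorithm is not described here, but a weighted up/down staircase is one standard way to achieve this; the sketch below (with an invented psychometric curve) shows how asymmetric step sizes make difficulty settle at the level where accuracy roughly equals the target:

```python
import math
import random

def run_staircase(p_correct_at, level=5.0, step=0.5, trials=400, target=0.8):
    """Weighted up/down staircase: difficulty rises after correct answers
    and falls after errors. With up = step*(1-target) and down = step*target,
    the level settles where p(correct) ~= target, because drift is zero when
    up * p = down * (1 - p), i.e. p = down / (up + down) = target."""
    up = step * (1 - target)   # small increment after each correct response
    down = step * target       # larger decrement after each error
    for _ in range(trials):
        correct = random.random() < p_correct_at(level)
        level = level + up if correct else max(0.0, level - down)
    return level

def p_correct_at(level):
    # Invented psychometric curve: higher levels are harder (lower accuracy).
    return 1 / (1 + math.exp((level - 8) / 1.5))

random.seed(1)
final_level = run_staircase(p_correct_at)
print(round(final_level, 1))  # settles near the level where accuracy is ~80%
```

The same mechanism explains the "handicaps" mentioned below: older participants simply settle at easier levels of each component task.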

In the critical multi-task trial, the researchers found that the multi-task cost increased linearly with increasing age (from, on average, ~26% cost for individuals in their 20s to >60% cost for individuals in their 70s), despite the fact that the older cohorts had large handicaps on either the ‘drive’ or ‘sign’ tasks.

Overall, this first trial provides good evidence that Neuroracer can be used to detect differences in multi-tasking performance associated with age.

Neuroracer training

As part of their main study, the researchers also sought to investigate whether long-term training on Neuroracer could improve multi-task performance.

They recruited an additional cohort of 482 older adults (aged 60-85), who were screened on a large battery of tests. After screening, 60 participants were randomized across three training conditions (multi-task, both single tasks, or no training), and 46 participants performed well enough on the Neuroracer training tasks to be retained for the entire study. Participants were trained on the task at home, three hours per week for one month (12 hours total). Difficulty levels were adjusted to individual performance throughout training.

When performance was assessed one month after the start of training, the researchers found that the handicap for each group had been eliminated: the multi-tasking cost had declined in the single-task training group (average ~40% cost) and had virtually disappeared in the multi-task training group (average ~10% cost). This level of performance was nearly as good as, or better than, that of an untrained group of younger adults (ages 20-29, average cost ~24%). Moreover, at a six-month follow-up test, this improved performance was maintained only in the multi-task training group.

This result presents pretty clear evidence that training on Neuroracer leads to improved performance on Neuroracer, which is pretty much expected. The only surprising finding here is that only the multi-task group sustained improvements, even though both training groups practiced the same tasks. This argues that there was a selective multi-task benefit as a result of this specific kind of training.

Neuroracer multi-task training alters brain activity

The researchers also used event-related potentials (ERPs) to assess multi-task performance. The ERP measures were time-locked to moments when a sign was presented while participants were driving, and were used to assess theta power (brain wave magnitude) over the medial prefrontal (mPF) cortex as well as frontal-posterior (FP) theta coherence (correlated brain activity). Each of these activity measures has previously been related to cognitive control performance, with theta power believed to reflect reduced brain activation (i.e., possibly a marker of increased efficiency).
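For readers unfamiliar with the term, 'theta power' is simply the spectral power of an EEG signal in the roughly 4-8 Hz band. The toy sketch below uses a synthetic signal rather than real EEG and a deliberately naive method (real analyses use windowed FFTs and artifact rejection), just to make the quantity concrete:

```python
import math

FS = 250            # sampling rate in Hz (an arbitrary choice for this sketch)
THETA = (4.0, 8.0)  # conventional theta band, in Hz

def band_power(signal, fs, band, df=1.0):
    """Crude spectral power in `band`: sum of squared amplitudes from a naive
    DFT (correlation with sine/cosine) at frequencies band[0], band[0]+df, ...
    Fine for a demonstration, not for real EEG work."""
    n = len(signal)
    power, f = 0.0, band[0]
    while f <= band[1]:
        c = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
        s = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
        power += (c * c + s * s) / n ** 2
        f += df
    return power

# Synthetic one-second epoch: a strong 6 Hz (theta) rhythm
# plus a weaker 20 Hz rhythm.
epoch = [math.sin(2 * math.pi * 6 * i / FS) + 0.3 * math.sin(2 * math.pi * 20 * i / FS)
         for i in range(FS)]

# Theta-band power dominates the 15-25 Hz band for this signal.
print(band_power(epoch, FS, THETA) > band_power(epoch, FS, (15.0, 25.0)))  # True
```

Coherence, the other measure used in the study, additionally asks how consistently theta activity at two scalp sites rises and falls together, rather than how strong it is at one site.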

Before any training had occurred, elderly participants showed lower mPF theta power and FP theta coherence during Neuroracer multi-tasking than an untrained cohort of younger adults. After one month of practice, both measures had improved, although the improvement was only significant for participants who had completed the multi-task training. Further, the improvement in mPF theta power was correlated with Neuroracer performance at six months in the multi-task training group only (r = .76); FP coherence does not appear to have correlated with six-month performance.

Overall, this finding suggests that one-month improvements in Neuroracer multi-tasking performance were correlated with increased efficiency of processing while performing this task.

Does training generalize across tasks?

Of course, the bigger question here is not whether training can improve performance on the same task (a ubiquitous phenomenon), but whether training on one task can lead to performance improvements on another (a “transfer” of benefits). To test this question, the authors had participants complete a number of “cognitive control” tasks, before and after training. Six key tasks assessed working memory, attention, “dual-tasking” and interference from distraction, while two additional tasks assessed “speed of processing”, a more generic performance measure that was not expected to be specifically affected by multi-task training.

In support of task-general cognitive improvements, the authors found that the multi-task training group improved significantly more on 2 of the 6 “cognitive control” tasks (the “test of variables of attention”, or TOVA, and a “working memory task”, or WMT) than either the single-task training or no-training control groups. The other cognitive tasks, assessing susceptibility to distractors, “dual-tasking” and attention, showed no significant differences between groups. Additionally, TOVA performance was correlated with the change in mPF theta power while playing Neuroracer in the multi-task training group (r = .56). However, there was no relationship between TOVA performance and theta coherence, nor any relationship between WMT performance and any theta measure.

Overall, the authors find evidence that some tasks are enhanced following multi-task training only, but it isn’t entirely clear whether these “selective” enhancements reflect generalized cognitive improvement or simply improvement on the cognitive control tasks most similar to the skills practiced while playing Neuroracer.

But does it actually work?

Overall, this is a solid study. It appears to have been well designed, and the researchers provide convincing evidence that Neuroracer can be used to document age-related differences in multi-tasking ability. The long-term study also provided reasonably strong evidence that practicing the game in an adaptive mode, with progressively increasing difficulty, leads to better performance on the game and to changes in brain activity while playing it.

The bigger question here is whether these performance improvements affect older adults’ abilities on a wide variety of tasks—in other words, general cognitive enhancement. The data to support this interpretation are certainly suggestive, but a little mixed.

For one, it’s unclear that any of the brain measures of “efficiency of processing” during multi-tasking are good indicators of performance on the cognitive control tasks. There were two brain measures and six tasks, so that’s 2×6 = 12 possible correlations. The researchers found one, so there’s not much evidence that the neural measures index general cognitive improvement. Instead, these measures probably reflect improvements in Neuroracer performance itself.
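To put a number on that worry: if each of the 12 correlations were tested at the conventional alpha of .05, the chance of at least one spurious “significant” result is already close to a coin flip. A back-of-the-envelope sketch, assuming the tests are independent (which they aren't exactly, but it conveys the scale of the problem):

```python
# Chance of at least one false positive across multiple independent tests
alpha = 0.05
n_tests = 2 * 6  # 2 brain measures x 6 cognitive control tasks
p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(round(p_any_false_positive, 2))  # → 0.46
```

So finding exactly one significant correlation out of twelve is roughly what chance alone would predict.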

A second point is that while the authors clearly show an improvement on two of the cognitive control tasks, they don’t report all of their data in the analysis, which focuses mostly on reaction time. It would have been nice to see the accuracy data as well; even if these data were analyzed separately, an ideal practice would be to regress accuracy against reaction time to fully control for any speed/accuracy trade-offs. Moreover, the authors are a little sneaky in the main paper: they analyze multiple “levels” of the TOVA and WMT in their main statistical model (most of which are significant), but ignore the multiple levels of several other cognitive control tasks (none of which are significant). This is unlikely to have affected their significant interaction effect, but it could have affected their follow-up analyses, and it certainly serves to “beef up” the appearance of their results.
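The regression approach mentioned above is straightforward. Here is a minimal sketch using simulated data (the paper’s raw data are not available, so the numbers are invented purely to illustrate the technique): fit accuracy as a function of reaction time, and take the residuals as accuracy scores with the speed/accuracy trade-off removed.

```python
import random
import statistics

random.seed(0)

# Simulated data: slower responses (higher RT) tend to be more accurate,
# i.e. a built-in speed/accuracy trade-off
rt = [random.gauss(500, 50) for _ in range(100)]            # reaction times (ms)
accuracy = [0.60 + 0.0005 * r + random.gauss(0, 0.02) for r in rt]

# Ordinary least-squares fit of accuracy ~ RT
mean_rt = statistics.mean(rt)
mean_acc = statistics.mean(accuracy)
slope = (sum((r - mean_rt) * (a - mean_acc) for r, a in zip(rt, accuracy))
         / sum((r - mean_rt) ** 2 for r in rt))
intercept = mean_acc - slope * mean_rt

# Residuals: accuracy with the speed/accuracy trade-off regressed out
adjusted = [a - (intercept + slope * r) for r, a in zip(rt, accuracy)]

# By construction, the residuals no longer covary with RT
residual_cov = sum((r - mean_rt) * d for r, d in zip(rt, adjusted)) / len(rt)
print(abs(residual_cov) < 1e-9)  # → True
```

Group differences in the adjusted scores would then reflect accuracy changes that cannot be explained by participants simply slowing down.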

Finally, the most important question is whether the results of this small, preliminary study will generalize to a larger, independent replication cohort. It is common for studies of this sort to report results that appear robust, only for later work to show that the evidence was weaker than initially believed.

There are a few reasons for this. For one, small samples tend to be highly variable, and so don’t give us a great picture of how things work in the wider population. Also, without pre-specified endpoints, there are literally dozens of “positive” outcomes to choose from (in this case, 6 tasks, with up to 15 difficulty levels, 2 outcome measures, etc.). The authors were actually fairly disciplined in their analysis, but still far from perfect.

Moreover, while statistical significance tells us whether what we were looking for is present (did Neuroracer enhance performance?), it doesn’t tell us by how much. The authors of this paper report that the game has a big effect, mostly because they find “medium to large effect sizes (all Cohen’s d’s: 0.50–1.0)”. However, it is well known that effect-size estimates are inflated when power is low (as tends to be true in studies with small sample sizes, and in most neuroscience research).
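This inflation (the so-called “winner’s curse”) is easy to demonstrate by simulation. The sketch below is not a reanalysis of the paper’s data; it just simulates many small two-group experiments where the true effect is small (d = 0.2) and shows that the experiments that happen to cross the significance threshold report much larger effects:

```python
import random
import statistics

random.seed(1)

TRUE_D = 0.2     # true standardized effect (small)
N = 15           # per-group sample size (low power)
significant_ds = []

for _ in range(2000):
    group_a = [random.gauss(TRUE_D, 1) for _ in range(N)]
    group_b = [random.gauss(0, 1) for _ in range(N)]
    pooled_sd = ((statistics.variance(group_a)
                  + statistics.variance(group_b)) / 2) ** 0.5
    d = (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
    t = d * (N / 2) ** 0.5          # two-sample t statistic
    if abs(t) > 2.05:               # roughly p < .05 for df = 28
        significant_ds.append(abs(d))

# Among the "significant" experiments, the average observed effect size
# is far larger than the true effect of 0.2
print(round(statistics.mean(significant_ds), 2))
```

With these parameters, every “significant” experiment must report |d| above ~0.75, so the published medium-to-large effect sizes are exactly what low power would produce even from a modest true effect.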

Thus, some caution should be taken before interpreting these results to suggest that Neuroracer can provide a big boost in brain power. This study presents evidence that tasks such as this are “promising”, but it doesn’t provide evidence to truly answer this bigger question. Additional, well-powered, hypothesis-driven studies are necessary to tell us whether Neuroracer should serve as an ideal holiday present for your aging relatives.

Daniel Fishkin: Tinnitus Installation

Daniel Fishkin’s Tinnitus installation was just featured in the scientific journal Nature.  Read about it here (you need a subscription to read the entire article).

To view an installation comparable to the one featured in Nature, you can visit this page.  If you want to listen to an audio snippet, you can do so below.

I personally found the audio experience quite compelling: I accidentally hit play and listened to it for a few minutes without realizing it, all the while wondering what was going on with my hearing.

Seizure Types

Welcome to My Blog!  This particular blog post is related to, or expands on, materials covered in my book: Biopsychology (9th Edition).

 

Several different seizure types were described in Chapter 10 of Biopsychology (9th Edition).  The purpose of this blog post is to provide you with one or more video examples of each seizure type.

Partial Seizures

Let’s begin with partial seizures.  If you recall, partial seizures are seizures that do not involve the entire brain.  There are two major categories of partial seizures: simple partial seizures and complex partial seizures. These two categories of partial seizures are illustrated in the videos within the following two subsections of this blog post.

 

Simple Partial Seizures

The little girl in the following video is displaying a simple partial seizure.  Her seizure involves unilateral contraction of the muscles in her face. You will notice in the video that she shows no indication of any disruption of consciousness.

 

Complex Partial Seizures

The girl in the following video is displaying a complex partial seizure.

 

As is the girl in this next video.

 

Generalized Seizures

If you recall from Chapter 10, generalized seizures are seizures that involve the entire brain.  Chapter 10 described two major categories of generalized seizures: tonic-clonic seizures and absence seizures.  These categories of generalized seizures are illustrated in the videos within the following two subsections.

 

Tonic-clonic Seizures

The woman in the following video displays a tonic-clonic seizure that is preceded by an aura.  You will notice that, based on her experience of the aura, she warns the nursing staff of her impending seizure.

 

Seizures are not restricted to humans–they occur in many species.  The following video illustrates a tonic-clonic seizure in a rat.

 

Absence Seizures

In the following video, a little boy displays an absence seizure.  Notice the vacant look and fluttering eyelids.

 

The last video in this blog post is provided to illustrate that, even though our seizure classifications appear to be straightforward, many seizures do not fit neatly into one category or another.  Moreover, sometimes a particular seizure can start off resembling one type and then evolve into what appears to be another type (e.g., a complex partial seizure might evolve into a tonic-clonic seizure).  The following video shows a little girl who is experiencing what appears to be a complex partial seizure with tonus in her left arm and hand.

 

 

Additional Resources

<coming soon>

References and Additional Readings

<coming soon>