#SfN13 Your coffee habit? Don’t fight it, embrace it.

Poster 236.10 The daily fix: habitual caffeine doses improve performance in attention and executive functions irrespective of food intake. JL Mariano, JCF Galduroz, S Pompeia. Univ. Federal De Sao Paulo, Brazil


Coffee and I: 14+ years of history and counting.  Source: http://caravancoffee.com/

Here’s another reason to love coffee. Researchers from Brazil found that morning coffee consumption not only keeps you awake and alert, but also improves performance on cognitively demanding tasks. That is, if you’re already a habitual drinker.

People often start their day with a cuppa and breakfast. Carbohydrates in food give the brain a shot of glucose, which has been shown to positively affect cognition. Since caffeine also stimulates the brain, the researchers wondered whether stacking the two might lead to even better performance.

They recruited 58 young, healthy coffee drinkers. After an overnight fast, half of the volunteers drank their usual cup (or two) of coffee, taking in 25-300 mg of caffeine. The other half drank a decaf placebo. Some ate a cereal bar to chase the coffee; others didn’t. Thirty minutes later, researchers bombarded the volunteers with a series of mentally challenging tasks.

In one, volunteers tried to continuously call out numbers between 1 and 9 in a random fashion for 2 minutes straight – randomness and low repetition are key. Here, volunteers had to actively inhibit their natural preference for number patterns (try it, it’s hard). In another test, for long-term memory and recall, they named as many types of musical instruments or animals as possible in one minute. A whack-a-mole-like game (hitting a button when a light comes on) tested their reaction speed, while a zoo-navigating task tested for efficient goal-directed planning.
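(If you’re curious how one might quantify “randomness and low repetition”, here’s a minimal Python sketch of two such indices. The poster’s actual scoring measures aren’t specified here, so treat this as purely illustrative.)

```python
def score_random_generation(digits):
    """Score one random-number-generation trial (digits 1-9).

    Returns two proxies for randomness that speakers must actively
    suppress: the rate of immediate repeats (7-7) and the rate of
    stereotyped counting steps (3-4, 9-8).
    """
    pairs = list(zip(digits, digits[1:]))
    repeat_rate = sum(a == b for a, b in pairs) / len(pairs)
    counting_rate = sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)
    # A truly uniform random sequence gives ~1/9 repeats and ~2/9 counting steps.
    return {"repeat_rate": repeat_rate, "counting_rate": counting_rate}

print(score_random_generation([3, 7, 1, 2, 3, 9, 9, 4, 8, 2]))
# {'repeat_rate': 0.111..., 'counting_rate': 0.222...}
```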

Surprisingly, food did not affect performance on any of these tasks; but coffee certainly did. Coffee drinkers generated random numbers faster and repeated themselves less than controls. Their performance was also more stable within the task, consistent with fewer reports of mental fatigue or weakness after the tests. However, these subjects were consistent coffee consumers, so it’s hard to say whether caffeine upped their smarts in a true nootropic-like manner, or whether the placebo-drinking group simply did worse due to caffeine withdrawal.

But for the average cup-a-day coffee consumer in need of a quick boost in mental power (or looking to fend off that nasty brain fog), it might help to go drink a cup. Scientists and lab techs, I’m looking at you.


PS: Steve Miller, aka neuroscienceDC and fellow official #SfN13 blogger, has a great post on the best time of day to drink coffee – check it out!

The straight dope on rational drug addicts

Crack, dope, ice… One hit, and you’re hooked for life.


Meth: one strike you’re out? Source: http://heisenbergchronicles.tumblr.com/

That’s what the war on drugs has been telling us for years. And for a while neuroscience seemed to back it up. Drugs of abuse stimulate our dopaminergic reward centers, causing a surge of dopamine efflux that changes synaptic transmission, “rewiring” the brain to create intense feelings of craving and drug-seeking behaviours. Lab rats hooked on cocaine will keep pressing a lever for another hit, eschewing food and rest until they die. Addicts beg and steal, enslaved to their drug of choice with a relapse rate as high as 97%.

But 80-90% of people who use methamphetamine and heroin don’t get addicted, and not all ex-addicts relapse. In an unpopular series of studies, collectively called “Rat Park”, rats turned up their noses at free-for-all morphine, preferring instead to socialize with their rat buddies in an enriched environment.

“Drugs have the power to rob us of our free will” – is this scientific fact, or socio-political caricature?

Hart CL et al. (2000) Alternative reinforcers differentially modify cocaine self-administration by humans. Behavioural Pharmacology 11:87-91

The authors recruited 6 experienced crack cocaine smokers and watched how they responded when offered a choice between pharmaceutical-grade cocaine and a $5 cash voucher or a $5 merchandise voucher redeemable at local stores. In other words, they were offered drugs or an alternative reward.

The volunteers were invited to stay at a Clinical Research Facility with TVs, radio and movies for entertainment. They had free access to cigarettes when not in session, but weren’t allowed “extra-curricular” doses of cocaine. At the start of each experimental session, researchers presented the addicts with a voucher indicating what the alternative reward was. Addicts then pressed the spacebar on a keyboard to “work” for a sample hit of cocaine, while blindfolded so that they couldn’t tell the dose.

In subsequent trials, addicts were free to choose the same dose of cocaine as in that sample trial. But they were also offered an alternative reward: in the first four sessions, 5 bucks hard cash; in the last four, a voucher worth 5 bucks that they could trade for merchandise.

As you can see below, at lower cocaine doses (0 and 12 mg), addicts chose to receive the voucher (black) or the money (white) more than half of the time. At higher doses, though, addicts lusted after the cocaine hit 4-5 times out of the 5 trials.

When researchers pooled all the data at various cocaine doses together, they found that out of the available 20 doses of cocaine, the addicts requested to smoke 2 doses FEWER when cash was available compared to when merchandise vouchers were available. In other words, cash is a more competitive alternative reward. However, because the study did not include a condition where the participants smoked cocaine without the availability of either voucher, it’s impossible to say in absolute terms how much either the $5 cash or the voucher decreased cocaine self-administration.
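The pooling itself is just counting choices per condition. A toy sketch of the tally, with invented trial records (the field layout and data below are made up for illustration, not the study’s):

```python
# Hypothetical trial records, NOT the study's data: each entry is
# (dose_mg, alternative_on_offer, chose_cocaine).
trials = [
    (12, "cash", False), (12, "voucher", False),
    (25, "cash", False), (25, "voucher", True),
    (50, "cash", True),  (50, "voucher", True),
]

def doses_smoked(trials, alternative):
    """Count how many available doses were chosen over a given alternative."""
    return sum(chose for _, alt, chose in trials if alt == alternative)

print("chosen vs. cash:   ", doses_smoked(trials, "cash"))     # 1
print("chosen vs. voucher:", doses_smoked(trials, "voucher"))  # 2
```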

All the users in the study were kept abstinent outside the sessions, except for that one tease at the start of each session. According to popular belief, that should have triggered insane cocaine cravings and driven them to choose the drug in subsequent trials regardless of dosage. Instead, when given an alternative to cocaine, the addicts were capable of deciding that a low dose wasn’t “worth it”. They made a rational choice. Presumably the effect size would’ve been larger if the monetary reward had been higher – this was indeed the case in a follow-up study with meth addicts, where the monetary reward was upped to $20.

The study is not without its faults. First, it suffers all the problems of a small sample size, especially limited generalizability. Second, the addicts had to work for cocaine, while the alternative reward was readily available (though one can argue whether pressing a spacebar several times counts as “work”). I would’ve loved to know what would have happened if they were offered the money first, then given the choice to keep it or spend it on a hit in the lab. And what did the addicts DO with the money after the study – did they use it to buy more drugs?

Nonetheless, the author of this study stresses that neuroscience has a lot to lose by caricaturing addiction as the “one-hit-you’re-done” boogeyman – it essentially takes out all socioeconomic factors and focuses solely on the drug’s pharmacology. This is understandable in one sense – studying drugs in a sterile lab out of context is simple – but perhaps a more useful approach is to understand why some people, but not others, get hooked for life. The personal traits and environmental influences that bias someone towards drug addiction remain largely unknown, though individual differences in cognitive control and early-life stressors definitely play a role.

As Mind Hacks eloquently puts it:

“Nonetheless the research does demonstrate that the standard ‘exposure model’ of addiction is woefully incomplete. It takes far more than the simple experience of a drug – even drugs as powerful as cocaine and heroin – to make you an addict. The alternatives you have to drug use, which will be influenced by your social and physical environment, play important roles as well as the brute pleasure delivered via the chemical assault on your reward circuits.”

Head over there if you’d like to read more about Rat Park and the complexity of addiction.

Hat tip to @Scicurious for news on the lead author of this study, Dr. Carl Hart, who has a book out on the topic – looks like an interesting read.

Hart CL, Haney M, Foltin RW, & Fischman MW (2000). Alternative reinforcers differentially modify cocaine self-administration by humans. Behavioural pharmacology, 11 (1), 87-91 PMID: 10821213

Oxytocin: decreasing cacophony in the brain

TLDR: Oxytocin boosts signal and dampens background buzz in the hippocampus. And probably other brainy parts. How does this relate to its functions in social interactions? Your guess is as good as mine. 

Ah, oxytocin, somehow I just can’t seem to escape you. The media-darling “love” (or not) hormone is back in the news, but in a slightly different way. Previous reports often focused on how it influences behaviour. Now a new paper is offering an enthralling answer to a more basic but crucial question: in terms of neurophysiology, what the hell is it doing in the brain?

Scott F Owen et al (2013) Oxytocin enhances hippocampal spike transmission by modulating fast-spiking interneurons. Nature doi:10.1038/nature12330


Oxytocin, jammin’ it up or droppin’ it down? Source: http://www.rockband.com/

Insert oxytocin primer: it is a hormone-neurotransmitter released from a part of the brain called the hypothalamus. As a hormone, it circulates through the body and promotes childbirth; as a neurotransmitter, it acts on many different parts of the brain by binding to its receptor, and is linked to intimacy, mate bonding, facial recognition and social interaction and – on the flip side – anxiety and fear. It’s a truly “dirty”, aka non-specific, drug.

A key clue stems from a 30-year-old finding that oxytocin increases inhibitory signalling in the hippocampus, a brain region important in aspects of learning and memory. These inhibitory neurons release GABA, a neurotransmitter that dampens the activity of nearby excitatory pyramidal cells. How this relates to oxytocin’s ability to improve information processing at the circuit level was left unexplored.

To tackle this question, researchers eavesdropped on the electrical chatter of neurons residing in a small part of the hippocampus called CA1. Using electrophysiology to measure electrical activity, they looked specifically at the flow of information through this region. As shown in the graph below, when researchers used tiny electrodes to stimulate axons called Schaffer collaterals (“wires” through which electrical signals/information travel) leading into CA1, they activated both excitatory pyramidal cells (blue) and adjacent small inhibitory cells (purple and green); the latter synapse onto the pyramidal cells to lower their excitation.


PV and SOM are two types of inhibitory neurons; there are many more. Info transfer is SC->inhibitory; SC->excitatory; inhibitory->excitatory.

Due to their membrane properties, many neurons are inherently “noisy”, often firing spontaneously without external stimuli. Picking out actual information is then like trying to chat with a specific person at a crowded cocktail party: the signal (the other person’s voice) needs to be strong, and the background noise weak. Researchers first used a microelectrode to zap axons leading to CA1 to create artificial “information” (the black spike diagrammed above).

As you can see from the left graph above, when they concurrently used an oxytocin-like molecule (TGOT) to stimulate oxytocin receptors, excitatory neurons had a much higher probability of transferring the signal (the spike) to the next neuron, and in a more timely fashion (compare TGOT to baseline: spike probability is increased). In other words, it’s as if the neuron became a better relay, so that its downstream receiver had a higher chance of getting the information, increasing fidelity.

Conversely, as shown in the right graph, the spontaneous activity (baseline spikes) of these excitatory neurons decreased. On the circuit level, this is like turning on noise-cancelling headphones: things that you want to hear become clearer, while ambient chatter fades away.
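To make the signal-versus-noise idea concrete, here is a rough Python sketch of how one could quantify the two measures from spike trains – evoked spike probability versus spontaneous firing rate. This is my illustration of the concept, not the authors’ analysis code:

```python
import numpy as np

def spike_stats(spike_times, stim_times, window=0.010, duration=None):
    """Crude signal-vs-noise summary of a spike train.

    A stimulus counts as "transmitted" if at least one spike falls
    within `window` seconds after it (evoked spike probability);
    spikes outside every such window count as spontaneous noise.
    All times are in seconds.
    """
    spike_times = np.asarray(spike_times)
    evoked_mask = np.zeros(len(spike_times), dtype=bool)
    transmitted = 0
    for t in stim_times:
        hits = (spike_times > t) & (spike_times <= t + window)
        transmitted += hits.any()
        evoked_mask |= hits
    p_spike = transmitted / len(stim_times)
    duration = duration or spike_times.max()
    spont_rate = (~evoked_mask).sum() / duration  # spikes per second
    return p_spike, spont_rate

# toy example: 3 stimuli, 2 transmitted, 2 background spikes over 1 s
p, rate = spike_stats([0.105, 0.3, 0.505, 0.8], [0.1, 0.5, 0.9], duration=1.0)
print(p, rate)  # 0.666..., 2.0
```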

How is this happening? Using various chemical blockers and electrophysiology, researchers found more and stronger spontaneous firing in a type of inhibitory neuron called fast-spiking interneurons. This increases the overall inhibitory tone in the circuit, which may partly account for the decrease in random background noise.

But if inhibitory neurons are firing MORE after oxytocin, shouldn’t they also inhibit the signal that’s getting passed along in pyramidal cells? Nope. Fast-spiking interneurons can fire either spontaneously or when information flows through (called “evoked” firing). With a bit more electrophysiological sleuthing, researchers found that although oxytocin increased the spontaneous firing of these interneurons, it actually decreased their evoked, information-stimulated firing. Since the interneurons synapse onto neighbouring excitatory cells, this results in less inhibition of the excitatory cells and a stronger signal.

The TLDR, then, is that oxytocin strengthens information transfer and decreases spontaneous background noise in the hippocampus. Okay, so what? This sharpening of information processing in brain circuits may go awry in conditions like autism. Previous research has found that children with autism have lower oxytocin levels in the bloodstream and, in some cases, mutations in the oxytocin receptor. Lower oxytocin could result in poorer signal/noise processing. Although this study links oxytocin signalling to circuit-level processing, the jury is still out on its potential therapeutic implications, if any.

Beyond autism, oxytocin is hypothesized to increase the salience of social interactions in the “neurotypical” as well. That is, it focuses our attention on different contexts and cues, making them more noteworthy. While it’s a long stretch, it’s still intriguing to wonder whether oxytocin’s enhancement of signal-to-noise ratio may underlie this “spotlight” effect. If so, those with autism may find social interactions nerve-wracking because – at the circuit level – they can’t separate pertinent signals from all the background cacophony.

Owen SF, Tuncdemir SN, Bader PL, Tirko NN, Fishell G, & Tsien RW (2013). Oxytocin enhances hippocampal spike transmission by modulating fast-spiking interneurons. Nature PMID: 23913275

Studying for the LSAT alters connectivity in the brain’s reasoning circuits

(… Or as some will say, “makes you smarter”. Hmmm…)


View from where I’m sitting right now. Fish oil, magnesium, water and LSAT.

The brain never stops changing and adjusting – in structure and connectivity – to new learning and life experiences. Think about the last time you learned a new motor skill. What began as difficult and awkward movements slowly transformed into something automated that you don’t even need to think about. Thank your brain’s malleability (aka “neuroplasticity” in neuro-lingo) for that.

To some, the fact that your brain changes in response to motor learning may seem almost, well, intuitive. What about more abstract types of learning? Something central to intelligence, like the ability to use logic? How would learning to reason change the brain?

Mackey et al (2013) Intensive reasoning training alters patterns of brain connectivity at rest. Journal of Neuroscience, 33(11): 4796-4803

Mackey et al (2012) Experience-dependent plasticity in white matter microstructure: reasoning training alters structural connectivity. Frontiers in Neuroanatomy, 6 (32): 1-9

In search of a real-world setting, researchers turned to LSAT takers. Since the LSAT determines the caliber of law school a student can get into, researchers reasoned that students would be more motivated to study for it than for a comparable laboratory task. For those unfamiliar with the law school admission test, it evaluates a person’s reading comprehension and their ability to identify logical fallacies, strengthen or weaken arguments, and spot assumptions.

Researchers recruited 25 prelaw students enrolled in an LSAT course and imaged their brains at rest with fMRI, both before and after 90 days of intensive reasoning training. As controls, researchers also included 24 age- and IQ-matched prelaw students intending to take the LSAT in the future. The “resting state” of the brain is actually a misnomer. The brain is never completely silent; spontaneous, synchronized fluctuations in blood oxygenation levels occur even when the person is not actively “thinking” about a task. Resting-state fMRI looks for correlations in the activity patterns of different brain parts, which tell us how functionally connected these areas are. When connected, activity spreads like “waves” from one brain area to the other.
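In essence, “functional connectivity” here boils down to correlating BOLD time series between regions. A bare-bones sketch of the idea, with toy data rather than the study’s actual pipeline:

```python
import numpy as np

def functional_connectivity(bold):
    """Resting-state functional connectivity as pairwise Pearson
    correlations between region-of-interest BOLD time series.

    bold: array of shape (n_regions, n_timepoints).
    Returns an (n_regions, n_regions) correlation matrix; higher
    off-diagonal values = more synchronized spontaneous activity.
    """
    return np.corrcoef(bold)

# toy example: 4 "regions", 200 timepoints of simulated signal
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)          # a common slow fluctuation
bold = rng.standard_normal((4, 200))
bold[:2] += shared                         # regions 0 and 1 co-fluctuate
print(np.round(functional_connectivity(bold), 2))
```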

Not surprisingly, the scores of most LSAT-takers improved after training, especially in the reasoning domain (not so much reading comprehension). At the neural level, researchers found significantly increased connectivity between the parietal and prefrontal cortices and the striatum, both within the left hemisphere and across hemispheres.


Left image – purple and indigo: prefrontal cortex; light blue: parietal cortex. Right image – yellow: striatum et al. (aka “midbrain”)

What do these brain regions DO? In one theory, strictly with regard to reasoning, the parietal cortex keeps in mind individual relations between tidbits of information (A->B, B->C), while the prefrontal cortex compares or integrates the available information to reach a conclusion (A->B->C, so A->C!). The striatum – best known for its involvement in motor skill learning and addiction – encodes errors in reward prediction (oh, why didn’t I get this right?) and helps with flexible problem solving (maybe I should change strategies). Note that these are broad generalizations, and each brain region is involved in many, many more functions.

Overall, it seems that intensive, targeted logic training increased synchronized activity between different parts of the reasoning circuit, both within the left brain and between hemispheres. Many people think that reasoning is largely left-brain dominant – although this is a gross simplification – as similar regions in the right brain can also be recruited to support complex reasoning.


Thicker lines mean a larger increase in connectivity. L: left-brain, R: right-brain.
PFC: prefrontal cortex, Str: striatum, Parietal: well, you know.

The researchers next asked which connections were most related to improvement on the LSAT. Looking across 231 different connections, the increase in activity coupling between the left parietal and right prefrontal cortex, shown above, correlated most strongly with improved performance. The data also suggested that how much a student’s test score improved correlated with how much change occurred in their reasoning circuit – however, this correlation did not survive a more stringent statistical test.
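That “more stringent statistical test” matters because testing 231 connections invites false positives. Here’s a sketch of the general mass-univariate approach (my illustration, using a simple Bonferroni correction; the paper’s exact statistics may differ):

```python
import numpy as np
from scipy import stats

def score_related_connections(delta_conn, delta_score, alpha=0.05):
    """Correlate each connection's change in coupling with each
    subject's change in test score, correcting for the number of
    connections examined (Bonferroni).

    delta_conn:  (n_subjects, n_connections) connectivity changes
    delta_score: (n_subjects,) LSAT score changes
    """
    n_conn = delta_conn.shape[1]
    out = []
    for j in range(n_conn):
        r, p = stats.pearsonr(delta_conn[:, j], delta_score)
        out.append((j, r, p, p < alpha / n_conn))  # survives correction?
    return sorted(out, key=lambda row: row[2])     # strongest first
```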

To strengthen their findings even further, researchers used diffusion tensor imaging (DTI) to look at white-matter microstructure before and after LSAT preparation. DTI measures how water diffuses along neural tracts and gives info on the integrity and (to some extent) speed of neural transmission. As with fMRI, the largest changes were found within the parietal cortex and between the right parietal and left prefrontal cortices.
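DTI’s workhorse summary statistic is fractional anisotropy (FA), computed from the eigenvalues of the diffusion tensor at each voxel. The paper’s precise metrics aren’t reproduced here, but FA conveys the idea:

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy from the three eigenvalues of a
    diffusion tensor: 0 = fully isotropic diffusion, approaching 1 =
    diffusion constrained along one axis (a coherent white-matter tract).
    """
    l = np.asarray(evals, dtype=float)
    md = l.mean()  # mean diffusivity
    return np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))

print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0, isotropic
print(fractional_anisotropy([1.7, 0.3, 0.2]))   # ~0.84, tract-like
```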

In sum, 3 months of reasoning practice is enough to change connectivity within the brain, and this change may correlate with how much a person’s test score improves. Given that the study looked at a real-world scenario, it’s pretty incredible that the authors found consistent, significant changes in a cohort of only 25 people. It’s also cool that the authors looked at resting-state connectivity as opposed to brain activity during a reasoning task: this way, they show fundamental changes in the brain at baseline.

However, I would like to know if the change in brain connectivity (“strengthening” of reasoning circuits) actually translates to better performance on a new reasoning task. For example, can “priming” the brain with logic games result in quicker learning of chess? I would also love to know how long these changes last – given the fact that a delay between studying and taking the exam can drastically decrease test scores, I’m guessing that these connectivity changes aren’t there to stay. You’d probably have to regularly flex those parietal-prefrontal muscles to maintain high reasoning abilities.

I’d like to leave you with this question: given the findings of this study, do you think LSAT scores measure a person’s cognitive potential (how well they can reason given the training), or do they simply reflect his or her cognitive past (how much they’ve reasoned previously)?

For my housemate JL, who ends up herp-derped every night due to intensive LSAT cramming.

Note: This post generated quite a lot of discussion on reddit/r/neuro. The user Carcel in particular astutely outlined the limitations of this type of “real-world” fMRI research. I think he/she is spot on, and I’ll share the comments with you below.

I was really just critiquing the article, but honestly I’m not a huge fan of this genre of research either. I might catch some heat for this, but the “pick anything” and scan it method has always seemed somewhat scattershot to me. As such it has all the strengths and weaknesses of an exploratory approach. Sometimes you find something you didn’t expect, e.g. chess grandmasters have recruited parts of the fusiform face area to analyze chess board states, but often you are just casting too wide a net.

Don’t get me wrong, tools such as fMRI are huge advances, but there is a tendency to ignore their limitations. Research like this seems to come with so many caveats that their conclusions just aren’t all that conclusive.

Between the relatively low spatial resolution and poor understand of functional localization there are a lot of claims being thrown around about “emotion circuits” and “reasoning circuits” and “LSAT circuits” that are really unsubstantiated. Worse yet, when you pick a complex task such as LSAT studying, or any other “normal activity”, there is no assurance that every participant’s experience has been the same.

So to sum it up, no there is nothing inherently wrong with this research as long as we are moderate with conclusions and acknowledge the potential errors. The more specific the choice of task and more well understood the regions they are focusing on the better the research is likely to be, but even then there are significant issues that need to be considered.


Mental exercise wards off dementia in old age


Pump it up! Source: http://4.bp.blogspot.com/

We often hear that the brain is like a muscle – use it or lose it. This piece of common wisdom isn’t just an old wives’ tale, but grounded in science: a new study suggests that exercising the brain at any age may help you keep your wits in old age, even when telltale signs of physical damage are present in the brain.

Previous research tells us that engaging in mentally stimulating activities, like reading and writing, slows and reduces late-life cognitive decline. But how? One theory is that working out your brain muscles somehow creates a “reserve” that buffers against age-related cognitive decline; remaining intellectually agile is thus a direct outcome of mental exercise. Another idea is that aging-related wear and tear is the CAUSE of cognitive decline – in other words, losing mental sharpness is a direct consequence of physical damage. Now the question is: can a book a day preserve cognition in old age, even in the presence of telltale signs of dementia-related brain damage?

Wilson RS et al. (2013) Life-span cognitive activity, neuropathologic burden, and cognitive aging. Neurology. doi: 10.1212/WNL.0b013e31829c5e8a

Researchers asked roughly 300 elderly participants how much they had engaged in cognitive pursuits during early and late stages of their lives. The activities queried had very low barriers to entry – stuff like reading books, writing letters, or visiting a library to look up information – as opposed to more niche brain-training programs like Lumosity. The participants then took 19 (!!) standardized tests annually, measuring all aspects of their cognition until they passed away, over an average of roughly 6 years. Following death, their brains were examined for neurological abnormalities, such as micro-strokes, Alzheimer’s disease-related plaques and tangles (pictured below) and Parkinson’s disease-related clumps called Lewy bodies.


Neuropathology in Alzheimer’s disease. Source: http://farm5.static.flickr.com/
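How do you track “all aspects of cognition” across 19 tests? Studies of this kind typically compress the battery into composite z-scores. A minimal sketch of that standard move (toy numbers, not the paper’s pipeline):

```python
import numpy as np

def composite_score(raw_scores, baseline_means, baseline_sds):
    """Fold a battery of tests into one cognition composite:
    z-score each test against the cohort's baseline distribution,
    then average. Annual composites can then be compared over time
    to estimate each person's rate of cognitive decline.
    """
    z = (np.asarray(raw_scores) - baseline_means) / baseline_sds
    return z.mean()

# toy: 19 tests for one visit, with made-up cohort norms
rng = np.random.default_rng(1)
means, sds = rng.uniform(20, 30, 19), rng.uniform(3, 6, 19)
print(composite_score(rng.uniform(15, 35, 19), means, sds))
```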

Unsurprisingly (and unfortunately), 153 people showed clinical signs of cognitive impairment during the follow-up sessions, and roughly 68% of participants had visible signs of neuropathological damage at death. It seems reasonable to assume that different amounts of brain damage led to different degrees of cognitive impairment. However, when researchers tested this hypothesis with a mathematical model, these marks of brain damage couldn’t fully explain why people had different rates of cognitive decline.

Researchers then looked at the effect of late-life mental exercise on cognition. As you can see from the graph below, compared to average late-life activity levels (blue line, 50th percentile), frequent reading and/or writing (90th percentile) slowed mental aging by roughly 30% (green line), while mental inactivity (10th percentile) sped it up by nearly 50%. This means that not working your brain in late life can increase the rate of cognitive decline by almost 1 ½ times. Yikes!

Rates of cognitive decline for frequent (green), average (blue) and infrequent late-life cognitive activity.

Further analysis showed that frequent cognitive activity in youth and early adulthood (6-40 years old) also helped delay memory loss and thinking impairments. When researchers looked at early- and late-life activity together, both were related to changes in cognition, but they differed in WHICH aspect they affected most. Early-life activity was related to changes in working memory – the ability to hold multiple pieces of information in mind and use them to reason things out or plan the next step of action. Late-life activity, on the other hand, was related not just to working memory, but also to plain-old “oh, here’re my keys” memory and to visuospatial ability, i.e., mentally manipulating a static 2D map to find your way home. The benefits of cognitive activity didn’t go away with brain damage: even when the brain showed pathological signs of Alzheimer’s or Parkinson’s, intellectual pursuits still slowed cognitive decline.

Why does engaging the brain preserve its function in later life? Although it seems like an “oh-duh” question, scientists haven’t pinpointed the reason(s) quite yet. Cognitive activity is associated with changes in the brain’s volume, structure and connectivity, all of which may protect brain regions associated with cognitive functions in later life. As of now, we still know very little about what goes on at the neural circuit and neuronal levels.

Regardless of mechanism, this study is in line with many others that show a lifetime of frequent mental engagement wards off senior moments later in life. You don’t have to be a bookworm or a writer to reap the benefits – learning a second language helps as well.

Last note: the paper reported how often the participants were reading/writing (for the 90th percentile, almost every day of their lives). It did not say a word about WHAT they were reading, so I’m assuming fiction and nonfiction are fair game. As my lab mate just pondered out loud: “I wonder if Fifty Shades of Grey counts too…”

Wilson RS, Boyle PA, Yu L, Barnes LL, Schneider JA, & Bennett DA (2013). Life-span cognitive activity, neuropathologic burden, and cognitive aging. Neurology PMID: 23825173

“Cocaine addiction may be cured by Ritalin!” …Hmm, really?

I saw this headline from an r/science thread the other day, and had to look into it. As you’ve probably guessed, the answer is no, but the idea behind it is still a fascinating story.

Lots of people experiment with drugs; only some become addicted. Many addicts try to quit; only some succeed without relapse. Why?


Anatomy time!! Source: Baler R & Volkow ND. (2006) Drug addiction: the neurobiology of disrupted self-control. Trends in Molecular Medicine 12(12) 559

One idea is that people have different baselines of self-control. The decision to take (“go”) or not take (“no go”) a drug comes from the orbitofrontal cortex (OFC – green in the graph above), a part of the frontal lobe in charge of thinking through the decision-making process. The OFC gets its info from two sources: the nucleus accumbens (NAcc – red), which learns about rewards, and the prefrontal cortex (PFC) and anterior cingulate cortex (ACC – blue), which inhibit impulses and restrain craving. (The hippocampus and amygdala, in purple, are important for drug-reward learning and memory.)


In a non-addict, the ACC-PFC (blue) usually wins out, telling the “judge” OFC to deny the motion: you don’t take the drug. However, in some vulnerable individuals the NAcc (red) wins out, and every time they take a drug – say, cocaine – it changes this pleasure-sensing, reward-predicting part of the brain, increasing its sensitivity to both the drug and its associated cues. At the same time, it also weakens inhibitory control, which skews the OFC towards a “go” decision. This breakdown of self-control sets the stage for unrestrained cycles that eventually result in compulsive drug-taking.

So what if we can use a drug to bring back cognitive control and reset the circuitry? Would that treat addiction?

Here’s where Ritalin comes in. Like cocaine, Ritalin (methylphenidate) is a stimulant that increases dopamine levels, just at a much slower pace and with a longer duration. Since there’s no spike in dopamine, there’s no rush. So, just as the longer-lasting methadone is used to wean opiate addicts off heroin, it seems reasonable that Ritalin could be used as a cocaine substitute on the road to recovery. However, Ritalin packs a one-two punch (otherwise it would be an “oh-duh!” story). As a medication for ADHD, one of its major effects is to strengthen cognitive control. It works especially well in this regard in people who have lower baselines of cognitive inhibition to begin with, like drug addicts. Put the two together, and Ritalin seems like the perfect candidate for battling cocaine addiction.
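The kinetic difference is the crux, and it’s easy to caricature with a toy model: dopamine rising and falling as a difference of exponentials, fast-in/fast-out for cocaine versus slow-and-sustained for oral Ritalin. The rate constants below are invented for illustration and are not real pharmacokinetic values:

```python
import numpy as np

def dopamine_timecourse(t, k_rise, k_decay, amplitude=1.0):
    """Toy one-compartment model of drug-evoked dopamine: a
    difference of exponentials with separate rise and decay rates.
    Purely illustrative; not fitted to real pharmacokinetic data.
    """
    return amplitude * (np.exp(-k_decay * t) - np.exp(-k_rise * t))

t = np.linspace(0, 6, 200)                                # hours
cocaine_like = dopamine_timecourse(t, k_rise=20.0, k_decay=2.0)
ritalin_like = dopamine_timecourse(t, k_rise=2.0, k_decay=0.4)

# cocaine-like: tall, early spike (~8 min); ritalin-like: lower,
# sustained plateau peaking around 1 hour -- no spike, no rush.
print("peak (cocaine-like):", round(cocaine_like.max(), 2),
      "at", round(t[cocaine_like.argmax()], 2), "h")
print("peak (ritalin-like):", round(ritalin_like.max(), 2),
      "at", round(t[ritalin_like.argmax()], 2), "h")
```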

Problem is, at the behavioural level, it doesn’t work. Double-blind studies show that users didn’t report lower cravings in response to cocaine-associated cues, nor did they lower their drug use or relapse rates. However, these results are plagued by the curses of small sample sizes and high dropout rates, so researchers aren’t ready to throw Ritalin out the window just yet. Unfazed, they decided to peek directly into the brain with fMRI, to see if Ritalin has a more profound effect at the neural level.

In a 2010 study, researchers recruited 13 cocaine addicts (~18 years of use) and 14 controls, and stuck them in an fMRI scanner while they completed a task. Here’s how it went: the volunteers were shown two types of words, either neutral or drug-related. All the words were in colour, and volunteers had to press a button corresponding to the colour as fast as they could.

Left: ACC response to drug-related words under placebo vs. Ritalin. Right: task error rates.

Under placebo (blue circle, left graph), the coke abusers’ ACC (which controls inhibition) showed a very low response to drug-related words; when given Ritalin (red circle), the ACC’s response shot up to a level higher than the controls’ (purple arrow). In a sense, Ritalin re-sensitized the coke addicts’ control center to drug-related cues. In terms of task performance, Ritalin also decreased impulsivity, evidenced by the lower number of errors the addicts made (yellow arrow, right graph) – though the same also happened to healthy controls given Ritalin. In fact, the addicts didn’t perform significantly worse than the controls, even with ACC hypo-activation. So under the hood, Ritalin seems to be strengthening cognitive control – it’s just not reflected in behaviour.

Now, in a new study, the researchers wanted to know whether Ritalin can change brain connectivity in the resting state. “Resting state” is quite the oxymoron, as the brain never shuts down completely. Instead, it exhibits spontaneous fluctuations in neural activity between brain regions, and these also go awry in cocaine addiction. Researchers recruited 18 volunteers who fit the criteria for cocaine addiction but were otherwise healthy and not taking any medications. The volunteers were then given either placebo or Ritalin (20 mg) and had their brains imaged. Here’re the findings: compared to placebo, Ritalin normalized the strength of 6 connectivity pathways related to emotional regulation, memory formation, craving suppression and inhibitory control. Ritalin had a similar degree of effect on all volunteers, regardless of how severe their cocaine addiction was. In this study, the researchers didn’t check for subjective feelings of craving after Ritalin administration.

So what’s the verdict? Can Ritalin help treat cocaine addiction? The evidence really isn’t strong. Nevertheless, it’s interesting that one dose of the “cognitive enhancer” can rectify some of the neural connectivity problems seen in addiction. In all honesty, I would be surprised if one dose of Ritalin could “treat” cocaine addiction in terms of decreasing craving and drug-seeking – after all, addiction to drugs of abuse doesn’t happen in a day with one dose either. In future studies, it would be interesting to see if multiple treatments with Ritalin, over a long period of time, can exert a behavioural effect in addition to the neural one. It would also be interesting to combine Ritalin with cognitive-behavioural therapy (CBT), and see if the combo is stronger than CBT alone.

The idea of using cognitive enhancers for addiction therapy is gaining steam. Clinical trials with Adderall, Ritalin and Modafinil are all ongoing, and hopefully larger studies with longer timeframes will give us a more conclusive result.

What do you guys think? Are scientists beating a dead horse, or is there actually something worth pursuing?

Konova AB, Moeller SJ, Tomasi D, Volkow ND, & Goldstein RZ (2013). Effects of Methylphenidate on Resting-State Functional Connectivity of the Mesocorticolimbic Dopamine Pathways in Cocaine Addiction. JAMA psychiatry (Chicago, Ill.), 1-11 PMID: 23803700

Goldstein RZ, & Volkow ND (2011). Oral methylphenidate normalizes cingulate activity and decreases impulsivity in cocaine addiction during an emotionally salient cognitive task. Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology, 36 (1), 366-7 PMID: 21116260

Wanna learn a second language? Ditch that familiar face.

Have you ever felt like you behave differently depending on your cultural surroundings? As an immigrant, I know I start mimicking others’ accents and body language once I’m out of my heritage culture. This type of environment-induced, chameleon-like morphing is called – quite aptly – “frame switching” in psychology. Scientists don’t really know whether it happens automatically or under our control, but the effects are quite powerful: frame switching doesn’t stop at your outward behaviors; it also impacts how you think about a problem, and your subsequent judgments and decisions.


Culture shock much? Source: http://daileytravelservice.com/

In many cases frame switching is helpful – you assimilate faster to a new environment, tailoring your thinking to that of the people around you. For example, in the Prisoner’s Dilemma task, where a person chooses to either cooperate or defect, Chinese Americans switched strategies adaptively: when “primed” with Chinese symbols like dragons or yin-yang, they preferred to cooperate (in line with Chinese ideals of harmony); when shown pictures of Superman and the Statue of Liberty, they tended to act like western economists and defect more. Cultural cues are like magnets that attract and activate the network of “thinking” we associate with a culture – which is great, since it helps us fit in.
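(For readers unfamiliar with the task, here’s the standard payoff structure in code form. The numbers follow the textbook convention – higher is better – and are not taken from the priming study itself.)

```python
# Standard Prisoner's Dilemma payoffs (textbook values, not the study's):
# (my_move, partner_move) -> (my_payoff, partner_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def outcome(me, partner):
    return PAYOFFS[(me, partner)]

# Defecting dominates for a lone self-interested player...
print(outcome("defect", "cooperate"))     # (5, 0)
# ...but mutual cooperation beats mutual defection.
print(outcome("cooperate", "cooperate"))  # (3, 3)
```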

…Right?

If Chinese icons can lead to a “Chinese” way of thinking, could they also induce a tendency to speak Chinese? And for Chinese immigrants living in Chinatown, could this impair their ability to learn English?

Shu Zhang et al (2013). Heritage-culture images disrupt immigrants’ second-language processing through triggering first-language interference. PNAS early edition. doi:10.1073/pnas.1304435110.

To test this hypothesis, researchers recruited Chinese students who had been attending university in the US for roughly a year, and sat them down in front of a computer showing either a Caucasian or a Chinese male called “Michael Lee”. Michael “spoke” to the volunteers through an audio recording in a standard American English accent. The volunteers were then asked to converse with Michael about campus life while their speech was recorded. To assess English fluency, the researchers had two separate listeners rate the recordings, and they also objectively counted words produced per minute after weeding out the “uhs” and “ahhs”. Ironically, as seen below, even though volunteers preferred chatting with the Chinese version of Michael, their English fluency dropped significantly, by more than 10% on average, as did their speech rate.


Fig 1. Looking at a Chinese face decreases English fluency in Chinese immigrants. That’s quite the drop!
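The objective fluency measure is simple enough to sketch: count words per minute after dropping filled pauses. Something like this (my toy implementation, with a made-up filler list, not the authors’ exact procedure):

```python
FILLERS = {"uh", "uhs", "um", "ah", "ahh", "er"}

def speech_fluency(transcript, duration_seconds):
    """Objective fluency proxy: words produced per minute after
    dropping filled pauses ('uh', 'um', ...). Transcript is a
    plain string of the speaker's recorded speech.
    """
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    content = [w for w in words if w not in FILLERS]
    return len(content) / (duration_seconds / 60.0)

print(speech_fluency("So uh campus life is um pretty busy", 10))  # 36.0 wpm
```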

But maybe the volunteers simply felt obligated to speak English to a Caucasian face that would otherwise not understand them. To rule out motivation as a cause of the performance gap, researchers showed volunteers five icons of Chinese (Great Wall) or American (Mount Rushmore) culture and asked them to describe the icons in English. Volunteers then had to make up a story about a culture-neutral image, also in English. Once again, English fluency and speech rate both tanked, by 10-20%, when volunteers described culture-laden images (left column). Even worse, the volunteers also had a more difficult time telling an English story about the culture-neutral item when primed with pictures of the Great Wall (second column from the left).

English fluency and speech rate when describing cultural icons, and when telling a culture-neutral story after cultural priming.

Why is this happening? One possibility is that Chinese cues prime the immigrants to think in the Chinese lexicon. Instead of thinking of a pistachio as a “pistachio”, for example, they might tend to name it a “happy nut”, the literal translation of the nut’s Chinese name. To test this idea, volunteers were again primed with culture-laden images. They were then asked to identify the literal-translation names (“happy nut”, “cotton stick”, “flying dish”) of a series of objects. Just as the researchers suspected, volunteers took much less time to identify those names when primed with Chinese rather than American icons. Showing culture-neutral images at the beginning did not affect literal-name identification at all. Finally, volunteers also tended to call objects by their literal-translation names when primed with Chinese icons. Priming with American icons, on the other hand, did not increase their English proficiency.


Pistachio or happy nut? Source: http://api.ning.com/

This study shows just how exquisitely attuned we are to cultural context: even seeing a symbol of a heritage culture can pull us back into our old linguistic structure and interfere with new language learning. As someone who grew up in multiple countries speaking multiple languages, I can totally relate to the feeling of frame switching and language/cultural priming. I do wish the authors had included an analysis of how many “extraneous” words the volunteers used – the stutters, repetitions and self-corrections – rather than ruling them out. I’d also love to run this test on Chinese-Americans who learned Chinese and English simultaneously in an American culture: would Chinese icons decrease English and/or increase Chinese proficiency?

Finally, this study suggests that the best way to learn a second language is through immersion. If you want to assimilate, don’t move into an ethnic pocket where you’ll be surrounded by people from your homeland. You might feel more comfortable talking to them – but it won’t help your new language learning.

For all the multilinguals and multicultural people out there, have you ever noticed this cultural priming effect? Has it influenced your ability to speak a language?

Zhang S, Morris MW, Cheng CY, & Yap AJ (2013). Heritage-culture images disrupt immigrants’ second-language processing through triggering first-language interference. Proceedings of the National Academy of Sciences of the United States of America PMID: 23776218