Just a short note to let everyone know that Neurorexia is in the process of moving to a new host (with a face change)! If odd things happen in the next day or so please drop me a note in the comment section or email me.
And a teaser :p
Poster: 671. Learning and Memory: Genes, Signalling and Neurogenesis II.
There’s no doubt that aerobic exercise benefits the brain. Running, for example, reduces anxiety, improves sleep quality, boosts learning of a new task and maintains spatial memory*. Many of these mental perks stem from an increase in adult neurogenesis; that is, the birth of new neurons in the hippocampus and the olfactory bulb. (*That is, if rats run before new learning. See here for more.)
Yet perhaps the most apparent health benefit of running is increased cardiovascular and lung function. As any runner can attest, an initially exhausting 10k soon becomes a breeze – you’ve increased your aerobic capacity. This led researchers from Duke University to wonder: is improving exercise capacity – by whatever means – necessary and sufficient to boost neurocognitive function?
Better bodies, better minds
Just like us humans, rats have an innate sensitivity to the effects of exercise. After the same 8-week running regime, high-response rats drastically increased their maximal running distance (~75%), while low-response rats barely improved (~22%). Surprisingly, only high-response rats showed elevated neurogenesis in the dentate gyrus, a subregion of the hippocampus, as compared to their sedentary peers.
One hypothesized function of the dentate gyrus is pattern separation, or, VERY simply put, the discrimination between two very similar spatial contexts or objects (Jason Snyder of Functional Neurogenesis fame has a great blog post on the matter). Researchers decided to challenge these rats with two Lego pyramids that differed only in the colour of their tops – imagine two Christmas trees topped with either a yellow or an orange star. After the rats familiarized themselves with the yellow-topped Lego, researchers waited a minute before presenting them with both. High-response runners (but not their sedentary controls) instantly realized something was up – they approached and sniffed the new construct in earnest, ignoring the old familiar one.
Low-response runners, on the other hand, behaved just like their sitting peers, spending a similar amount of time with both objects. Low-responders had no problem with their memory; when faced with a mug and a can, they could easily discriminate between the two. They just couldn’t pick out minute differences in the Lego pieces, a skill often attributed to enhanced neurogenesis.
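For the curious, this kind of novelty preference is usually quantified with a discrimination index – the fraction of total exploration time spent on the novel object. The poster doesn’t spell out its exact formula, so here’s just a sketch of the standard metric:

```python
def discrimination_index(time_novel: float, time_familiar: float) -> float:
    """Fraction of total exploration time spent on the novel object.

    ~1.0 = strong preference for the novel object (good discrimination);
    ~0.5 = equal time on both objects (chance, no discrimination).
    """
    total = time_novel + time_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return time_novel / total

# A high-response runner sniffing the new Lego 12 s vs 4 s on the old one:
print(discrimination_index(12.0, 4.0))  # 0.75 -> discriminates
# A low-responder splitting its time evenly:
print(discrimination_index(8.0, 8.0))   # 0.5 -> chance
```

The timings above are made up for illustration; only the shape of the metric matters.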
These data, perhaps somewhat dishearteningly, suggest that running doesn’t always boost brainpower – neurocognitive benefits only occur in tandem with improvements in aerobic fitness, as measured by total running distance until exhaustion. These results parallel those of a human study, in which increased lung capacity after training correlated with better performance on a modified pattern separation task (although, understandably, that study couldn’t measure adult neurogenesis, so it’s hard to attribute the behavioural gains to new neurons per se).
Running-improved aerobic capacity seems to be the crux of exercise-induced brain benefits. But is running itself really needed? To explore this idea further, researchers took treadmills out of the equation and focused on genetic differences in aerobic fitness.
Innate aerobic capacity accounts for cognitive benefits
Allow me to introduce the low- and high-capacity runners. Selectively bred for their capability (or not) to “go the distance”, these rats differ up to three-fold in a standard long-distance fitness test, without ever setting foot on a treadmill. At 10 months old, they also showed a two-fold difference in the total number of newborn neurons in the dentate gyrus – a result of increased neuron survival – which grew to three-fold by 18 months.
Researchers took sedentary rats from both groups and challenged them to the Lego task described above. High capacity runners significantly outperformed their low capacity peers, expertly telling apart the Lego constructs. Similarly, in an object placement task in which researchers minutely moved one of two objects, low capacity runners could not identify the moved one after an hour’s delay, though they managed if the wait was only a minute. High capacity runners, on the other hand, excelled in both cases.
These results argue that high aerobic capacity in and of itself promotes pattern separation. But what if, unbeknownst to researchers, high-capacity runners were maniacally jumping around every day in their home cages? A few days of stealthy observation proved this wrong; paradoxically, low-capacity runners were much more hyperactive than their high-capacity peers. They also seemed more outgoing in a social interaction test, and were less prone to generalize trained fear from one context to another.
Running-induced neurogenesis is generally considered to ease anxiety. So why do high capacity runners (with higher rates of neurogenesis) seem more neurotic?
Born to laze, born to run
Running is physiologically stressful in that it increases the level of corticosterone (CORT), a stress-response hormone. Unlike chronic stress that continuously elevates CORT, running only induces a transient, benign increase that quickly returns to baseline after recovery.
Researchers trained low- and high-capacity rats on treadmill running 5 days a week for a month. By the end, both groups showed increased running capacity, though trained low-capacity rats were only as good as untrained high-capacity ones (life’s unfair!). However, their acute stress responses drastically differed in a running-stress test.
Untrained low-capacity rats remained calm throughout the test, as measured by unchanging CORT levels. “They waddled on the treadmill for a bit, got tired and gave up.” said the researcher, “so they really weren’t that stressed out.” Trained low-capacity rats however hated the treadmill – their CORT shot through the roof. “You’re chronically forcing them to do something they’re terrible at, of course they’re going to be stressed out” explained the researcher, “and once they’re done, their CORT goes back to normal.” (I’m paraphrasing.) While this scenario is certainly possible, an alternative explanation is that only trained low-capacity rats were able to exercise hard enough to induce a normal elevation in CORT levels; untrained rats simply don’t work out hard enough.
Intriguingly, untrained high-capacity rats had elevated levels of CORT during the running test, while previous training eliminated this response. Why? Researchers believe that chronic running habituated them to the stressor: “You know when you have this itch to run? You get stressed out when you can’t, and feel relieved when you finally do exercise.” In other words, these rats were “born to run”.
On the cellular level, running did not significantly increase neurogenesis in the ventral hippocampus in either low- or high-capacity rats, which I find rather surprising. Finally, high-capacity rats (compared to low) had less mineralocorticoid receptor (MR) and glucocorticoid receptor (GR) in the amygdala and hypothalamus, but not in the hippocampus. This is also surprising, as MRs and GRs in the hippocampus are crucial for negative feedback on the stress response axis.
Taken together, these data point to increased aerobic fitness – through genetic means or exercise – as the key to enhancing neurocognitive function in rats. Inbred differences in aerobic fitness may alter how one responds to exercise (and perhaps other types of) stress.
These studies raise the question: what if we could artificially mimic the effects of exercise (pharmaceutically or otherwise) and reap its benefits? While “exercise pills” may not necessarily benefit healthy individuals, they could potentially improve both the physical and hippocampal health of the elderly or the disabled.
PS. This is the end of #SfN13 blogging. It’s been hectic, a bit overwhelming and a LOT of fun!! Thank you to all the presenters for your patience & feedback and the PIs who let me write about your work. Thank YOU for reading!
Regular research blogging will resume soon. Stay tuned!
671.02. KM Andrejko et al. Rats selectively bred for high running capacity have elevated hippocampal neurogenesis that is accompanied by a greater expression of hippocampal glucocorticoid receptors and altered contextual fear conditioning.
671.04. JM Saikia et al. Treadmill exercise training only enhances neurocognitive function if it is accompanied by significant increases in aerobic capacity. Duke Univ., Durham, NC; Univ. of Michigan Med. Ctr., Ann Arbor, MI
Poster ZZ3. Ghrelin protects against stress by promoting the consumption of carbohydrates. T. Rodrigues, Z. Patterson, A. Abizaid. Carleton University, Ottawa, ON, Canada
In this world nothing can be said to be certain, except death and taxes. – Benjamin Franklin
Personally, I’d add stress to that.
There’s no question that chronic stress is a killer. Handled badly, stress can lead to anxiety, memory impairments, cardiovascular disease and sleep disorders. We all have our own strategies for coping with stress, some healthier than others. Me? I turn to food.
Apparently, so do bullied mice. Mice are social creatures; when housed together, larger and meaner ones will quickly assert dominance. The little guys have it rough, usually showing signs of anxiety, depression and increased body weight within weeks.
The reason for their weight gain can be traced back to an increase in ghrelin, a hunger-causing (orexigenic) hormone produced in the stomach. Once released, ghrelin travels to the brain and binds to its receptors to increase calorie consumption. But not all foods are equal; new research from Carleton University suggests that ghrelin promotes the intake of comfort foods – specifically, carbohydrates – because they decrease the level of circulating stress hormones such as corticosterone.
In the study, researchers first measured the amount of chow that mice ate per day for 21 days. They then chronically stressed out one group of mice by putting a dominant bully into every cage; the two mice were separated by a see-through glass wall to reduce violence. Every day, the mice had 24hr access to a standard, high-carb chow and a 4hr-window to a fattier alternative. Compared to non-stressed controls, the bullied mice drastically increased their total calorie intake, paralleled by an increase in ghrelin levels but surprisingly normal corticosterone levels.
When researchers broke down the increase in calories by the type of food, they uncovered an unexpected result: stressed-out mice did not eat more fat, but instead opted for more high-carb chow. In fact, this high-carb binge almost entirely accounted for the increase in total calorie consumption.
However, mouse chow does contain ~50% protein and fat. To rule out a preference for these two macronutrients in combination, researchers repeated the experiment with sucrose solution as the alternative to high-carb chow. As before, stressed-out mice increased their intake of chow, but this time they also doubled their intake of sugar water compared to their unstressed peers. At the same time, their corticosterone levels were normal, suggesting that they were coping fairly well in the face of daily terror.
Why does ghrelin trigger a preference for carbs? The answer might be internal stress management. When researchers fed both bullied and control groups the same standard chow (~50% carbs), effectively restricting access to stress eating, the bullied mice suffered numerous negative health effects. Their ghrelin and corticosterone levels shot through the roof. They had abnormally low blood sugar levels, signalling the onset of metabolic problems. They even showed signs of depression, refusing to swim when dropped into a deep container filled with water.
These data suggest that under stress, ghrelin levels rise and tip food preference towards high-carb rather than high-fat foods. To see if this is indeed the case, researchers turned to a strain of mice genetically engineered to lack ghrelin receptors. Normally, compared to wild-types, these mutants show similar patterns of eating and hormone regulation, although they tend to be slightly smaller. Once stressed, however, they didn’t respond by switching to the high-carb comfort chow, instead increasing their nibbling of fatty foods. Behaviourally, these mice could not cope – in the swimming task, they spent most of their time immobile, succumbing to their fates.
Researchers aren’t yet sure why ghrelin-induced carb – but not fat – intake helps to manage stress. One reason could be bioenergetics: stress alerts the brain that more energy is needed (and soon!) through ghrelin, which in turn increases the preference for glucose – a fast and efficient energy source. Or it could just be a matter of comfort. These mice grew up on standard mice chow, which just happens to be high in carbs. Perhaps, just like you and me, mice simply prefer familiar and comforting foods after a long, stressful day.
Locked-in syndrome is the stuff of nightmares: You’re awake and aware of your environment, taking in everything, but communicating out nothing. You can’t move – except, if you’re lucky, an occasional voluntary blink of the eye – can’t talk, can’t tell the world that YES I’M ALIVE AND CONSCIOUS AND THINKING.
Locked-in syndrome, or LIS, usually occurs from specific damage to the lower brain and brain stem, resulting in loss of control of nearly all voluntary muscles in the body except the eyes. Those suffering from TOTAL LIS lose the ability to blink at will as well, cutting off the only route of communication with the world. They’re literally trapped in their own minds.
Scientists have long tried to establish ways for LIS sufferers to reach out. Since in most cases voluntary blinking is retained, the eyes have been considered the most effective way to communicate, such as by answering yes-no questions (one blink = yes, two = no), or using the number of blinks to spell out letters in a given alphabet. More recently, scientists have focused on brain-computer interfaces, using EEG caps or fMRI* to directly measure brain activity and decode its meaning through machine-learning algorithms. Unfortunately, such methods often require surgery, expensive equipment and extensive training with the individual, the latter of which is nearly impossible to achieve with those completely locked-in. (*REALLY cool new stuff presented at the Canadian Association for Neuroscience conference a while back – will blog about it for sure!)
What if, instead of relying on training, we use a subconsciously controlled behavioral readout to measure consciousness and communicate instead?
Josef Stoll et al. (2013). Pupil responses allow communication in locked-in syndrome patients. Current Biology, 23(15):647-648
The “behavior” in question is dilation of the pupil. Our pupils automatically change size in response to external cues, such as sunlight; they also respond to internal cues, dilating during arousal and mental effort. Here, scientists cleverly exploited the latter – specifically, the effort needed to perform mental arithmetic – as a tool for LIS patients to control their pupil size and signal the answer to a yes/no question.
Here’s the setup. Six healthy volunteers were first recruited to test out the system. To establish “ground truth”, scientists sat them down in front of a computer and asked them a yes/no binary question with one correct answer, such as “Are you 26 years old?” The computer then displayed a math problem while announcing “yes” through the speakers. Several seconds later, a different problem was displayed accompanied with the answer “no”. The crux of the system is that volunteers had to calculate the problem associated with the right answer and blatantly ignore the other. Since mental effort alone is sufficient to dilate pupils, it didn’t matter whether the volunteers got the right answer – they just had to try. A nearby camera closely monitored their pupil size as they went through the 30 trials.
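In spirit, the decoding step is simple: compare pupil dilation during the “yes” calculation window against the “no” window, and pick whichever is larger. Here’s a minimal sketch of that logic – the function name and the plain mean-comparison are my own simplifications, not the paper’s actual analysis of the pupil trace:

```python
import statistics

def decode_answer(pupil_trace, yes_window, no_window):
    """Guess the intended answer from a list of pupil-size samples.

    yes_window / no_window are (start, end) sample indices covering the
    periods when the 'yes' and 'no' math problems were on screen.
    Hypothetical simplification: compare mean dilation in each window
    (the paper used a more sophisticated fit to the pupil time course).
    """
    yes_dilation = statistics.mean(pupil_trace[yes_window[0]:yes_window[1]])
    no_dilation = statistics.mean(pupil_trace[no_window[0]:no_window[1]])
    return "yes" if yes_dilation > no_dilation else "no"

# A volunteer whose pupil dilates while calculating during the 'yes' window:
trace = [3.0] * 10 + [3.8] * 10 + [3.0] * 10 + [3.1] * 10
print(decode_answer(trace, yes_window=(10, 20), no_window=(30, 40)))  # yes
```

The pupil-size numbers are invented; the point is only that mental effort in one window, but not the other, is enough to carry the message.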
Amazingly, the method had an 84-99% correct rate in the healthy volunteers (green, HC). Encouraged, researchers then tried out the system in 7 “typical” LIS patients, who had suffered stroke-related damage in the brainstem but had normal cognitive function. Most patients could not get through the entire 30 trials, tiring out before the end. In this population, 3 patients achieved higher-than-chance results: the computer managed to decode the “right” answer 67-84% of the time (blue). The method is obviously not perfect; however, when one patient was re-tested the next day, the computer’s performance soared from 77% to 90% correct, suggesting that training patients to focus their mental effort could enhance the system’s precision.
Taking it one step further, the researchers next tried the system in LIS patients with much more severe brain damage, which could impede cognitive function. Unfortunately, the computer’s decoding fluctuated around chance (38-59%) for all of them (pink). However, there was a silver lining: when the same question was asked twice and the computer gave a consistent response both times, the decoding was almost always correct. Finally, in a non-communicative, minimally conscious patient who was instructed to perform the calculation, the program managed to get 82% of the answers right (orange).
The system is not a magical mind-reading machine, being fairly flawed in its decoding accuracy. It will need more maturation to be useful for diagnosing consciousness, and most certainly cannot be used to question or interrogate LIS patients. However, the idea of using automatic behaviour as a means of communication is fascinating, as it (in theory) allows those with total LIS to reach out to the external world. The setup is cheap, simple and usable in daily life or remote areas without access to fancy machinery. Even in well-equipped hospitals, the system could offer a secondary approach to gauge a person’s consciousness.
After all, for those torturously trapped in their own minds, “pretty good” is still better than nothing.
Josef Stoll, Camille Chatelle, Olivia Carter, Christof Koch, Steven Laureys, & Wolfgang Einhäuser (2013). Pupil responses allow communication in locked-in syndrome patients Current Biology, 23 (15), 647-648 DOI: 10.1016/j.cub.2013.06.011
Not all fat tissues are created equal. There’s the canonical white fat, which we associate with jiggly belly aesthetics, chronic inflammation and Type 2 diabetes. Too much white fat accumulation around the internal organs has even been linked to lower cognitive function in young adults. While these fat cells provide insulation, they’ve always been regarded as inert. That is, they stubbornly hold onto their stored energy, even in chilly environments.
Then there’s brown fat – once thought to exist only in babies – that guards AGAINST obesity and diabetes. If you wipe out brown fatty tissue in animals through genetic deletion, the animals’ weights skyrocket. Brown fat is also more mobile: in response to cold, it releases its stored energy as heat. In other words, you burn off calories.
Sound like a clear-cut good fat/bad fat situation? Think again: new research suggests that “bad” white fatty tissue has a trick up its sleeve. Not only does it respond to cold, it can DIRECTLY sense cold temperature without relying on nerves.
Huh? How can fatty tissues sense temperature?
Let’s backtrack and talk about how brown fat works first. Imagine you’re jumping into the frozen arctic ocean (brrr). The terrible cold almost instantaneously activates sensory nerves in your body. These nerves signal to the temperature control centre – the hypothalamus at the base of the brain – that heat is desperately needed. In response, the hypothalamus activates the sympathetic nervous system, which releases norepinephrine, the major neurotransmitter of the fight-or-flight response. This mobilizes a protein called UCP1 in brown fatty tissue, which triggers it to release its energy stores. Here, activation of the hypothalamus and sympathetic nervous system is absolutely necessary – mice bred without norepinephrine receptors are unable to mount this fat-burning response.
Or so it seemed. Scientists took these mice lacking norepinephrine (beta) receptors and exposed them to a chilly 10 Celsius (50 F) for 20 hrs. In normal mice, this activated thermogenesis-related genes in two populations of fat: between-shoulder brown fat and subcutaneous (under-the-skin) fat. Unsurprisingly, in beta-receptor-lacking mice the brown fat response was almost completely obliterated, since these mice weren’t getting the trigger signal. However, subcutaneous fat more or less retained its ability to respond to cold, as evidenced by robust thermogenic gene activation. Intriguingly, visceral fat (fat deep in the abdomen surrounding the organs) did not respond to cold at all, in either normal or mutant mice. Since one major difference between these two fat populations is that subcutaneous fat lies closer to the surface of the body, scientists wondered whether some cell types in subcutaneous fat can “feel” and respond to cold autonomously, without the need for nerve activation.
So what are these cells? Scientists took lab grown white and brown fat cells and directly cooled them down. For good measure, they also included beige cells – white fat cells that behave somewhat like brown fat – in the study.
As seen above, exposure to 31 Celsius (87.8 F) almost tripled the levels of UCP1 mRNA in white (3T3 and J6) and beige cells (D16 and X9), but didn’t change them in brown cells (9EB). Remember, in the fat-burning canon, UCP1 is the messenger that tells brown fat to start burning off its energy in response to signals from the nervous system. As you can see in the graph below, in white fat cells the increase in UCP1 is reversible. That is, when the cells were heated back up to 37 C, UCP1 expression went back down to baseline. Furthermore, increasing the ambient temperature did not induce the same change. This tells us that UCP1 wasn’t upregulated as part of an “oh my god I’m dying” general stress response; its increase is a specific response to decreased temperature.
Chronic cooling of white fat cells increased the expression of a whole array of thermogenic genes, hinting that they have acquired increased ability to generate heat. Indeed, lowering ambient temperature significantly increased white fat cell metabolism, as measured by the rate of oxygen consumption.
Since lab grown cells can behave a little wonky at times, scientists also took “primary” cells in fatty tissue obtained from mice and humans and repeated the experiment. As before, both subcutaneous and visceral fat (having a mix of white and brown cells) responded to cold by increasing UCP1; pure brown fat, on the other hand, remained stoic. To confirm that this change is norepinephrine-independent, the scientists also analyzed markers involved in this signalling pathway, and showed that they were not activated.
So what do these results tell us? In all honesty, not that much. It’s really cool that white fat cells can directly sense cold and respond to it – this tells us that, contrary to popular belief, fatty tissue has a rich and interesting life beyond its function as energy storage. But where does UCP1 upregulation lead? It seems that increased UCP1 raises white fat cell metabolism, but does it generate enough heat to matter physiologically? Will cold exposure eventually decrease white fatty tissue mass, or will the tissue gradually be replenished? Can fat cells eventually be turned into calorie-burning weight-loss machines?
Maybe there is some truth in promoting cold thermogenesis as a fat-loss measure after all.
Ye L, Wu J, Cohen P, Kazak L, Khandekar MJ, Jedrychowski MP, Zeng X, Gygi SP, & Spiegelman BM (2013). Fat cells directly sense temperature to activate thermogenesis. Proceedings of the National Academy of Sciences of the United States of America PMID: 23818608
TL;DR: Using a mouse model of OCD, scientists used LIGHT to activate a brain area called the orbitofrontal cortex, which in turn shut down a downstream region called the striatum – and stopped the OCD mice from excessively washing their faces. Like magic.
Obsessive-compulsive disorder is a tough nut to crack. For years now researchers have been baffled by the spectrum of ritualistic and repetitive behaviours that relentlessly haunt those with the disorder. What’s more, intrusive thoughts and behaviours are not unique to OCD; they’re also prevalent in autism, drug abuse and eating disorders. We know that antidepressants help reduce symptoms, but only at high doses. As a first-line treatment, they’re still pretty pathetic.
Imaging studies of people with OCD tell us that a whole NETWORK of brain regions is disturbed in the disorder, including the orbitofrontal cortex and striatum, areas associated with impulsivity and repetitive behaviours. However, the data are strictly correlational, as we can’t tweak specific circuits in a targeted way in humans (although up-and-coming methods like deep brain stimulation and transcranial magnetic stimulation can zap entire brain regions and modulate their function). So what is the source DRIVING OCD behaviours, and how can we stop it?
E Burguière et al. (2013). Optogenetic Stimulation of Lateral Orbitofronto-Striatal Pathway Suppresses Compulsive Behaviors. Science, 340 (6137):1243-1246
Seeking causative answers, researchers turned to mice. Specifically, a strain of anxious, compulsively over-grooming mice that lack a protein called SAPAP3. SAPAP3 is a protein at the synapse, the space between neurons where they communicate through neurotransmitter release.
When chilling in their home cages, mice like to occasionally groom their faces, which keeps them clean and happy. SAPAP3 knockout mice, however, take it to a whole different league. Given the chance – often in response to a “trigger” – they’ll groom so much that they tear their faces raw, not unlike OCD patients who compulsively scrub their hands until they bleed (not a pretty picture). Taking advantage of this trait, researchers first wanted to see if these mice would respond to a neutral trigger, as often happens in OCD patients. They played a tone, and then placed a drop of water on the mice, which drives them to groom it off. Soon, both normal and SAPAP3 knockout mice learned that the tone predicts water, and started grooming in response to the tone as well as the water drop.
However, as time went on, normal mice (WT, black line) gradually stopped grooming at the tone – they learned that it really wasn’t necessary (“Late training”; see how there’s only a little bump in the black line after the tone/green line?). SAPAP3 knockouts (red line), on the other hand, couldn’t inhibit their grooming behaviour, relentlessly pawing at both the tone and the water drop (“Late training”; see how there are two bumps in the red line, one after the tone/green line and one after the water/blue line?). This mimics the development of new OCD behaviours in patients, where a neutral trigger sets off a bout of repetitive actions.
So what is happening at the neural level? Researchers focused on the orbitofrontal cortex (OFC) and striatum, areas associated with response inhibition and impulsivity, to see if the neurons fired differently. Although firing rates in the OFC didn’t differ between normal mice and SAPAP3 knockouts, striatal firing rates were way higher in the knockouts, especially as training went on. While normal mice “tuned” their striatal activity so that it only peaked near the time of the water drop, SAPAP3 knockouts had two peaks, one timed close to the tone and one to the drop. This is in line with their two grooming bouts – after the tone and after the water drop.
Why this lack of inhibition? We know that activity in the OFC inhibits a type of neuron in the striatum called medium spiny neurons, which in turn limits the degree of activation in this brain region. Could it be, then, that SAPAP3 knockout mice are missing this brake?
To answer this question, researchers brought in optogenetics, the technique that uses light to control neural activity. They used genetic methods to express a protein called channelrhodopsin-2 specifically in the OFC of SAPAP3 knockout mice. They then used a light probe to “activate” this channel – that is, open it so that an influx of ions makes the neurons fire. In this case, light on = the OFC fires dramatically; light off = little to no firing. Since connectivity in the brain runs OFC -> striatum, researchers recorded neural activity in the downstream striatum to see what happens when the OFC activates.
What they found was that when the OFC is activated by light (purple), the striatum goes quiet (see how the purple line is lower than the control black line?). That is, ACTIVATION of the OFC shuts down the striatum. How does this play out in behaviour? After the tone, SAPAP3 knockouts usually show a spike in striatal activity; if researchers activate the OFC with light, that spike is suppressed. Is this enough to stop the animals from obsessively grooming to the tone?
Indeed it is! With the light on and the OFC activated, SAPAP3 knockouts behaved just like normal mice, staying chill during the tone, but grooming once they felt the annoying water drop. What’s more, light activation also decreased the high amount of face pawing that these mice usually exhibit while going about their usual daily micey-business. So all together, OFC activation can shut down the striatum, which reduces neurotic obsessive behaviour, but leaves normal behaviour intact.
This study obviously doesn’t provide an immediate cure for OCD, but it shines light (haha) on the circuits that go haywire in the disorder. It illuminates what we SHOULD be targeting when developing therapies in the future. This isn’t the end-all of OCD research; tons more questions remain. For example, which subtypes of medium spiny neurons are mediating this effect? When normal mice (and humans) develop OCD, which brain area goes awry first? What are the molecular mechanisms behind the lack of inhibition seen in the striatum of OCD-like mice? Can the same manipulation treat OCD-like behaviours in other psychopathological models, such as autism and eating disorders?
A little more on the last question. For a long time now physicians have noticed a link between OCD-like behaviours and eating disorders, such as anorexia nervosa. A recent paper made the first step in discovering a potential molecular link between the two disorders. Using the same SAPAP3 knockout mice, these researchers found that if you knock out a gene called MC4R at the same time (or inhibit the protein using drugs), you can alleviate the compulsive grooming seen in these mice.
What’s even more interesting, although MC4R knockouts are usually obese (left, MC4R-null), if you eliminate both MC4R and SAPAP3, you get a normal, non-obsessive slim mouse (right, double null). In this case, two wrongs made a right. Researchers think that this MC4R-SAPAP3 link may explain to some degree the co-occurrence of OCD-like behaviors and maladaptive food intake. Scicurious has a great write-up of this study if you’d like to know more.
Burguière E, Monteiro P, Feng G, & Graybiel AM (2013). Optogenetic stimulation of lateral orbitofronto-striatal pathway suppresses compulsive behaviors. Science (New York, N.Y.), 340 (6137), 1243-6 PMID: 23744950
Many bloggers, myself included, like to write about studies that advance our understanding of how the brain FUNCTIONS. Function, however, depends on the smooth running of processes both between neurons (circuits) and within neurons. Unfortunately, things don’t always go smoothly, and sometimes broken, misshapen and aggregated proteins build up in cells, disrupting their normal function. This happens in many major neurodegenerative diseases, including Huntington’s, Parkinson’s, Alzheimer’s and Lou Gehrig’s disease. While scientists don’t yet know whether protein accumulation is the cause of disease or a protective response against disease progression, we do know that eliminating these protein aggregates can (at least temporarily) alleviate symptoms and, in some cases, increase the lifespan of the individual (the latter has only been shown in flies, though). Not surprisingly, there’s been considerable interest in upregulating protein degradation to treat these devastating diseases.
One major therapeutic strategy hijacks an endogenous cellular process: autophagy, or “self-eating”. Autophagy is one of the ways cells spring-clean. Proteins generally don’t last that long after they’re synthesized in cells. Think about it: milk spoils after a week in a 4°C fridge, while proteins in neurons HAVE to operate at body temperature and are constantly bombarded with oxidants, macromolecular debris and radiation. When they break down beyond repair, they get tagged for removal. In autophagy, certain proteins (“adaptor proteins”) recognize the tag and, in coordination with many different protein families, generate lipid “garbage bags” (called autophagosomes) around the broken proteins. These lipid vesicles are then trafficked along highways in the cell – microtubules – to the waste disposal centre, the lysosome. The vesicles merge with the highly acidic lysosome, and the garbage is broken down and recycled. Organelles, such as ATP-generating mitochondria, can also be broken down in this way. Autophagy can be activated under other conditions as well, such as starvation (to generate energy) or stroke.
Breaking down too many cellular components is obviously bad for the cell, so the induction of autophagy is tightly controlled by cellular signals. One major inhibitor is mTORC, which is active under nutrient-rich conditions. During starvation, mTORC becomes inhibited and a second complex, containing Beclin-1, is activated; this in turn activates a group of proteins called the Vps proteins in a cellular cascade, leading to the formation of lipid vesicles. (TL;DR: mTORC inhibitory, Beclin-1 stimulatory.)
A common theme in neurodegenerative diseases is inhibition of autophagy, and MANY things can go wrong with the process. In Alzheimer’s disease, mutated proteins can directly inhibit the Beclin-1 complex, locking up the “gas pedal”. The opposite holds for Huntington’s disease, where there is too much mTORC signaling, effectively stomping on the brake. Misshapen protein aggregates may hide the “waste” tag, rendering adaptor proteins unable to recognize them; or, also in the case of Huntington’s disease, mutant aggregates can “hog” too many adaptor proteins, hindering the degradation of other proteins. Sometimes autophagosomes can’t be trafficked properly; other times they can’t fuse with the lysosome, or the lysosome loses the enzymes necessary to degrade its cargo.
So what strategies are currently being tested to upregulate autophagy? One is to inhibit the mTORC “brake” with drugs such as rapamycin, or to activate inhibitors of mTORC, such as metformin (used clinically for Type II diabetes). Other strategies aim to release Beclin-1 from intracellular compartments, so that more of it is available to drive autophagy. Interestingly, several mood-regulating drugs, including valproic acid (an anti-seizure drug and mood stabilizer) and lithium (used for bipolar disorder), also seem to be effective at enhancing autophagy, though more research needs to be done.
A drug that can effectively and safely upregulate autophagy may be a promising cure for neurodegenerative diseases (as opposed to mere symptom management). While mutated protein aggregates may not be the cause of these diseases, eliminating them has time and again been shown to alleviate symptoms in animal disease models. Moving forward, there are still many barriers to cross. One is minimizing side effects – most chemical inhibitors of mTORC are toxic, since the complex is involved in other cellular processes as well. A potential strategy is to use peptide biomimetics: small peptides that mimic certain parts of a target protein required for its function. For example, a recent paper in Nature described a small fragment of Beclin-1 that could induce autophagy by itself. These peptide drugs are considered to be much more specific, with fewer side effects, than their chemical counterparts.
Another consideration is that upregulating autophagy may not be prudent in ALL disease cases. It really depends on where autophagy goes wrong. If induction is the problem, then upregulate away! However, if the problem is a defect in autophagosome trafficking or fusion with the lysosome, then upregulating autophagy would result in an accumulation of those garbage bags, which may be even more detrimental to the cell than aggregated proteins. (You want your garbage to go to the waste management facility, not get stuck on highways.)
Nevertheless, regulating autophagy may be one of the few ways to potentially treat multiple protein-accumulation diseases regardless of the initial cause of pathogenesis. It’s definitely a fast-developing field worth keeping an eye on.
Wong E, & Cuervo AM (2010). Autophagy gone awry in neurodegenerative diseases. Nature neuroscience, 13 (7), 805-11 PMID: 20581817
Shoji-Kawata S, Sumpter R, Leveno M, Campbell G, Zou Z, Kinch L, Wilkins A, Sun Q, Pallauf K, MacDuff D, Huerta C, Virgin H, Helms J, Eerland R, Tooze S, Xavier R, Lenschow D, Yamamoto A, King D, Lichtarge O, Grishin N, Spector S, Kaloyanova D, & Levine B (2013). Identification of a candidate therapeutic autophagy-inducing peptide. Nature, 494 (7436), 201-206 DOI: 10.1038/nature11866