Neuroscience

Articles and news from the latest research reports.

How the brain coordinates speaking and breathing

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appeared in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

The researchers found that the RAm neurons labeled by synaptic tracing were strongly activated during USVs. This observation prompted the team to target these vocalization-specific RAm neurons, termed RAmVOC, with an activity-dependent method, and to use chemogenetics and optogenetics to explore what would happen if their activity was silenced or stimulated. When the researchers blocked the RAmVOC neurons, the mice could no longer produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under the control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

(Image credit: Image: Jose-Luis Olivares, MIT)

Filed under vocalization breathing brainstem phonation RAmVOC neurons neuroscience science

Hearing study: each nerve fiber trains on its own

A complex network of nerve fibers and synapses in the brain is responsible for transmission of information. When a nerve cell is stimulated, it generates signals in the form of electrochemical impulses, which propagate along the membrane of long nerve cell projections called axons. How quickly the information is transmitted depends on various factors such as the diameter of the axon. In vertebrates, where the comparatively large brain is enclosed in a compact cranium, another space-saving mechanism plays a major role: myelination. This involves the formation of a biomembrane that wraps around the axon and significantly accelerates the speed of signal transmission. The thicker this myelin sheath, the faster the transmission.
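To put rough numbers on that speed difference (an illustration, not a result from this study): a commonly cited rule of thumb is that conduction velocity in myelinated axons scales roughly linearly with fiber diameter, at about 6 m/s per micrometer, while thin unmyelinated axons conduct at only around 1 m/s. The short Python sketch below uses these approximate figures to compare signal travel times over a hypothetical 10 mm pathway; the constants and the path length are illustrative placeholders.

```python
# Illustrative only: rough textbook figures for axonal conduction speed,
# not measurements from the LMU study.
def conduction_delay_ms(path_length_mm: float, velocity_m_per_s: float) -> float:
    """Time in milliseconds for a spike to travel the given path length."""
    return path_length_mm / velocity_m_per_s  # mm / (m/s) equals ms numerically

diameter_um = 1.0                    # hypothetical fiber diameter
path_mm = 10.0                       # hypothetical pathway length

v_myelinated = 6.0 * diameter_um     # ~6 m/s per micrometer (rule of thumb)
v_unmyelinated = 1.0                 # ~1 m/s for a thin unmyelinated axon

print(f"Myelinated:   {conduction_delay_ms(path_mm, v_myelinated):.1f} ms")
print(f"Unmyelinated: {conduction_delay_ms(path_mm, v_unmyelinated):.1f} ms")
```

On these rough figures the myelinated fiber delivers its signal in under 2 ms versus about 10 ms without myelin, which is why thicker sheaths on regularly stimulated axons translate into measurably faster transmission.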

“Even though myelination is an integral part of neural processing in vertebrate brains, its adaptive properties have not yet been comprehensively understood,” says Dr. Conny Kopp-Scheinpflug, neurobiologist at the LMU Biocenter. She is the principal investigator of a study recently published in the journal Proceedings of the National Academy of Sciences (PNAS), which reveals new insights into the principles of myelination. The researchers investigated how sensory stimulation affects the formation of the myelin layers. “We know that axons which are regularly stimulated have enhanced myelin sheath thickness,” explains Dr. Mihai Stancu, lead author of the paper. Accordingly, regular training improves transmission capability. It was unknown, however, whether this change takes place at the level of individual nerve fibers or whether adaptive myelination is also transferred to neighboring, passive axons in a fiber bundle.

To answer this question, the scientists investigated the neural activity of mice. “We focused on the auditory system, because it allows separate activation of the left and right neural circuits,” explains Kopp-Scheinpflug. To this end, the team rendered the lab mice temporarily deaf in one ear by means of an earplug. This way, one ear received stronger acoustic stimulation than the plugged ear for the duration of the experiment. “Surprisingly, all the nerve fiber bundles we investigated in the brain contained axons that carried information from the right ear as well as axons transmitting information from the left ear,” says Stancu. The experimentally induced one-sided deafness allowed the researchers to test their hypothesis.

Their results showed that in the mixed nerve fiber bundles, only the myelin sheaths of the axons that belonged to the non-plugged, active ear were strengthened. Consequently, the active axons did not transfer adaptive changes in myelination to the other, passive fibers, even when they were located in close proximity. “The principle seems to hold that each axon trains on its own,” observes Kopp-Scheinpflug. “As such, the activity of one input channel cannot compensate for the deficits of another.” The authors conclude that varied sensory experience throughout the lifespan of a person is vitally important. “If you want to remain cognitively fit, you should give your brain comprehensive all-round training.”

(Source: lmu.de)

Filed under hearing nerve fibers myelination neural activity neuroscience science

Schizophrenia and aging may share a common biological basis

Researchers from the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, Harvard Medical School, and McLean Hospital have uncovered a strikingly similar suite of changes in gene activity in brain tissue from people with schizophrenia and from older adults. These changes suggest a common biological basis for the cognitive impairment often seen in people with schizophrenia and in the elderly. 

In a study published today in Nature, the team describes how they analyzed gene expression in more than a million individual cells from postmortem brain tissue from 191 people. They found that in individuals with schizophrenia and in older adults without schizophrenia, two brain cell types, astrocytes and neurons, reduced their expression of genes that support synapses, the junctions between neurons, compared with healthy or younger people. They also discovered tightly synchronized gene expression changes in the two cell types: when neurons decreased the expression of certain genes related to synapses, astrocytes similarly changed the expression of a distinct set of genes that support synapses.

The team called this coordinated set of changes the Synaptic Neuron and Astrocyte Program (SNAP). Even in healthy, young people, the expression of the SNAP genes always increased or decreased in a coordinated way in their neurons and astrocytes. 

“Science often focuses on what genes each cell type expresses on its own,” said Steve McCarroll, a co-senior author on the study and an institute member at the Broad Institute. “But brain tissue from many people, and machine-learning analyses of those data, helped us recognize a larger system. These cell types are not acting as independent entities, but have really close coordination. The strength of those relationships took our breath away.”

Schizophrenia is well known for causing hallucinations and delusions, which can be at least partly treated with medications. But it also causes debilitating cognitive decline, which has no effective treatments and is common in aging as well. The new findings suggest that the cognitive changes in both conditions might involve similar cellular and molecular alterations in the brain.

“To detect coordination between astrocytes and neurons in schizophrenia and aging, we needed to study tissue samples from a very large number of individuals,” said Sabina Berretta, a co-senior author of the study, an associate professor at Harvard Medical School, and a researcher in the field of psychiatric disorders. “Our gratitude goes to all donors who chose to donate their brain to research to help others suffering from brain disorders and to whom we’d like to dedicate this work.” 

McCarroll is also director of genomic neurobiology for the Broad’s Stanley Center for Psychiatric Research and a professor at Harvard Medical School. Berretta also directs the Harvard Brain Tissue Resource Center (HBTRC), which provided tissue for the study. Emi Ling, a postdoctoral researcher in McCarroll’s lab, was the study’s first author.

SNAP insights

The brain works in large part because neurons connect with other neurons at synapses, where they pass signals to one another. The brain constantly forms new synapses and prunes old ones. Scientists think new synapses help our brains stay flexible, and studies — including previous efforts by scientists in McCarroll’s lab and international consortia — have shown that many genetic factors linked to schizophrenia involve genes that contribute to the function of synapses. 

In the new study, McCarroll, Berretta, and colleagues used single-nucleus RNA sequencing, which measures gene expression in individual cells, to better understand how the brain naturally varies across individuals. They analyzed 1.2 million cells from 94 people with schizophrenia and 97 without. 

They found that when neurons boosted expression of genes that encode parts of synapses, astrocytes increased the expression of a distinct set of genes involved in synaptic function. These genes, which make up the SNAP program, included many previously identified risk factors for schizophrenia. The team’s analyses indicated that both neurons and astrocytes shape genetic vulnerability for the condition. 
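As a rough illustration of what such coordination means in practice (this is a toy sketch, not the team's actual analysis pipeline), one can score each donor's neurons and astrocytes on their own synaptic gene sets and then ask whether the two scores rise and fall together across donors. The Python example below does this on invented data; the gene names, donor counts, and expression values are hypothetical placeholders.

```python
# Hypothetical sketch of quantifying coordinated expression across cell types.
# Not the study's pipeline; gene sets, labels, and values are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
donors = [f"donor_{i}" for i in range(30)]
synaptic_genes = ["geneA", "geneB", "geneC"]          # placeholder gene set

rows = []
for donor, donor_level in zip(donors, rng.normal(size=len(donors))):
    # A shared per-donor factor stands in for SNAP-like coordination in this toy data.
    for cell_type in ["neuron", "astrocyte"]:
        for _ in range(50):                           # 50 cells per donor and cell type
            expr = donor_level + rng.normal(scale=0.5, size=len(synaptic_genes))
            rows.append({"donor": donor, "cell_type": cell_type,
                         **dict(zip(synaptic_genes, expr))})
cells = pd.DataFrame(rows)

# Program score = mean expression of the gene set, per donor and per cell type.
scores = (cells.groupby(["donor", "cell_type"])[synaptic_genes]
               .mean()
               .mean(axis=1)
               .unstack("cell_type"))

# Coordination shows up as a strong across-donor correlation of the two scores.
print(f"r = {scores['neuron'].corr(scores['astrocyte']):.2f}")
```

In this synthetic example the correlation is high by construction; the point is simply that "coordinated" means the neuronal and astrocytic program scores track each other across individuals, which is the signature the SNAP analysis describes.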

“Science has long known that neurons and synapses are important in risk for schizophrenia, but by framing the question a different way — asking what genes each cell type regulates dynamically — we found that astrocytes too are likely involved,” said Ling.

To their surprise, the researchers also found that SNAP varied greatly even among people without schizophrenia, suggesting that SNAP could be involved in cognitive differences in healthy humans. Much of this variation was explained by age; SNAP declined substantially in many — but not all — older individuals, including both people with and without schizophrenia. 

With a better understanding of SNAP, McCarroll says, he hopes it might be possible to identify life factors that positively influence SNAP and to develop medicines that stimulate it, as a way to treat the cognitive impairments of schizophrenia or to help people maintain their cognitive flexibility as they age.

In the meantime, McCarroll, Berretta, and their team are working to understand if these changes are present in other conditions such as bipolar disorder and depression. They also aim to uncover the extent to which SNAP appears in other brain areas, and how SNAP affects learning and cognitive flexibility.

(Source: broadinstitute.org)

Filed under aging schizophrenia astrocytes gene expression neuroscience science

Exposure to different kinds of music influences how the brain interprets rhythm

When listening to music, the human brain appears to be biased toward hearing and producing rhythms composed of simple integer ratios — for example, a series of four beats separated by equal time intervals (forming a 1:1:1 ratio).

However, the favored ratios can vary greatly between different societies, according to a large-scale study led by researchers at MIT and the Max Planck Institute for Empirical Aesthetics and carried out in 15 countries. The study included 39 groups of participants, many of whom came from societies whose traditional music contains distinctive patterns of rhythm not found in Western music.

“Our study provides the clearest evidence yet for some degree of universality in music perception and cognition, in the sense that every single group of participants that was tested exhibits biases for integer ratios. It also provides a glimpse of the variation that can occur across cultures, which can be quite substantial,” says Nori Jacoby, the study’s lead author and a former MIT postdoc, who is now a research group leader at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

The brain’s bias toward simple integer ratios may have evolved as a natural error-correction system that makes it easier to maintain a consistent body of music, which human societies often use to transmit information.

“When people produce music, they often make small mistakes. Our results are consistent with the idea that our mental representation is somewhat robust to those mistakes, but it is robust in a way that pushes us toward our preexisting ideas of the structures that should be found in music,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

McDermott is the senior author of the study, which appeared in Nature Human Behaviour. The research team also included scientists from more than two dozen institutions around the world.

A global approach

The new study grew out of a smaller analysis that Jacoby and McDermott published in 2017. In that paper, the researchers compared rhythm perception in groups of listeners from the United States and the Tsimane’, an Indigenous society located in the Bolivian Amazon rainforest.

To measure how people perceive rhythm, the researchers devised a task in which they play a randomly generated series of four beats and then ask the listener to tap back what they heard. The rhythm produced by the listener is then played back to the listener, who taps it back again. Over several iterations, the tapped sequences become dominated by the listener’s internal biases, also known as priors.

“The initial stimulus pattern is random, but at each iteration the pattern is pushed by the listener’s biases, such that it tends to converge to a particular point in the space of possible rhythms,” McDermott says. “That can give you a picture of what we call the prior, which is the set of internal implicit expectations for rhythms that people have in their heads.”
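To illustrate how such a serial-reproduction procedure can drift toward a prior (a toy simulation of the idea, not the authors' code or data), the sketch below repeatedly "reproduces" a random three-interval rhythm by nudging it toward the nearest of a few assumed simple-integer-ratio rhythms and adding a little motor noise. The set of prior rhythms, the pull strength, and the noise level are invented for illustration.

```python
# Toy simulation of the serial-reproduction ("tap back what you heard") idea.
# Not the study's method: priors, pull strength, and noise level are invented.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical prior: simple-integer-ratio rhythms the listener implicitly expects,
# stored as normalized interval patterns.
priors = [np.array(p, dtype=float) / sum(p) for p in [(1, 1, 1), (1, 1, 2), (2, 3, 3)]]

def nearest_prior(intervals):
    """Return the prior pattern closest to the (normalized) input intervals."""
    x = intervals / intervals.sum()
    return min(priors, key=lambda p: np.sum((x - p) ** 2))

intervals = rng.uniform(0.2, 0.8, size=3)            # random seed rhythm, in seconds
for it in range(8):
    target = nearest_prior(intervals) * intervals.sum()
    # Each reproduction is pulled partway toward the prior, plus small motor noise.
    intervals = 0.6 * intervals + 0.4 * target + rng.normal(scale=0.01, size=3)
    ratios = intervals / intervals.min()
    print(f"iteration {it + 1}: interval ratios ~ " +
          ":".join(f"{r:.2f}" for r in ratios))
```

After a handful of iterations the reproduced intervals settle near one of the assumed integer-ratio rhythms, mirroring how the real experiment reads out a listener's prior from where the tapped sequences converge.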

When the researchers first did this experiment, with American college students as the test subjects, they found that people tended to produce time intervals that are related by simple integer ratios. Furthermore, most of the rhythms they produced, such as those with ratios of 1:1:2 and 2:3:3, are commonly found in Western music.

The researchers then went to Bolivia and asked members of the Tsimane’ society to perform the same task. They found that Tsimane’ also produced rhythms with simple integer ratios, but their preferred ratios were different and appeared to be consistent with those that have been documented in the few existing records of Tsimane’ music.

“At that point, it provided some evidence that there might be very widespread tendencies to favor these small integer ratios, and that there might be some degree of cross-cultural variation. But because we had just looked at this one other culture, it really wasn’t clear how this was going to look at a broader scale,” Jacoby says.

To try to get that broader picture, the MIT team began seeking collaborators around the world who could help them gather data on a more diverse set of populations. They ended up studying listeners from 39 groups, representing 15 countries on five continents — North America, South America, Europe, Africa, and Asia.

“This is really the first study of its kind in the sense that we did the same experiment in all these different places, with people who are on the ground in those locations,” McDermott says. “That hasn’t really been done before at anything close to this scale, and it gave us an opportunity to see the degree of variation that might exist around the world.”

Cultural comparisons

Just as they had in their original 2017 study, the researchers found that in every group they tested, people tended to be biased toward simple integer ratios of rhythm. However, not every group showed the same biases. People from North America and Western Europe, who have likely been exposed to the same kinds of music, were more likely to generate rhythms with the same ratios. Many other groups, for example those in Turkey, Mali, Bulgaria, and Botswana, showed a bias for other rhythms.

“There are certain cultures where there are particular rhythms that are prominent in their music, and those end up showing up in the mental representation of rhythm,” Jacoby says.

The researchers believe their findings reveal a mechanism that the brain uses to aid in the perception and production of music.

“When you hear somebody playing something and they have errors in their performance, you’re going to mentally correct for those by mapping them onto where you implicitly think they ought to be,” McDermott says. “If you didn’t have something like this, and you just faithfully represented what you heard, these errors might propagate and make it much harder to maintain a musical system.”

Among the groups that they studied, the researchers took care to include not only college students, who are easy to study in large numbers, but also people living in traditional societies, who are more difficult to reach. Participants from those more traditional groups showed significant differences from college students living in the same countries, and from people who live in those countries but performed the test online.

“What’s very clear from the paper is that if you just look at the results from undergraduate students around the world, you vastly underestimate the diversity that you see otherwise,” Jacoby says. “And the same was true of experiments where we tested groups of people online in Brazil and India, because you’re dealing with people who have internet access and presumably have more exposure to Western music.”

The researchers now hope to run additional studies of different aspects of music perception, taking this global approach.

“If you’re just testing college students around the world or people online, things look a lot more homogenous. I think it’s very important for the field to realize that you actually need to go out into communities and run experiments there, as opposed to taking the low-hanging fruit of running studies with people in a university or on the internet,” McDermott says.

Filed under music rhythms perception musical intervals neuroscience science

Opening a window on the brain

The human brain has billions of neurons. Working together, they enable higher-order brain functions such as cognition and complex behaviors. To study these higher-order brain functions, it is important to understand how neural activity is coordinated across various brain regions. Although techniques such as functional magnetic resonance imaging (fMRI) can provide insights into brain activity, they capture only limited information for a given time and area. Two-photon microscopy using cranial windows is a powerful tool for producing high-resolution images, but conventional cranial windows are small, making it difficult to study distant brain regions at the same time.

Now, a team of researchers led by the Exploratory Research Center on Life and Living Systems (ExCELLS) and the National Institute for Physiological Sciences (NIPS) has introduced a new method for in vivo brain imaging, enabling large-scale and long-term observation of neuronal structures and activities in awake mice. This method, called the “nanosheet incorporated into light-curable resin” (NIRE) method, uses fluoropolymer nanosheets covered with light-curable resin to create larger cranial windows.

“The NIRE method is superior to previous methods because it produces larger cranial windows than previously possible, extending from the parietal cortex to the cerebellum, utilizing the biocompatible nanosheet and the transparent light-curable resin that changes in form from liquid to solid,” says lead author Taiga Takahashi of the Tokyo University of Science and ExCELLS.

In the NIRE method, light-curable resin is used to fix polyethylene-oxide–coated CYTOP (PEO-CYTOP), a bioinert and transparent nanosheet, onto the brain surface. This creates a “window” that fits tightly onto the brain surface, even the highly curved surface of the cerebellum, and maintains its transparency for a long time with little mechanical stress, allowing researchers to observe multiple brain regions of living mice.

“Additionally, we showed that the combination of PEO-CYTOP nanosheets and light-curable resin enabled the creation of stronger cranial windows with greater transparency for longer periods of time compared with our previous method. As a result, there were few motion artifacts, that is, distortions in the images caused by the movements of awake mice,” says Takahashi.

The cranial windows allowed for high-resolution imaging with sub-micrometer resolution, making them suitable for observing the morphology and activity of fine neural structures.

“Importantly, the NIRE method enables imaging to be performed for a longer period of more than 6 months with minimal impact on transparency. This should make it possible to conduct longer-term research on neuroplasticity at various levels—from the network level to the cellular level—as well as during maturation, learning, and neurodegeneration,” explains corresponding author Tomomi Nemoto at ExCELLS and NIPS.

This study is a significant achievement in the field of neuroimaging because this novel method provides a powerful tool for researchers to investigate neural processes that were previously difficult or impossible to observe. Specifically, the NIRE method’s ability to create large cranial windows with prolonged transparency and fewer motion artifacts should allow for large-scale, long-term, and multi-scale in vivo brain imaging.

“The method holds promise for unraveling the mysteries of neural processes associated with growth and development, learning, and neurological disorders. Potential applications include investigations into neural population coding, neural circuit remodeling, and higher-order brain functions that depend on coordinated activity across widely distributed regions,” says Nemoto.

In sum, the NIRE method provides a platform for investigating neuroplastic changes at various levels over extended periods in animals that are awake and engaged in various behaviors, which presents new opportunities to enhance our understanding of the brain’s complexity and function.

(Image caption: In vivo two-photon imaging through a large cranial window extending from the cerebral cortex to cerebellum. Credit: ExCELLS/NINS)

Filed under neural activity brain imaging cerebral cortex NIRE neuroscience science

Air pollution particles linked to development of Alzheimer’s

Magnetite, a tiny particle found in air pollution, can induce signs and symptoms of Alzheimer’s disease, new research suggests.

Alzheimer’s disease, a type of dementia, leads to memory loss, cognitive decline, and a marked reduction in quality of life. It impacts millions globally and is a leading cause of death in older individuals.

The study, “Neurodegenerative effects of air pollutant particles: Biological mechanisms implicated for early-onset Alzheimer’s disease,” led by Associate Professor Cindy Gunawan and Associate Professor Kristine McGrath from the University of Technology Sydney (UTS), was recently published in Environment International.

The research team, from UTS, UNSW Sydney and the Agency for Science, Technology and Research in Singapore, examined the impact of air pollution on brain health in mice, as well as in human neuronal cells in the lab. 

Their aim was to better understand how exposure to toxic air pollution particles could lead to Alzheimer’s disease. 

“Fewer than 1% of Alzheimer’s cases are inherited, so it is likely that the environment and lifestyle play a key role in the development of the disease,” said Associate Professor Gunawan, from the Australian Institute for Microbiology and Infection (AIMI).

“Previous studies have indicated that people who live in areas with high levels of air pollution are at greater risk of developing Alzheimer’s disease. Magnetite, a magnetic iron oxide compound, has also been found in greater amounts in the brains of people with Alzheimer’s disease. 

“However, this is the first study to look at whether the presence of magnetite particles in the brain can indeed lead to signs of Alzheimer’s,” she said. 

The researchers exposed healthy mice and those genetically predisposed to Alzheimer’s to very fine particles of iron, magnetite, and diesel hydrocarbons over four months. They found that magnetite induced the most consistent Alzheimer’s disease pathologies.

This included the loss of neuronal cells in the hippocampus, an area of the brain crucial for memory, and in the somatosensory cortex, an area that processes sensations from the body. Increased formation of amyloid plaque was seen in mice already predisposed to Alzheimer’s.

The researchers also observed behavioural changes in the mice that were consistent with Alzheimer’s disease, including increased stress and anxiety and short-term memory impairment, the latter particularly in the genetically predisposed mice.

“Magnetite is a quite common air pollutant. It comes from high-temperature combustion processes like vehicle exhaust, wood fires and coal-fired power stations as well as from brake pad friction and engine wear,” said Associate Professor McGrath from the UTS School of Life Sciences.

“When we inhale air pollutants, these particles of magnetite can enter the brain via the lining of the nasal passage, and from the olfactory bulb, a small structure on the bottom of the brain responsible for processing smells, bypassing the blood-brain barrier,” she said.

The researchers found that magnetite induced an immune response in the mice and in the human neuronal cells in the lab. It triggered inflammation and oxidative stress, which in turn led to cell damage. Inflammation and oxidative stress are significant factors known to contribute to dementia.

“The magnetite-induced neurodegeneration is also independent of the disease state, with signs of Alzheimer’s seen in the brains of healthy mice,” said Dr Charlotte Fleming, a co-first author from the UTS School of Life Sciences.

The results will be of interest to health practitioners and policymakers. They suggest that people should take steps to reduce their exposure to air pollution as much as possible, and should consider methods to improve air quality and reduce the risk of neurodegenerative disease.

The study has implications for air pollution guidelines. Magnetite particles should be included in recommended air quality safety thresholds, and increased measures to reduce emissions from vehicles and coal-fired power stations are also needed.

(Source: uts.edu.au)

Filed under alzheimer's disease magnetite neurodegeneration air pollution oxidative stress inflammation neuroscience science

Research Shows Continued Cocaine Use Disrupts Communication Between Major Brain Networks

A collaborative research endeavor by scientists in the Departments of Radiology, Neurology, and Psychology and Neuroscience at the UNC School of Medicine has demonstrated the deleterious effects of chronic cocaine use on functional networks in the brain.

Their study, titled “Network Connectivity Changes Following Long-Term Cocaine Use and Abstinence,” was highlighted by the editor of the Journal of Neuroscience in “This Week in The Journal.” The findings show that continued cocaine use affects how crucial neural networks communicate with one another in the brain, including the default mode network (DMN), the salience network (SN), and the lateral cortical network (LCN).

“The disrupted communication between the DMN and SN can make it harder to focus, control impulses, or feel motivated without the drug,” said Li-Ming Hsu, PhD, assistant professor of radiology and lead author on the study. “Essentially, these changes can impact how well they respond to everyday situations, making recovery and resisting cravings more challenging.”

Hsu led this project during his postdoctoral tenure at the Center for Animal MRI in the Biomedical Research Imaging Center and the Department of Neurology. The work provides new insights into the brain processes that underlie cocaine addiction and creates opportunities for the development of therapeutic approaches and the identification of an imaging marker for cocaine use disorders.

The brain operates like an orchestra, where each instrumentalist has a special role crucial for creating a coherent piece of music. Specific parts of the brain need to work together to complete a task. The DMN is active during daydreams and reflections, the SN is crucial for attentiveness, and the LCN, much like a musical conductor, plays a role in our decision-making and problem-solving.

The research was motivated by observations from human functional brain imaging studies suggesting chronic cocaine use alters connectivity within and between the major brain networks. Researchers needed a longitudinal animal model to understand the relationship between brain connectivity and the development of cocaine dependence, as well as changes during abstinence.

Researchers employed a rat model to mimic human addiction patterns, allowing the animals to self-administer the drug by nose poke. Paired with advanced neuroimaging techniques, this behavioral approach enables a deeper understanding of the brain’s adaptation to prolonged drug use and highlights how addictive substances can alter the functioning of critical brain networks.

Hsu’s research team used functional MRI scans to explore the changes in brain network dynamics in rats that self-administered cocaine. Over a period of 10 days followed by abstinence, the researchers observed significant alterations in network communication, particularly between the DMN and SN.

These changes were more pronounced with increased cocaine intake over the 10 days of self-administration, suggesting a potential target for reducing cocaine cravings and aiding those in recovery. The changes in these networks’ communication could also serve as useful imaging biomarkers for cocaine addiction.

The study also offered novel insights into the anterior insular cortex (AI) and retrosplenial cortex (RSC). The former is involved in emotional and social processing, whereas the latter is involved in episodic memory, navigation, and imagining future events. Researchers noted that there was a difference in coactivity between these two regions before and after cocaine intake. This circuit could be a potential target for modulating associated behavioral changes in cocaine use disorders.

“Prior studies have demonstrated functional connectivity changes with cocaine exposure; however, the detailed longitudinal analysis of specific brain network changes, especially between the anterior insular cortex (AI) and retrosplenial cortex (RSC), before and after cocaine self-administration, and following extended abstinence, provides new insights,” said Hsu.

(Source: news.unchealthcare.org)

Filed under default mode network cocaine addiction functional connectivity fMRI neuroscience science

Neurons help flush waste out of brain during sleep

There lies a paradox in sleep. Its apparent tranquility is at odds with the brain’s bustling activity. The night is still, but the brain is far from dormant. During sleep, brain cells produce bursts of electrical pulses that combine into rhythmic waves – a sign of heightened brain cell function.

But why is the brain active when we are resting?

Slow brain waves are associated with restful, refreshing sleep. And now, scientists at Washington University School of Medicine in St. Louis have found that brain waves help flush waste out of the brain during sleep. Individual nerve cells coordinate to produce rhythmic waves that propel fluid through dense brain tissue, washing the tissue in the process.

“These neurons are miniature pumps. Synchronized neural activity powers fluid flow and removal of debris from the brain,” explained first author Li-Feng Jiang-Xie, PhD, a postdoctoral research associate in the Department of Pathology & Immunology. “If we can build on this process, there is the possibility of delaying or even preventing neurological diseases, including Alzheimer’s and Parkinson’s disease, in which excess waste – such as metabolic waste and junk proteins – accumulate in the brain and lead to neurodegeneration.”

The findings are published Feb. 28 in Nature.

Brain cells orchestrate thoughts, feelings and body movements, and form dynamic networks essential for memory formation and problem-solving. But to perform such energy-demanding tasks, brain cells require fuel. Their consumption of nutrients from the diet creates metabolic waste in the process.

“It is critical that the brain disposes of metabolic waste that can build up and contribute to neurodegenerative diseases,” said Jonathan Kipnis, PhD, the Alan A. and Edith L. Wolff Distinguished Professor of Pathology & Immunology and a BJC Investigator. Kipnis is the senior author on the paper. “We knew that sleep is a time when the brain initiates a cleaning process to flush out waste and toxins it accumulates during wakefulness. But we didn’t know how that happens. These findings might be able to point us toward strategies and potential therapies to speed up the removal of damaging waste and to remove it before it can lead to dire consequences.”

But cleaning the dense brain is no simple task. Cerebrospinal fluid surrounding the brain enters and weaves through intricate cellular webs, collecting toxic waste as it travels. Upon exiting the brain, contaminated fluid must pass through a barrier before spilling into the lymphatic vessels in the dura mater – the outer tissue layer enveloping the brain underneath the skull. But what powers the movement of fluid into, through and out of the brain?

Studying the brains of sleeping mice, the researchers found that neurons drive cleaning efforts by firing electrical signals in a coordinated fashion to generate rhythmic waves in the brain, Jiang-Xie explained. They determined that such waves propel the fluid movement.

The research team silenced specific brain regions so that neurons in those regions didn’t create rhythmic waves. Without these waves, fresh cerebrospinal fluid could not flow through the silenced brain regions and trapped waste couldn’t leave the brain tissue.

“One of the reasons that we sleep is to cleanse the brain,” Kipnis said. “And if we can enhance this cleansing process, perhaps it’s possible to sleep less and remain healthy. Not everyone has the benefit of eight hours of sleep each night, and loss of sleep has an impact on health. Other studies have shown that mice that are genetically wired to sleep less have healthy brains. Could it be because they clean waste from their brains more efficiently? Could we help people living with insomnia by enhancing their brain’s cleaning abilities so they can get by on less sleep?”

Brain wave patterns change throughout sleep cycles. Of note, taller brain waves with larger amplitude move fluid with more force. The researchers are now interested in understanding why neurons fire waves with varying rhythmicity during sleep and which regions of the brain are most vulnerable to waste accumulation.

“We think the brain-cleaning process is similar to washing dishes,” neurobiologist Jiang-Xie explained. “You start, for example, with a large, slow, rhythmic wiping motion to clean soluble wastes splattered across the plate. Then you decrease the range of the motion and increase the speed of these movements to remove particularly sticky food waste on the plate. Despite the varying amplitude and rhythm of your hand movements, the overarching objective remains consistent: to remove different types of waste from dishes. Maybe the brain adjusts its cleaning method depending on the type and amount of waste.”

(Source: medicine.wustl.edu)

Filed under neural activity brain clearance cerebrospinal fluid brainwaves neuroscience science

More than just neurons: A new model for studying human brain inflammation

The brain is typically depicted as a complex web of neurons sending and receiving messages. But neurons make up only half of the human brain. The other half—roughly 85 billion cells—are non-neuronal cells called glia. The most common glial cells are astrocytes, which are important for supporting neuronal health and activity. Despite this, most existing laboratory models of the human brain fail to include astrocytes at sufficient levels, or at all, which limits the models’ utility for studying brain health and disease.

Now, Salk scientists have created a novel organoid model of the human brain—a three-dimensional collection of cells that mimics features of human tissues—that contains mature, functional astrocytes. With this astrocyte-rich model, researchers will be able to study inflammation and stress in aging and diseases like Alzheimer’s with greater clarity and depth than ever before. Already, the researchers have used the model to reveal a relationship between astrocyte dysfunction and inflammation, as well as a potentially druggable target for disrupting that relationship.

The findings were published in Nature Biotechnology.

“Astrocytes are the most abundant type of glial cell in the brain, yet they have been underrepresented in organoid models of the brain,” says senior author Rusty Gage, professor and Vi and John Adler Chair for Research on Age-Related Neurodegenerative Disease at Salk. “Our model rectifies this deficit, offering a glial-enriched human brain organoid that can be used to explore the many ways that astrocytes are essential for brain function, and how they respond to stress and inflammation in various neurological conditions.”

In the last 10 years, organoids have emerged as a prevalent tool to bridge the gap between cell and human studies. Organoids can mimic human development and organ generation better than other laboratory systems, allowing researchers to study how drugs or diseases affect human cells in a more realistic setting. Brain organoids are typically grown in culture dishes, but their limited capacity to efficiently produce certain brain cells like astrocytes has remained problematic.

Astrocytes develop through the same pathway as neurons, beginning as a neural stem cell until a molecular switch flips and turns the cell’s fate from neuron to astrocyte. To create a brain organoid with abundant astrocyte populations, the team looked for a way to trigger this switch.

To do this, the researchers delivered specific gliogenic compounds to the organoid, looking to see if they would promote astrocyte formation. The team then began running tests to see whether astrocytes had developed and, if they had, how many and to what extent they had matured.

The brain organoids cultured in a dish still lacked the microenvironment and the neuronal structural arrangement of a human brain. To create a more human brain-like environment, researchers transplanted the organoids into mouse models, allowing them to further develop over several months.

“Our transplanted organoid model produced more sophisticated and differentiated astrocyte populations than would have been possible with older models,” says co-first author Lei Zhang, a former postdoctoral researcher in Gage’s lab. “What was really exciting is that we observed order in the organoids. The organization of functional cell groups in the human brain is very difficult to mimic in a laboratory setting, but these astrocytes in our organoid model were doing just that.”

After observing astrocyte subtype development and maturation in the transplanted organoids, the researchers aimed to investigate the role of astrocytes in the process of neuroinflammation. Aging and age-related neurological diseases have strong ties to the immune system and inflammation, and whether astrocytes are also involved in this relationship has long been a question for neuroscientists.

To test this, the researchers introduced a proinflammatory compound into the transplanted organoids and found that a subtype of astrocytes became activated and promoted further proinflammatory pathways. Additionally, they found that a molecule called CD38 was crucial in mediating metabolic and energetic stress in those reactive astrocytes. Knowing CD38 signaling plays this important role suggests that CD38 inhibitors may be able to alleviate the neuroinflammation and related stresses caused by these reactive astrocytes, says Gage.

“We have created a human brain model for research that is more similar to its real-life counterpart than ever before—it has all the major astrocyte subclasses found in the human cortex,” says co-first author Meiyan Wang, a postdoctoral researcher in Gage’s lab. “With this model, we have already found a link between inflammation and astrocyte dysfunction and, in the process, revealed CD38 as a potentially druggable target to disrupt that link.”

Their findings build on another recent model developed in the lab that featured a different glial cell type, called microglia. While this astrocyte-rich model is the most advanced yet, the team is already looking to improve and expand on their organoid model by incorporating additional brain cell types and promoting further cell maturation. In the meantime, they aim to use the sophisticated model to investigate brain function and dysfunction in new detail, with the hopes that their findings will lead to new interventions and therapeutics for neurological conditions like Alzheimer’s disease.

(Image caption: Human astrocytes (green) extending processes that wrap around the host blood vessel (magenta). Credit: Salk Institute)

Filed under astrocytes glial cells inflammation organoids alzheimer's disease neuroscience science

How ketamine acts fast and slow

New treatments for depression are needed that act rapidly and also have sustained effects. Ketamine accomplishes this, but toxic side effects limit its long-term use. Scientists haven’t understood how ketamine was able to do both, which hindered drug development.

A new Northwestern Medicine study brings that goal one step closer. This work identifies mechanisms that enable ketamine to work rapidly and also have long-term effects. Both the short-term and the longer-term effects involve newborn neurons. However, the short-term effects depend on the activity of new neurons that had already been born when the drug was taken, while the longer-term effects are due to an increased number of newborn neurons generated in response to the drug.

“This study is exciting, because it lays the groundwork for development of non-toxic treatments that exert antidepressant effects within hours like ketamine but that also have the longer-term sustained effects necessary for the treatment of depression,” said senior study author Dr. John Kessler, professor of neurology at Northwestern University Feinberg School of Medicine. “This is a tremendous advance for the field.”

The study was published in Cellular and Molecular Life Sciences.

Ketamine differs from most antidepressants because it produces antidepressant effects within hours instead of weeks like most other medications. This is enormously helpful for patients, potentially reducing their risk of death and suicide in the short term. But the drug’s toxic side effects limit its longer-term use.

Corresponding study author Dr. Radhika Rawat, a former research fellow in Kessler’s lab and a third-year medical student at Feinberg, had previously discovered that ketamine’s ability to produce a rapid antidepressant effect results from stimulating the activity of newborn neurons so that they fire more rapidly, sending more messages to the rest of the brain.

In Rawat’s new study, she investigated two questions: how does the sustained effect of ketamine work, and is it different from the rapid effect? She found the sustained effect is, indeed, different: it works by increasing the number of immature neurons showing this heightened activity and firing.

“To make an analogy, think of the young neurons as ‘teenagers’ who are texting their friends. Increasing the number of text messages spreads information rapidly — that is how ketamine acts rapidly. Increasing the number of teenagers also increases the spread of information, but it takes time for them to be born and mature — that is why there are delayed but longer-term effects.”

Rawat also found that the longer-term effects of ketamine occur by acting on the BMP (bone morphogenetic protein) signaling pathway in the hippocampus. The Kessler lab has previously shown that decreased BMP signaling is a common pathway for the action of standard antidepressants. The new study shows this is also true for the sustained effect of ketamine.

(Source: news.northwestern.edu)

Filed under ketamine antidepressants depression hippocampus neuroscience science
