Understanding Alzheimer’s disease and related dementia through high-resolution MRI

This Science Short, written by Pulkit Khandelwal, summarizes a manuscript currently under review at Imaging Neuroscience, with a preprint available at https://arxiv.org/abs/2303.12237.

Dementia refers to the loss of memory, language, problem-solving, and other thinking abilities to a degree that interferes with daily life. Dementia can be caused by several neurodegenerative diseases, such as Alzheimer’s disease, Lewy body dementia, and frontotemporal dementia, among many others. In most patients, these diseases involve multiple pathological processes that jointly contribute to neurodegeneration. For example, Alzheimer’s disease (AD) pathology is characterized by abnormal deposits of two proteins, tau (which forms tangles) and beta-amyloid (which forms plaques), that are thought to drive neurodegeneration and cognitive decline. But many patients diagnosed at autopsy with AD also exhibit vascular disease or other abnormal proteins that cannot be reliably detected by imaging live patients. This makes it difficult for clinicians to determine to what extent cognitive decline in an individual patient is driven by AD versus other factors. Clinicians therefore rely upon biomarkers, which are quantifiable characteristics that can help detect and track diseases. For example, it is known that the thickness of the cortical gray matter (the outer, spaghetti-like folded structure of the brain) decreases over time in patients diagnosed with AD, so a clinician can look at the thickness of the cortex and judge the prognosis of AD in a patient. The recent modest successes of AD treatments in clinical trials make it even more important to derive biomarkers that can detect and quantify mixed pathology, so that treatments can be prioritized for those most likely to benefit from them.

Associations between pathologies and measures of neurodegeneration, such as the thickness and volume of the cortical gray matter or the subcortical structures, can identify patterns of neurodegeneration linked to specific pathologies. Such patterns can then help us create specific biomarkers for early detection of the disease. For example, as mentioned earlier, a decrease in cortical thickness is associated with Alzheimer’s disease, and accumulation of proteins in the medial temporal lobe is linked with early stages of AD. These associations were found using imaging in living (antemortem) patients, but our understanding of biomarkers can be greatly aided by examining brain tissue after death, that is, postmortem. After the autopsy of a person who has succumbed to AD, we can look at the brain tissue under the microscope (histology) and also obtain high-resolution magnetic resonance imaging (MRI) scans. Both techniques can help us understand the build-up of the different pathological proteins contributing to the disease. Postmortem MRI is advantageous because it allows imaging at much greater resolution than antemortem MRI; this lets us examine structure-pathology associations with greater granularity and visualize detailed, intricate neuroanatomy.

Given the rising use of high-resolution postmortem MRI in neurodegenerative disease research, automated techniques are needed to analyze such growing datasets effectively. However, most existing approaches focus on antemortem MRI, and there is limited work on developing automated methods for postmortem MRI analysis. Therefore, in this study, we developed a mathematical framework and a set of software tools to produce accurate delineations of the cortical gray matter, subcortical structures (caudate, putamen, globus pallidus, thalamus), white matter (WM), and white matter hyperintensities (WMH) in a set of 135 high-resolution 7 tesla postmortem MRI scans of whole brain hemispheres. In other words, we developed a way to measure specific brain regions in high-resolution MRI. These data are part of several ongoing and previous clinical research programs at various laboratories within Penn Medicine and Penn Engineering, where dementia patients are treated and then followed to autopsy after death to obtain imaging and pathology. This heterogeneous cohort spanned patients with a variety of diseases, including AD, amyotrophic lateral sclerosis, cerebrovascular disease, Lewy body disease, and frontotemporal lobar degeneration, to name a few. See the paper for more details.
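
To give a concrete sense of what such delineations make possible, here is a minimal, illustrative sketch of how a region's volume can be computed from a segmentation "label map," where every voxel (3D pixel) is tagged with an integer code for the region it belongs to. The label numbering and the 0.3 mm voxel size below are assumptions made for illustration, not the actual conventions or resolution of the study's pipeline.

```python
from collections import Counter

# Hypothetical label scheme: each voxel of the segmentation holds one of
# these integer codes. The codes and the 0.3 mm isotropic voxel size are
# illustrative assumptions, not the study's actual conventions.
LABELS = {1: "cortical gray matter", 2: "caudate", 3: "putamen",
          4: "globus pallidus", 5: "thalamus", 6: "white matter"}
VOXEL_MM = 0.3  # edge length of one cubic voxel, in millimeters

def region_volumes_mm3(voxels):
    """Count voxels per label and convert the counts to volumes in mm^3."""
    counts = Counter(voxels)
    voxel_volume = VOXEL_MM ** 3  # 0.027 mm^3 per voxel
    return {name: counts.get(label, 0) * voxel_volume
            for label, name in LABELS.items()}

# Toy example: 500 gray-matter voxels and 500 unlabeled background voxels.
volumes = region_volumes_mm3([1] * 500 + [0] * 500)
print(round(volumes["cortical gray matter"], 3))  # 13.5 (i.e., 500 * 0.027)
```

In practice such maps live in 3D image files and are processed with imaging libraries, but the core bookkeeping, counting labeled voxels and scaling by voxel size, is exactly this simple.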

We studied regional patterns of association, at several anatomical locations, between cortical thickness and the underlying neuropathology ratings of proteins such as tau and amyloid-beta, as well as measures of neuronal loss and Braak staging (a system used to track the progression of AD pathology through the brain). As expected, we found that decreases in cortical thickness and in the volume of the studied brain regions were associated with greater protein build-up, greater neuronal loss, and higher Braak stages in areas associated with AD, such as the subregions of the hippocampus and medial temporal lobe. Postmortem MRI will therefore be helpful for validating and refining measures derived from antemortem studies. Using the developed methods and tools, future work will be able to study links between structural changes and other local processes beyond pathology, including inflammatory markers and gene expression.
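
One common way to quantify an association between a continuous measure (like thickness in millimeters) and an ordinal rating (like a 0-3 pathology severity score) is a rank correlation, which asks whether higher ratings tend to go with thinner cortex without assuming a straight-line relationship. The sketch below is purely illustrative: the numbers are invented, and the study's actual statistical modeling is more involved than this.

```python
def ranks(values):
    """Assign 1-based ranks to values, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j to cover the whole run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation computed on ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented toy data: thinner cortex tends to co-occur with higher tau rating.
thickness = [3.1, 2.8, 2.9, 2.4, 2.2, 2.0]  # mm, hypothetical
tau_rating = [0, 1, 1, 2, 3, 3]             # ordinal 0-3, hypothetical
print(round(spearman(thickness, tau_rating), 2))  # -0.97, a strong negative association
```

A value near -1 means thickness almost always falls as the rating rises, which is the direction of effect the study reports for AD-related regions.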


Pulkit Khandelwal is a 5th year Bioengineering PhD student in the Penn Image Computing and Science Laboratory at the University of Pennsylvania under the supervision of Prof. Paul Yushkevich. He is interested in medical image analysis, particularly in image segmentation and registration problems, to aid image-guided surgery and to develop computational methods and tools for understanding neurodegenerative diseases. Previously, he completed his master’s in computer science in the Shape Analysis Group at McGill University under the supervision of Prof. Kaleem Siddiqi and Prof. Louis Collins.

The Challenge in Decarbonizing Cement

 This Science Short, written by Maxwell Pisciotta, summarizes a section of: Pisciotta, M., Pilorgé, H., Feldmann, J., Jacobson, R., Davids, J., Swett, S., Sasso, Z., & Wilcox, J. (2022). Current state of industrial heating and opportunities for decarbonization. Progress in Energy and Combustion Science, 91, 100982. https://doi.org/10.1016/j.pecs.2021.100982.

Carbon dioxide (CO2) emissions have been accumulating in the atmosphere, contributing to the rise in global average temperature and to climate change. To avoid the worst effects of climate change, warming needs to be limited to 1.5ºC, which will require reducing CO2 emissions, or decarbonization. In some sectors, decarbonization will require replacing existing industries or facilities with others. An example would be replacing coal or natural gas power plants with solar or wind energy. These renewable sources generate and deliver electricity without the CO2 emissions that would have come from burning fossil fuels. In other sectors, such as heavy industry, the replacement is not as straightforward.

Take cement, for example. Cement is the building block of concrete and asphalt, and is used in the foundations of buildings, sidewalks, and streets. Cement is a mixture of calcium (Ca) and silica (SiO2), but not one that exists naturally; it must be manufactured. Cement is made by taking limestone (CaCO3), a very abundant resource, and clay, heating them in a large kiln, and fusing them together to get the calcium-silicate mixture for cement. This process produces CO2 emissions not only from the fossil fuels (coal or natural gas) used to heat the kiln, but also from the limestone feedstock itself: as the limestone is heated, it breaks apart into calcium oxide (CaO) and CO2. This means that replacing the fossil fuel energy with renewable energy will not eliminate the CO2 emissions from this process. Emissions from cement production account for 5-8% of all CO2 emissions globally. The chemistry of the limestone feedstock makes this a tricky industry to decarbonize, not to mention how important cement is to the built environment.
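
The limestone step described above, called calcination, can be written as a simple chemical equation (this is standard chemistry, not specific to the paper):

```latex
\mathrm{CaCO_3} \;\xrightarrow{\ \text{heat}\ }\; \mathrm{CaO} + \mathrm{CO_2}
```

Because the molar mass of limestone (CaCO3) is about 100 g/mol and that of CO2 is about 44 g/mol, roughly 44% of the limestone's mass leaves the kiln as CO2 no matter what fuel provides the heat. This is why switching to renewable heat alone cannot eliminate these "process" emissions.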

There is still reason for optimism. There are naturally occurring rocks and minerals that may be used in concrete to replace cement, which would mean that less cement would need to be produced. Also, although carbon capture and storage (CCS) has come under great scrutiny for its proposed use with fossil-derived electricity generation, the cement industry would benefit greatly from this technology. The flue gas from the cement kiln would enter a column that houses specific chemistry that selectively extracts the CO2 from this stream, allowing the other gases (oxygen, nitrogen, and water) to pass through the stack into the atmosphere. When the chemistry is “full” of CO2 and cannot capture any more, it is heated to release the CO2 for storage. The chemistry can then be recycled and used again to continue capturing additional CO2.

Furthermore, designing cement kilns that can run with an oxygen-enriched atmosphere can simplify the process of separating the CO2 from the environmentally benign gases. When oxygen, nitrogen, and water are all present in the cement flue gas, one of the difficulties is separating the CO2 from the nitrogen, because the two molecules are very close in size and both condense only at very low temperatures. With an oxygen-enriched or pure-oxygen atmosphere in the cement kiln, however, the flue gas will no longer contain nitrogen, just CO2 and water. This means the flue gas can be cooled to room temperature, where the water condenses into a liquid and can easily be separated from the CO2.

Both of these methods have the potential to address the CO2 generated in making cement, but they do not necessarily reduce the use of fossil fuels in the process. At the volume that cement is currently produced, fossil fuels are one of the only practical ways to heat the vast quantities of limestone required. However, replacing some of the coal currently used in cement production with waste biomass has been shown to yield cement of the same quality, and it has the potential to reduce fossil fuel use. Nearly 20% of the coal can be replaced with drop-in waste biomass; to push that percentage higher, the biomass may need to be pretreated to make its properties more similar to those of coal. Waste biomass with high carbon content that is well positioned to replace coal includes nut shells and fruit pits. When these are burned, they still release CO2, but this CO2 comes from carbon within the biomass, that is, carbon the plants absorbed from the atmosphere during their lifetimes, so burning them ultimately adds no new CO2 to the atmosphere. To fully account for the CO2 reduction from using waste biomass, a lifecycle assessment would need to be conducted to include any emissions from biomass pretreatment and transport.

Decarbonizing the cement industry is not as straightforward as some other proposed climate solutions, and it may benefit from technologies that remain controversial, such as carbon capture and storage. It is therefore important that cement facilities evaluate the resources near their operations and base their decarbonization decisions on what is available to them. This is particularly important if CO2 storage sites are not nearby or if very little waste biomass is available. These natural constraints may make a specific decarbonization strategy more attractive than another in different regions. Differences in resources, and therefore in available strategies, will also shape each state’s roadmap for decarbonizing its industrial activities.

Max Pisciotta is a 3rd year PhD candidate in the chemical engineering department and their research is focused on decarbonization, carbon capture, and carbon removal technologies. Prior to Penn, they earned their B.S. and M.S. in Mechanical Engineering from Colorado School of Mines.

The Clean Energy Conversions Lab at Penn focuses on carbon management, specifically looking to limit atmospheric accumulation of carbon dioxide, opportunities for storing and using carbon dioxide once it's captured, and the industrial scale up of these solutions.

Educators’ familiarity and use of evidence-based practices for inclusion and retention of autistic students

This Science Short, written up by Alyssa Hernandez, summarizes the findings of: Locke, J., Hernandez, A. M., Joshi, M., Hugh, M. L., Bravo, A., Osuna, A., & Pullmann, M. D. (2022). Supporting the inclusion and retention of autistic students: Exploring teachers’ and paraeducators’ use of evidence-based practices in public elementary schools. Frontiers in Psychiatry, 13, 961219. https://doi.org/10.3389/fpsyt.2022.961219.

Educators in public schools are required to serve students in their least restrictive environment (LRE). Under the Individuals with Disabilities Education Act, LRE requires that children with disabilities be educated with their non-disabled peers to the maximum extent possible, typically in general education classrooms and often with aids and support services. Evidence-based practices (EBPs) are practices that research studies, such as randomized clinical trials, have demonstrated to be efficacious in supporting autistic children, and there are strong efforts to increase the use of EBPs in schools. While there are many EBPs for autistic youth, educators’ knowledge and use of EBPs are unclear. In this study, eighty-six educators completed a survey about their familiarity with, training in, and use of EBPs for autistic students in inclusive classrooms. Across roles, educators most often reported familiarity (98.8%), use (97.7%), and training (83.7%) in reinforcement. They reported the least familiarity with behavioral momentum, defined as the organization of behavior expectations in a sequence designed to increase persistence and the occurrence of low-probability behaviors (29.1%); the least training in video modeling, defined as a video-recorded demonstration of a targeted behavior that assists in learning a desired behavior or skill, and in peer-mediated intervention, in which peers directly promote autistic children’s social interactions and/or other individual learning goals (18.6%); and the least use of video modeling (14.0%). Follow-up interviews with 80 educators highlighted mixed understanding of EBP definitions and use. The results suggest that more complex, multicomponent EBPs are the least familiar and least used by educators, especially general educators relative to their special education and paraeducator colleagues. To increase familiarity with and use of autism-specific EBPs in general education classrooms, training can be incorporated within pre-service teacher preparation programs. Academic-community partnerships can also be leveraged for educator training efforts and provide an avenue to increase mutual EBP knowledge across education and research contexts.

These findings are informative for educators, education policymakers, and reformers. One of my big takeaways was learning that EBPs thought to be supportive and aimed at addressing core challenges for autistic students (e.g., social challenges), such as peer-mediated intervention, are among the least known and least used. This exploratory study can lay the groundwork for conversations on how to increase EBP knowledge and use.

Alyssa Hernandez is a first-year clinical psychology doctoral student in the Department of Psychology in the School of Arts & Sciences working with Dr. David Mandell at the Center for Mental Health in the Perelman School of Medicine.  Alyssa was formerly working with Dr. Jill Locke at the University of Washington’s School Mental Health Assessment, Research, and Training (SMART) Center.  The SMART Center’s mission is to promote high-quality, culturally-responsive programs, practices, and policies to meet the full range of social, emotional, and behavioral (SEB) needs of students in both general and special education contexts.  This research was conducted as part of Dr. Locke’s research program at the SMART Center.

Welcome to Science Shorts!

The PSPDG Outreach Team is kicking off a new initiative, Science Shorts, to help make Penn science accessible to broader audiences! For this project, we’re inviting scientists at the University of Pennsylvania to write jargon-free summaries of their recent work and to explain the significance and potential impacts of their research. We hope that these summaries will help you learn more about what’s going on inside the labs here at Penn!