Scientist Study

Your source for the latest research news

Tuesday, 12 January 2021

Cancer cells hibernate like bears to evade harsh chemotherapy


Tapping into an ancient evolutionary survival mechanism, cancer cells enter into a sluggish, slow-dividing state to survive the harsh environment created by chemotherapy or other targeted agents.

In research published January 7 in Cell, Princess Margaret Cancer Centre scientist Dr. Catherine O'Brien and team discovered that when under threat, all cancer cells – rather than just a subset – have the ability to transition into this protective state, where the cells "rest" until the threat, or chemotherapy, is removed.


It is the first study to identify that cancer cells hijack an evolutionarily conserved program to survive chemotherapy. Furthermore, the researchers show that novel therapeutic strategies aimed at specifically targeting cancer cells in this slow-dividing state can prevent cancer regrowth.

"The tumour is acting like a whole organism, able to go into a slow-dividing state, conserving energy to help it survive," says Dr. O'Brien, who is also an Associate Professor in the Department of Surgery at the University of Toronto (U of T).

"There are examples of animals entering into a reversible and slow-dividing state to withstand harsh environments.

"It appears that cancer cells have craftily co-opted this same state for their survival benefit."

Dr. Aaron Schimmer, Director of the Research Institute and Senior Scientist at the Princess Margaret, notes that this research shows that cancer cells hibernate, like "bears in winter."

He adds: "We never actually knew that cancer cells were like hibernating bears. This study also tells us how to target these sleeping bears so they don't hibernate and wake up to come back later, unexpectedly.

"I think this will turn out to be an important cause of drug resistance, and will explain something we did not have a good understanding of previously."

The researchers treated human colorectal cancer cells with chemotherapy in a petri dish in the laboratory.

This induced a slow-dividing state across all the cancer cells in which they stopped expanding, requiring little nutrition to survive. As long as the chemotherapy remained in the dish, the cancer cells remained in this state.

In order to enter this low-energy state, the cancer cells have co-opted an embryonic survival program used by more than 100 species of mammals to keep their embryos safe inside their bodies in times of extreme environmental conditions, such as high or low temperatures or lack of food.

In this state, there is minimal cell division, greatly reduced metabolism, and embryo development is put on hold. When the environment improves, the embryo is able to continue normal development, with no adverse effects on the pregnancy.

Dr. O'Brien, who is a surgeon specializing in gastrointestinal cancer, explains that cancer cells under attack by the harsh chemotherapy environment are able to adopt the embryonic survival strategy.

"The cancer cells are able to hijack this evolutionarily conserved survival strategy, even as it seems to be lost to humans," she says, adding that all of the cancer cells enter this state in a co-ordinated manner, in order to survive.

Remembering a talk three years ago on cellular mechanisms driving this survival strategy in mouse embryos, Dr. O'Brien had an "Aha!" insight.

"Something clicked for me when I heard that talk," she said. "Could the cancer cells be hijacking this survival mechanism to survive chemotherapy?"

Dr. O'Brien then contacted Dr. Ramalho-Santos, the Mount Sinai Hospital researcher in Toronto who had given the original talk at the Princess Margaret.

She compared the gene expression profile of the cancer cells in the chemotherapy-induced, slow-dividing state to that of the paused mouse embryos in Dr. Ramalho-Santos' lab, and found that they were strikingly similar.

Similar to embryos, cancer cells in the slow-dividing state require activation of the cellular process called autophagy, meaning "self-devouring." This is a process in which the cell "devours" or destroys its own proteins or other cellular components to survive in the absence of other nutrients.

Dr. O'Brien tested a small molecule that inhibits autophagy, and found that the cancer cells did not survive. The chemotherapy killed the cancer cells without this protective mechanism.

"This gives us a unique therapeutic opportunity," says Dr. O'Brien. "We need to target cancer cells while they are in this slow-cycling, vulnerable state before they acquire the genetic mutations that drive drug-resistance.

"It is a new way to think about resistance to chemotherapy and how to overcome it."


Reference:

Sumaiyah K. Rehman et al. Colorectal Cancer Cells Enter a Diapause-like DTP State to Survive Chemotherapy. Cell, 2021. DOI: 10.1016/j.cell.2020.11.018

Saturday, 9 January 2021

How 'Iron Man' bacteria could help protect the environment


When Michigan State University's Gemma Reguera first proposed her new research project to the National Science Foundation, one grant reviewer responded that the idea was not "environmentally relevant."

As other reviewers and the program manager didn't share this sentiment, NSF funded the proposal. And, now, Reguera's team has shown that microbes are capable of an incredible feat that could help reclaim a valuable natural resource and soak up toxic pollutants.


"The lesson is that we really need to think outside the box, especially in biology. We just know the tip of the iceberg. Microbes have been on earth for billions of years, and to think that they can't do something precludes us from so many ideas and applications," said Reguera, a professor in the Department of Microbiology and Molecular Genetics.

Reguera's team works with bacteria found in soil and sediment known as Geobacter. In its latest project, the team investigated what happened to the bacteria when they encounter cobalt.

Cobalt is a valuable but increasingly scarce metal used in batteries for electric vehicles and alloys for spacecraft. It's also highly toxic to living things, including humans and bacteria.

"It kills a lot of microbes," Reguera said. "Cobalt penetrates their cells and wreaks havoc."

But the team suspected Geobacter might be able to escape that fate. These microbes are a hardy bunch. They can block uranium contaminants from getting into groundwater, and they can power themselves by pulling energy from minerals containing iron oxide. "They respire rust," Reguera said.

Scientists know little about how microbes interact with cobalt in the environment, but many researchers—including one grant reviewer—believed that the toxic metal would be too much for the microbes.

But Reguera's team challenged that thinking and found Geobacter to be effective cobalt "miners," extracting the metal from rust without letting it penetrate their cells and kill them. Rather, the bacteria essentially coat themselves with the metal.

"They form cobalt nanoparticles on their surface. They metallize themselves and it's like a shield that protects them," Reguera said. "It's like Iron Man when he puts on the suit."

The team published its discovery in the journal Frontiers in Microbiology, with the research article first appearing online in late November 2020. The Spartan team included Kazem Kashefi, an assistant professor in the Department of Microbiology and Molecular Genetics, and graduate students Hunter Dulay and Marcela Tabares, who are "two amazing and relatively junior investigators," Reguera said.

She sees this discovery as a proof-of-concept that opens the door to a number of exciting possibilities. For example, Geobacter could form the basis of new biotechnology built to reclaim and recycle cobalt from lithium-ion batteries, reducing the nation's dependence on foreign cobalt mines.

It also invites researchers to study Geobacter as a means to soak up other toxic metals that were previously believed to be death sentences for the bacteria. Reguera is particularly interested in seeing if Geobacter could help clean up cadmium, a metal that's found in industrial pollution that disproportionately affects America's most disadvantaged communities.

"This is a reminder to be creative and not limited in the possibilities. Research is the freedom to explore, to search and search and search," Reguera said. "We have textbook opinions about what microbes can and should do, but life is so diverse and colorful. There are other processes out there waiting to be discovered."


Reference:

Hunter Dulay et al. Cobalt Resistance via Detoxification and Mineralization in the Iron-Reducing Bacterium Geobacter sulfurreducens, Frontiers in Microbiology (2020). DOI: 10.3389/fmicb.2020.600463

Saturday, 2 January 2021

Primordial black holes and the search for dark matter from the multiverse


The Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) is home to many interdisciplinary projects which benefit from the synergy of a wide range of expertise available at the institute. One such project is the study of black holes that could have formed in the early universe, before stars and galaxies were born.


Such primordial black holes (PBHs) could account for all or part of dark matter, be responsible for some of the observed gravitational-wave signals, and seed supermassive black holes found in the center of our Galaxy and other galaxies. They could also play a role in the synthesis of heavy elements when they collide with neutron stars and destroy them, releasing neutron-rich material. In particular, there is an exciting possibility that the mysterious dark matter, which accounts for most of the matter in the universe, is composed of primordial black holes. The 2020 Nobel Prize in physics was awarded to a theorist, Roger Penrose, and two astronomers, Reinhard Genzel and Andrea Ghez, for their discoveries that confirmed the existence of black holes. Since black holes are known to exist in nature, they make a very appealing candidate for dark matter.

The recent progress in fundamental theory, astrophysics, and astronomical observations in search of PBHs has been made by an international team of particle physicists, cosmologists and astronomers, including Kavli IPMU members Alexander Kusenko, Misao Sasaki, Sunao Sugiyama, Masahiro Takada and Volodymyr Takhistov.

To learn more about primordial black holes, the research team looked at the early universe for clues. The early universe was so dense that any positive density fluctuation of more than 50 percent would create a black hole. However, cosmological perturbations that seeded galaxies are known to be much smaller. Nevertheless, a number of processes in the early universe could have created the right conditions for the black holes to form.
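For a rough sense of the masses involved, here is a back-of-envelope sketch (my own illustration, not a calculation from the paper), using the standard radiation-era estimate that a PBH formed from a collapsing fluctuation inherits roughly the mass inside the cosmological horizon at that moment, M ~ c^3 t / G:

```python
# Back-of-envelope sketch (not from the paper): a primordial black hole's
# mass is set roughly by the mass inside the cosmological horizon at the
# time it forms, M_H ~ c^3 * t / G.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
M_MOON = 7.35e22    # lunar mass, kg

def horizon_mass(t_seconds: float) -> float:
    """Approximate horizon mass (kg) at cosmic time t after the Big Bang."""
    return c**3 * t_seconds / G

# A Moon-mass PBH (the candidate mass scale mentioned further below)
# would have formed extremely early:
t_form = M_MOON * G / c**3
print(f"Moon-mass horizon time: {t_form:.1e} s")                 # ~1.8e-13 s
print(f"Horizon mass at t = 1 s: {horizon_mass(1.0)/M_SUN:.1e} solar masses")  # ~2e5
```

On this estimate, lighter PBHs form earlier, which is why their properties probe the universe's very first instants.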

One exciting possibility is that primordial black holes could form from the "baby universes" created during inflation, a period of rapid expansion that is believed to be responsible for seeding the structures we observe today, such as galaxies and clusters of galaxies. During inflation, baby universes can branch off of our universe. A small baby (or "daughter") universe would eventually collapse, but the large amount of energy released in the small volume causes a black hole to form.

An even more peculiar fate awaits a bigger baby universe. If it is bigger than some critical size, Einstein's theory of gravity allows the baby universe to exist in a state that appears different to an observer on the inside and the outside. An internal observer sees it as an expanding universe, while an outside observer (such as us) sees it as a black hole. In either case, the big and the small baby universes are seen by us as primordial black holes, which conceal the underlying structure of multiple universes behind their "event horizons." The event horizon is a boundary inside which everything, even light, is trapped and cannot escape the black hole.

In their paper, the team described a novel scenario for PBH formation and showed that the black holes from the "multiverse" scenario can be found using the Hyper Suprime-Cam (HSC) of the 8.2m Subaru Telescope, a gigantic digital camera, in whose management Kavli IPMU has played a crucial role, near the 4,200-meter summit of Mauna Kea in Hawaii. Their work is an exciting extension of the HSC search for PBHs that Masahiro Takada, a Principal Investigator at the Kavli IPMU, and his team are pursuing. The HSC team has recently reported leading constraints on the existence of PBHs in Niikura, Takada et al. (Nature Astronomy 3, 524-534 (2019)).

Why was the HSC indispensable in this research? The HSC has a unique capability to image the entire Andromeda galaxy every few minutes. If a black hole passes through the line of sight to one of the stars, the black hole's gravity bends the light rays and makes the star appear brighter than before for a short period of time. The duration of the star's brightening tells the astronomers the mass of the black hole. With HSC observations, one can simultaneously observe one hundred million stars, casting a wide net for primordial black holes that may be crossing one of the lines of sight.
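For a sense of scale, here is a hedged sketch of the standard point-lens Einstein-crossing time, the quantity that links event duration to lens mass; the lens distance and transverse velocity below are illustrative assumptions, not parameters from the HSC analysis:

```python
import math

# Hedged sketch: a microlensing event lasts roughly as long as the lens
# takes to cross its own Einstein radius. Distances and velocity are
# illustrative assumptions, not values from the HSC analysis.
G = 6.674e-11
c = 2.998e8
KPC = 3.086e19          # meters per kiloparsec
M_MOON = 7.35e22        # kg, the candidate mass scale quoted below

D_S = 770 * KPC         # distance to a source star in Andromeda
D_L = 385 * KPC         # assume the lens sits halfway along the line of sight
D_LS = D_S - D_L
v_rel = 200e3           # assumed transverse velocity of the lens, m/s

def einstein_crossing_time(mass_kg: float) -> float:
    """Einstein-radius crossing time (seconds) for a point lens."""
    r_E = math.sqrt(4 * G * mass_kg / c**2 * D_L * D_LS / D_S)
    return r_E / v_rel

print(f"~{einstein_crossing_time(M_MOON)/3600:.1f} hours")
# Of order an hour for a Moon-mass lens -- long enough to resolve with
# HSC's few-minute imaging cadence within a single night.
```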

The first HSC observations have already reported a very intriguing candidate event consistent with a PBH from the "multiverse," with a black hole mass comparable to the mass of the Moon. Encouraged by this first sign, and guided by the new theoretical understanding, the team is conducting a new round of observations to extend the search and to provide a definitive test of whether PBHs from the multiverse scenario can account for all dark matter.


Reference:

Alexander Kusenko, Misao Sasaki, Sunao Sugiyama, Masahiro Takada, Volodymyr Takhistov, Edoardo Vitagliano. Exploring Primordial Black Holes from the Multiverse with Optical Telescopes. Physical Review Letters, 2020; 125 (18) DOI: 10.1103/PhysRevLett.125.181304

Sunday, 27 December 2020

Korean artificial sun sets new world record with 20-second operation at 100 million degrees


The Korea Superconducting Tokamak Advanced Research (KSTAR) device, a superconducting fusion device also known as the Korean artificial sun, set a new world record by maintaining high-temperature plasma for 20 seconds with an ion temperature above 100 million degrees (Celsius).


On November 24 (Tuesday), the KSTAR Research Center at the Korea Institute of Fusion Energy (KFE) announced that, in joint research with Seoul National University (SNU) and Columbia University in the United States, it had succeeded in continuous operation of plasma for 20 seconds with an ion temperature higher than 100 million degrees, one of the core conditions of nuclear fusion, in the 2020 KSTAR Plasma Campaign.

This more than doubles the 8-second plasma operation time achieved during the 2019 KSTAR Plasma Campaign. In its 2018 experiment, KSTAR reached a plasma ion temperature of 100 million degrees for the first time (retention time: about 1.5 seconds).

To re-create fusion reactions that occur in the sun on Earth, hydrogen isotopes must be placed inside a fusion device like KSTAR to create a plasma state where ions and electrons are separated, and ions must be heated and maintained at high temperatures.
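To put that temperature in the units plasma physicists use, here is a short illustrative conversion (my own, not from the KSTAR announcement):

```python
import math

# Illustration (not from the KSTAR announcement): what "100 million
# degrees" means in plasma-physics units.
K_B = 1.381e-23          # Boltzmann constant, J/K
EV = 1.602e-19           # joules per electronvolt
M_DEUTERON = 3.344e-27   # deuteron mass, kg

T = 1.0e8                                        # ion temperature, kelvin
kT_keV = K_B * T / EV / 1e3                      # thermal energy in keV
v_thermal = math.sqrt(2 * K_B * T / M_DEUTERON)  # typical deuteron speed

print(f"kT ~ {kT_keV:.1f} keV")                       # ~8.6 keV
print(f"thermal speed ~ {v_thermal/1e3:.0f} km/s")    # ~900 km/s
```

At roughly 8.6 keV, the ions move fast enough for a useful fraction of collisions to overcome the electrostatic repulsion between nuclei, which is why this temperature is treated as a core condition for fusion.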

Other fusion devices have briefly managed plasma at temperatures of 100 million degrees or higher, but none of them maintained the operation for 10 seconds or longer. That is the operational limit of normal-conducting devices, and it was difficult to maintain a stable plasma state in the fusion device at such high temperatures for a long time.

In its 2020 experiment, KSTAR improved the performance of the Internal Transport Barrier (ITB) mode, one of the next-generation plasma operation modes developed last year, and succeeded in maintaining the plasma state for a long period of time, overcoming the existing limits of ultra-high-temperature plasma operation.

Director Si-Woo Yoon of the KSTAR Research Center at the KFE explained, "The technologies required for long operations of 100 million-degree plasma are the key to the realization of fusion energy, and the KSTAR's success in maintaining the high-temperature plasma for 20 seconds will be an important turning point in the race for securing the technologies for the long high-performance plasma operation, a critical component of a commercial nuclear fusion reactor in the future."


"The success of the KSTAR experiment in the long, high-temperature operation by overcoming some drawbacks of the ITB modes brings us a step closer to the development of technologies for realization of nuclear fusion energy," added Yong-Su Na, professor at the department of Nuclear Engineering, SNU, who has been jointly conducting the research on the KSTAR plasma operation.

Dr. Young-Seok Park of Columbia University who contributed to the creation of the high temperature plasma said: "We are honored to be involved in such an important achievement made in KSTAR. The 100 million-degree ion temperature achieved by enabling efficient core plasma heating for such a long duration demonstrated the unique capability of the superconducting KSTAR device, and will be acknowledged as a compelling basis for high performance, steady state fusion plasmas."

The KSTAR team began operating the device last August and plans to continue its plasma generation experiment until December 10, conducting a total of 110 plasma experiments, including high-performance plasma operation and plasma disruption mitigation experiments, which are joint research experiments with domestic and overseas research organizations.

In addition to the success in high-temperature plasma operation, the KSTAR Research Center is conducting experiments on a variety of topics, including ITER research, designed to solve complex problems in fusion research during the remainder of the experiment period.

KSTAR will share its key 2020 experiment outcomes, including this success, with fusion researchers around the world at the IAEA Fusion Energy Conference to be held in May.

The final goal of the KSTAR is to succeed in a continuous operation of 300 seconds with an ion temperature higher than 100 million degrees by 2025.

KFE President Suk Jae Yoo stated, "I am so glad to announce the new launch of the KFE as an independent research organization of Korea. The KFE will continue its tradition of undertaking challenging research to achieve the goal of mankind: the realization of nuclear fusion energy."

On November 20, 2020, the KFE, formerly the National Fusion Research Institute, an affiliated organization of the Korea Basic Science Institute, was re-launched as an independent research organization.


Saturday, 19 December 2020

Plants can be larks or night owls just like us


Plants have the same variation in body clocks as that found in humans, according to new research that explores the genes governing circadian rhythms in plants.

The research shows a single letter change in their DNA code can potentially decide whether a plant is a lark or a night owl. The findings may help farmers and crop breeders to select plants with clocks that are best suited to their location, helping to boost yield and even the ability to withstand climate change.


The circadian clock is the molecular metronome which guides organisms through day and night—cockadoodledooing the arrival of morning and drawing the curtains closed at night. In plants, it regulates a wide range of processes, from priming photosynthesis at dawn through to regulating flowering time.

These rhythmic patterns can vary depending on geography, latitude, climate and seasons—with plant clocks having to adapt to cope best with the local conditions.

Researchers at the Earlham Institute and John Innes Centre in Norwich wanted to better understand how much circadian variation exists naturally, with the ultimate goal of breeding crops that are more resilient to local changes in the environment—a pressing threat with climate change.

To investigate the genetic basis of these local differences, the team examined varying circadian rhythms in Swedish Arabidopsis plants to identify and validate genes linked to the changing tick of the clock.

Dr. Hannah Rees, a postdoctoral researcher at the Earlham Institute and author of the paper, said: "A plant's overall health is heavily influenced by how closely its circadian clock is synchronised to the length of each day and the passing of seasons. An accurate body clock can give it an edge over competitors, predators and pathogens.

"We were interested to see how plant circadian clocks would be affected in Sweden; a country that experiences extreme variations in daylight hours and climate. Understanding the genetics behind body clock variation and adaptation could help us breed more climate-resilient crops in other regions."

The team studied the genes in 191 different varieties of Arabidopsis obtained from across the whole of Sweden. They were looking for tiny differences in genes between these plants which might explain the differences in circadian function.

Their analysis revealed that a single DNA base-pair change in a specific gene—COR28—was more likely to be found in plants that flowered late and had a longer period length. COR28 is a known coordinator of flowering time, freezing tolerance and the circadian clock; all of which may influence local adaptation in Sweden.
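In spirit, the test behind such a finding is simple: split the accessions by which base they carry at the locus and ask whether the clock phenotype differs between the two groups. Here is a minimal sketch with simulated data (the real study used 191 Swedish accessions and more sophisticated statistics that account for population structure):

```python
import numpy as np
from scipy import stats

# Minimal sketch of a single-locus association test like the one described
# above: does circadian period length differ between plants carrying the
# reference base and those carrying the variant? All numbers below are
# simulated for illustration.
rng = np.random.default_rng(0)
period_ref = rng.normal(24.0, 0.5, size=120)      # hours, reference allele
period_variant = rng.normal(24.8, 0.5, size=71)   # hours, variant allele

t_stat, p_value = stats.ttest_ind(period_ref, period_variant)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
# A genome-wide association study repeats such a test at every variant site,
# correcting for multiple testing and for the plants' shared ancestry.
```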

"It's amazing that just one base-pair change within the sequence of a single gene can influence how quickly the clock ticks," explained Dr. Rees.

The scientists also used a pioneering delayed fluorescence imaging method to screen plants with differently tuned circadian clocks. They showed there was over 10 hours' difference between the clocks of the earliest risers and the latest-phased plants, akin to the plants working opposite shift patterns. Both geography and the genetic ancestry of the plant appeared to have an influence.

"Arabidopsis thaliana is a model plant system," said Dr. Rees. "It was the first plant to have its genome sequenced and it's been extensively studied in circadian biology, but this is the first time anyone has performed this type of association study to find the genes responsible for different clock types.

"Our findings highlight some interesting genes that might present targets for crop breeders, and provide a platform for future research. Our delayed fluorescence imaging system can be used on any green photosynthetic material, making it applicable to a wide range of plants. The next step will be to apply these findings to key agricultural crops, including brassicas and wheat."

The results of the study have been published in the journal Plant, Cell & Environment.


Reference:

Hannah Rees et al, Naturally occurring circadian rhythm variation associated with clock gene loci in Swedish Arabidopsis accessions, Plant, Cell & Environment (2020). DOI: 10.1111/pce.13941

Land ecosystems are becoming less efficient at absorbing CO2


Land ecosystems currently play a key role in mitigating climate change. The more carbon dioxide (CO2) plants and trees absorb during photosynthesis, the process they use to make food, the less CO2 remains trapped in the atmosphere where it can cause temperatures to rise. But scientists have identified an unsettling trend - as levels of CO2 in the atmosphere increase, 86 percent of land ecosystems globally are becoming progressively less efficient at absorbing it.


Because CO2 is a main 'ingredient' that plants need to grow, elevated concentrations of it cause an increase in photosynthesis, and consequently, plant growth - a phenomenon aptly referred to as the CO2 fertilization effect, or CFE. CFE is considered a key factor in the response of vegetation to rising atmospheric CO2 as well as an important mechanism for removing this potent greenhouse gas from our atmosphere - but that may be changing.

For a new study published Dec. 10 in Science, researchers analyzed multiple field, satellite-derived and model-based datasets to better understand what effect increasing levels of CO2 may be having on CFE. Their findings have important implications for the role plants can be expected to play in offsetting climate change in the years to come.

"In this study, by analyzing the best available long-term data from remote sensing and state-of-the-art land-surface models, we have found that since 1982, the global average CFE has decreased steadily from 21 percent to 12 percent per 100 ppm of CO2 in the atmosphere," said Ben Poulter, study co-author and scientist at NASA's Goddard Space Flight Center. "In other words, terrestrial ecosystems are becoming less reliable as a temporary climate change mitigator."

What's Causing It?

Without this feedback between photosynthesis and elevated atmospheric CO2, Poulter said we would have seen climate change occurring at a much more rapid rate. But scientists have been concerned about how long the CO2 Fertilization Effect could be sustained before other limitations on plant growth kick in.

For instance, while an abundance of CO2 won't limit growth, a lack of water, nutrients, or sunlight - the other necessary components of photosynthesis -- will. To determine why the CFE has been decreasing, the study team took the availability of these other elements into account.

"According to our data, what appears to be happening is that there's both a moisture limitation as well as a nutrient limitation coming into play," Poulter said. "In the tropics, there's often just not enough nitrogen or phosphorus, to sustain photosynthesis, and in the high-latitude temperate and boreal regions, soil moisture is now more limiting than air temperature because of recent warming."

In effect, climate change is weakening plants' ability to mitigate further climate change over large areas of the planet.

Next Steps

The international science team found that when remote-sensing observations were taken into account - including vegetation index data from NASA's Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments - the decline in CFE is more substantial than current land-surface models have shown. Poulter says this is because modelers have struggled to account for nutrient feedbacks and soil moisture limitations - due, in part, to a lack of global observations of them.

"By combining decades of remote sensing data like we have done here, we're able to see these limitations on plant growth. As such, the study shows a clear way forward for model development, especially with new remote sensing observations of vegetation traits expected in coming years," he said. "These observations will help advance models to incorporate ecosystem processes, climate and CO2 feedbacks more realistically."

The results of the study also highlight the importance of the role of ecosystems in the global carbon cycle. According to Poulter, going forward, the decreasing carbon-uptake efficiency of land ecosystems means the amount of CO2 left in the atmosphere by fossil fuel burning and deforestation may start to increase, shrinking the remaining carbon budget.

"What this means is that to avoid 1.5 or 2°C warming and the associated climate impacts, we need to adjust the remaining carbon budget to account for the weakening of the plant CO2 Fertilization Effect," he said. "And because of this weakening, land ecosystems will not be as reliable for climate mitigation in the coming decades."


Reference:

Songhan Wang, Yongguang Zhang, Weimin Ju, Jing M. Chen, Philippe Ciais, Alessandro Cescatti, Jordi Sardans, Ivan A. Janssens, Mousong Wu, Joseph A. Berry, Elliott Campbell, Marcos Fernández-Martínez, Ramdane Alkama, Stephen Sitch, Pierre Friedlingstein, William K. Smith, Wenping Yuan, Wei He, Danica Lombardozzi, Markus Kautz, Dan Zhu, Sebastian Lienert, Etsushi Kato, Benjamin Poulter, Tanja G. M. Sanders, Inken Krüger, Rong Wang, Ning Zeng, Hanqin Tian, Nicolas Vuichard, Atul K. Jain, Andy Wiltshire, Vanessa Haverd, Daniel S. Goll, Josep Peñuelas. Recent global decline of CO2 fertilization effects on vegetation photosynthesis. Science, 2020; 370 (6522): 1295 DOI: 10.1126/science.abb7772

Friday, 18 December 2020

A new hypersonic engine to travel anywhere in the world in at most two hours


Although it has been explored for several decades, supersonic flight has never risen to the same level as other propulsion technologies. The main reason lies in the technical difficulties it poses and the physico-mechanical problems it can generate, as in the case of the infamous Concorde. However, a team of Chinese engineers recently announced the development of a new type of supersonic engine, the Sodramjet, which overcomes the classic obstacles posed by this type of engine. Although the first experimental tests were conclusive, much work still awaits the researchers before any potential commercialization.


Since the retirement of the Concorde, commercial supersonic flight has been put on hold in Western countries. While innovation in faster engines continued, some serious issues with high speed engines prevented their development. Therefore, instead of pushing the speed limits, engineers focused on increasing fuel efficiency, reducing the carbon footprint of airliners, and increasing passenger capacity. However, a Chinese team has never taken their eyes off supersonic flight, and recently they took a big step forward.

According to the team, their new hypersonic jet engine can reach speeds of up to Mach 16 - or 19,000 kilometers per hour - and was stable when tested in a wind tunnel. Attach it to a plane and you can be anywhere in the world in two hours, according to the authors. The results were published in the Chinese Journal of Aeronautics.

The technical obstacles posed by the scramjet

The team says their engine, called the Sodramjet, represents a significant advance in hypersonic propulsion. “Research over more than 70 years on hypersonic propulsion indicates that a revolutionary concept is really needed for the development of hypersonic engines. The concept of the Sodramjet engine may be a very promising choice and the work presented here strongly supports this idea,” the authors write in the article.

The Sodramjet builds on existing technology known as the ramjet, which has been in development since Hungarian inventor Albert Fonó used a crude ramjet to increase the range of artillery. While normal jet engines use a compressor section of fan blades to compress intake air before sending it for combustion, ramjets rely on the forward motion of the plane to provide a fast-moving, compressed airflow.

A breakthrough was then made on ramjets to produce a supersonic combustion model (the scramjet), which keeps air flowing through the engine at supersonic speeds, unlike a ramjet, which slows the air before combustion. But scramjets suffer from a fatal flaw: supersonic air creates shock waves that can block the burning of fuel.
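Standard isentropic-flow relations make the trade-off concrete: ram compression comes free with flight speed, but slowing hypersonic air to rest would heat it catastrophically. The sketch below is a textbook illustration of those relations, not an analysis from the Sodramjet paper:

```python
# Illustration using standard isentropic-flow relations (not from the
# Sodramjet paper): how much "free" compression and heating ram intake
# alone delivers as flight Mach number rises.
GAMMA = 1.4  # ratio of specific heats for air

def stagnation_ratios(mach: float) -> tuple[float, float]:
    """(T0/T, p0/p) for air isentropically brought to rest from a given Mach."""
    t_ratio = 1 + (GAMMA - 1) / 2 * mach**2
    p_ratio = t_ratio ** (GAMMA / (GAMMA - 1))
    return t_ratio, p_ratio

for m in (2, 6, 16):
    t_r, p_r = stagnation_ratios(m)
    print(f"Mach {m:>2}: T0/T = {t_r:6.1f}, p0/p = {p_r:.3g}")
# At Mach 16 the temperature ratio alone exceeds 50 -- fully decelerating
# the air would melt any engine, which is why scramjet-type engines must
# burn fuel in air that is still supersonic.
```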

Sodramjet: a supersonic engine with promising results

Instead, Zonglin Jiang and his colleagues at the Chinese Academy of Sciences in Beijing turned to the work of engineer Richard Morrison in 1980. He believed that the shock wave produced by the supersonic air could contain enough energy to continuously re-ignite the engine and maintain speeds of Mach 15 or more. Although his ideas never saw commercial application, due to a lack of funding and the choice to pursue other ideas, Jiang put the idea into practice in the Sodramjet, and the results speak for themselves.

The Sodramjet was stable at hypersonic speeds and burned its hydrogen more efficiently as speeds increased. The results show that the intrinsic shock waves in a hypersonic engine can sustain internal combustion, in line with Morrison's ideas from nearly 40 years ago.

Impressive as it is, the Sodramjet engine is still a long way from being used in a commercial airliner, and there are still issues to be resolved before the engine is fully functional. The shock waves that reignite combustion can sustain the thrust, but in doing so they produce surges in the engine that affect its stability. Additionally, speeds of this nature have been reached in wind tunnels using scramjets before but have not been verified on airplanes, so the engine will require much more testing before commercial use.


Reference:

Zonglin Jiang et al. The criteria for hypersonic airbreathing propulsion and its experimental verification. Chinese Journal of Aeronautics (2020).

Thursday, 17 December 2020

Can We Train AI To Adapt Like Human Brains?


Getting computers to "think" like humans is the holy grail of artificial intelligence, but human brains turn out to be tough acts to follow. The human brain is a master of applying previously learned knowledge to new situations and constantly refining what's been learned. This ability to be adaptive has been hard to replicate in machines.


Now, Salk researchers have used a computational model of brain activity to simulate this process more accurately than ever before. The new model mimics how the brain's prefrontal cortex uses a phenomenon known as "gating" to control the flow of information between different areas of neurons. It not only sheds light on the human brain, but could also inform the design of new artificial intelligence programs.

"If we can scale this model up to be used in more complex artificial intelligence systems, it might allow these systems to learn things faster or find new solutions to problems," says Terrence Sejnowski, head of Salk's Computational Neurobiology Laboratory and senior author of the new work, published on November 24, 2020, in Proceedings of the National Academy of Sciences.

The brains of humans and other mammals are known for their ability to quickly process stimuli--sights and sounds, for instance--and integrate any new information into things the brain already knows. This flexibility to apply knowledge to new situations and continuously learn over a lifetime has long been a goal of researchers designing machine learning programs or artificial brains. Historically, when a machine is taught to do one task, it's difficult for the machine to learn how to adapt that knowledge to a similar task; instead each related process has to be taught individually.

In the current study, Sejnowski's group designed a new computational modeling framework to replicate how neurons in the prefrontal cortex--the brain area responsible for decision-making and working memory--behave during a cognitive test known as the Wisconsin Card Sorting Test. In this task, participants have to sort cards by color, symbol or number--and constantly adapt their answers as the card-sorting rule changes. This test is used clinically to diagnose dementia and psychiatric illnesses but is also used by artificial intelligence researchers to gauge how well their computational models of the brain can replicate human behavior.

Previous models of the prefrontal cortex performed poorly on this task. The Sejnowski team's framework, however, integrated how neurons control the flow of information throughout the entire prefrontal cortex via gating, delegating different pieces of information to different subregions of the network. Gating was thought to be important at a small scale--in controlling the flow of information within small clusters of similar cells--but the idea had never been integrated into models spanning the whole network.
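To see the idea in miniature, here is a toy sketch of network-level gating (a generic illustration of the concept, not the authors' PNAS model): the same card features are routed to different subnetworks, and switching the gate switches the sorting rule without retraining anything:

```python
import numpy as np

# Toy sketch of network-level gating (illustration only, not the authors'
# model): a gate routes the same card features to different subregions
# ("experts"), and changing the gate changes the sorting rule.
n_features = 12            # card encoding: 4 colors + 4 symbols + 4 numbers
n_experts, n_out = 3, 4    # one subregion per rule; 4 possible piles

# Each expert has "learned" one rule: attend only to its slice of features.
experts = np.zeros((n_experts, n_out, n_features))
for k in range(n_experts):                 # rule k reads features 4k..4k+3
    experts[k, :, 4 * k:4 * (k + 1)] = np.eye(n_out)

def sort_card(card: np.ndarray, gate: np.ndarray) -> int:
    """Pick a pile: the gate mixes expert outputs; argmax chooses the pile."""
    outputs = experts @ card               # shape (n_experts, n_out)
    return int(np.argmax(gate @ outputs))  # gate has shape (n_experts,)

card = np.zeros(n_features)
card[1] = card[4 + 2] = card[8 + 3] = 1.0  # color 1, symbol 2, number 3

for rule, name in enumerate(["color", "symbol", "number"]):
    gate = np.eye(n_experts)[rule]         # hard gate selecting one subregion
    print(f"sort by {name}: pile {sort_card(card, gate)}")
# Prints piles 1, 2, 3: changing only the gate re-purposes the same network,
# which is the kind of flexibility the card-sorting test probes.
```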

The new network not only performed as reliably as humans on the Wisconsin Card Sorting Task, but also mimicked the mistakes seen in some patients. When sections of the model were removed, the system showed the same errors seen in patients with prefrontal cortex damage, such as that caused by trauma or dementia.

"I think one of the most exciting parts of this is that, using this sort of modeling framework, we're getting a better idea of how the brain is organized," says Ben Tsuda, a Salk graduate student and first author of the new paper. "That has implications for both machine learning and gaining a better understanding of some of these diseases that affect the prefrontal cortex."

If researchers have a better understanding of how regions of the prefrontal cortex work together, he adds, that will help guide interventions to treat brain injury. It could suggest areas to target with deep brain stimulation, for instance.

"When you think about the ways in which the brain still surpasses state-of-the-art deep learning networks, one of those ways is versatility and generalizability across tasks with different rules," says study coauthor Kay Tye, a professor in Salk's Systems Neurobiology Laboratory and the Wylie Vale Chair. "In this new work, we show how gating of information can power our new and improved model of the prefrontal cortex."

The team next wants to scale up the network to perform more complex tasks than the card-sorting test and determine whether the network-wide gating gives the artificial prefrontal cortex a better working memory in all situations. If the new approach works under broad learning scenarios, they suspect that it will lead to improved artificial intelligence systems that can be more adaptable to new situations.


Reference: 

Tsuda B, Tye KM, Siegelmann HT, Sejnowski TJ. A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex. PNAS. 2020;117(47):29872-29882. DOI: 10.1073/pnas.2009591117