Scientists Study

Your source for the latest research news

Sunday, 9 August 2020

These Water Beetles Make Their Escape Out of a Frog's Butt After Being Swallowed


The water scavenger beetle Regimbartia attenuata isn't known for much.

The family of beetles it belongs to can be pests in fish hatcheries, and the species is well suited to the humid tropics.



But now, R. attenuata is giving this beetle clan a new claim to fame – thanks to the ability to quickly wiggle its way out of a frog butt after being eaten.

"Here, I report active escape of the aquatic beetle R. attenuata from the vents of five frog species via the digestive tract," Kobe University ecologist Shinji Sugiura writes in a new paper.

"Although adult beetles were easily eaten by frogs, 90 percent of swallowed beetles were excreted within six hours after being eaten and, surprisingly, were still alive."

In what has to be one of the weirdest experimental setups we've seen in a while, Sugiura took R. attenuata and black-spotted pond frogs (Pelophylax nigromaculatus), placed them in a lab, let the frogs eat the beetles, and then recorded how long it took the beetles to emerge from… the other end.

And look, they were speedy. When Sugiura put wax on the beetles' legs (thereby stopping them from moving), they took between 38 and 150 hours to be digested and eventually excreted. Those little guys did not survive the ordeal.

But when the beetles were eaten with all their movement faculties intact, the vast majority of them emerged unscathed in just a few hours, and one particularly prompt bug got out of there in just under 7 minutes.

The beetles that made it out went on to live for weeks afterwards, seemingly unfazed by their trip through the digestive system.



Most of the time, when a creature emerges, still alive, from the back end of a predator like this, it's a passive situation. Usually the creature in question has specific adaptations for these journeys, allowing it to survive extreme pH and a lack of oxygen for quite a while.

But this water beetle doesn't seem to survive if it sits idly and waits to emerge via the frog's digestion mechanisms. Instead, in what's being called the first documented "active prey escape", the little beetle powers through the frogs' oesophagus, stomach, and small and large intestine, until it reaches the cloaca, which the paper calls the "vent".

Once the beetle reaches that impasse, Sugiura thinks that the bug might have another trick up its tiny sleeve.

"R. attenuata cannot exit through the vent without inducing the frog to open it because sphincter muscle pressure keeps the vent closed," Sugiura writes in the paper.

"Individuals were always excreted head first from the frog vent, suggesting that R. attenuata stimulates the hind gut, urging the frog to defecate."

But not all water beetles have the same escape act as R. attenuata. Sugiura also experimented with other beetles and other types of frogs.

The four other frog species were not much of an issue for R. attenuata, with the large majority of beetles exiting unharmed the same way as in the black-spotted pond frog.

But when the pond frog was provided with a different beetle from the same family, Enochrus japonicus, the poor things did not fare as well. All went in one end, and more than 48 hours later all were excreted, each one very much dead.

By now we're sure you're thinking you have way more information about frog butt escapes than you ever wanted, but there are still many unanswered questions.

For example, what exactly is the beetle doing inside the frog to make this rear-end exit possible? And are there any other water beetles that can Houdini their way out of a butthole like this?

Only time will tell, but it's important work, and we promise to be on the case.

The research has been published in Current Biology.



Reference:

Active escape of prey from predator vent via the digestive tract, Shinji Sugiura, Current Biology (2020). DOI: https://doi.org/10.1016/j.cub.2020.06.026

Friday, 7 August 2020

Researchers discover how plants distinguish beneficial from harmful microbes


Legume plants know their friends from their enemies, and now we know how they do it at the molecular level. Plants recognize beneficial microbes and keep harmful ones out, which is important for healthy plant production and global food security. Scientists have now discovered how legumes use small, well-defined motifs in receptor proteins to read molecular signals produced by both pathogenic and symbiotic microbes. These remarkable findings have enabled the researchers to reprogram immune receptors into symbiotic receptors, a first milestone for engineering nitrogen-fixing symbiosis into cereal crops.



Legume plants fix atmospheric nitrogen with the help of symbiotic bacteria, called Rhizobia, which colonize their roots. Therefore, plants have to be able to precisely recognize their symbiont to avoid infection by pathogenic microbes. To this end, legumes use different LysM receptor proteins located on the outer cell surface of their roots. In the study published in Science, an international team of researchers led by Aarhus University show that pathogenic (chitin) or symbiotic signaling molecules (Nod factors) are recognized by small molecular motifs on the receptors that direct the signaling output towards either antimicrobial defense or symbiosis.

All land plants have LysM receptors that ensure detection of various microbial signals, but how a plant decides to mount a symbiotic or an immune response towards an incoming microbe is unknown. "We started by asking a basic and, maybe at start, naïve question: Can we identify the important elements by using very similar receptors, but with opposing function as background for a systematic analysis?" says Zoltán Bozsoki. "The first crystal structure of a Nod factor receptor was a breakthrough. It gave us a better understanding of these receptors and guided our efforts to engineer them in plants," adds Kira Gysel.

The study combines the structure-assisted dissection of defined regions in LysM receptors for biochemical experiments and in planta functional analysis. "To really understand these receptors, we needed to work closely together and combine structural biology and biochemistry with the systematic functional tests in plants," says Simon Boje Hansen. By using this approach, the researchers identified previously unknown motifs in the LysM1 domain of chitin and Nod factor receptors as determinants for immunity and symbiosis. "It turns out that there are only a very few, but important, residues that separate an immune receptor from a symbiotic one. We have now identified these and demonstrated for the first time that it is possible to reprogram LysM receptors by changing these residues," says Kasper Røjkjær Andersen.

The long-term goal is to transfer the unique nitrogen-fixing ability that legume plants have into cereal plants to limit the need for polluting commercial nitrogen fertilizers and to benefit and empower the poorest people on Earth. Simona Radutoiu concludes, "We now provide the conceptual understanding required for a stepwise and rational engineering of LysM receptors, which is an essential first step towards this ambitious goal."



More information: 

Zoltan Bozsoki et al. Ligand-recognizing motifs in plant LysM receptors are major determinants of specificity. Science, 7 Aug 2020: Vol. 369, Issue 6504, pp. 663-670. DOI: 10.1126/science.abb3377

Study suggests heat waves of the future could kill millions


A team of researchers at the National Bureau of Economic Research has found evidence suggesting that if greenhouse gas emissions are not curbed, future heat waves could kill millions of people across the globe. In their paper published on the NBER website, the group describes how they compared heat-related deaths in several countries during past heatwaves with projected future temperatures to learn more about possible deaths in the future.



As the researchers note, excessive heat is one of the deadliest types of extreme weather—in addition to killing people directly through heat stress or stroke, heat can kill people indirectly by pushing the body to work harder to keep cool, which can trigger heart attacks and other ailments. Most victims are older or have underlying conditions. Prior research has shown that heatwaves can be deadly, particularly in places where people do not have the resources to cope.

Heat waves have also been found to be much more extreme in hotter parts of the planet, such as in countries close to the equator. As just one example, parts of the Middle East have been experiencing temperatures as high as 125°F/51°C this summer. In this new effort, the researchers looked at heat-related mortality data for eight countries (and the European Union), representing different periods of time and different heat waves.

To make their assessments, they averaged the mortality data across those regions. They also obtained projections of how hot the planet is likely to get by the end of this century. They then used statistical projection models to estimate how many people would likely die from heat by that time.

The researchers found that future heatwaves could kill approximately 73 people per 100,000 by 2100 if greenhouse gas emissions continue at their current pace. They also found that the hottest parts of the planet could experience as many as 200 deaths per 100,000 by the end of the century. They further noted that most such deaths are likely to happen to those most at risk—poor, older people living in the hottest parts of the world. Without air-conditioning or a cool place to hide out during the hottest parts of the day, they will stand little chance against heat waves that will undoubtedly feature much higher temperatures than today's.
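To put that rate in context, a quick back-of-the-envelope calculation shows how "73 per 100,000" translates into millions of deaths worldwide. A minimal sketch in Python (the global population figure is our own illustrative assumption, not a number from the NBER paper):

# Back-of-the-envelope: converting a mortality rate per 100,000 into
# absolute deaths. The population figure is an illustrative assumption.
rate_per_100k = 73            # projected heat deaths per 100,000 people by 2100
population = 10_000_000_000   # assumed global population near 2100
deaths = rate_per_100k / 100_000 * population
print(f"{deaths:,.0f} deaths")  # -> 7,300,000 deaths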



More information: 

Valuing the Global Mortality Consequences of Climate Change Accounting for Adaptation Costs and Benefits, NBER Working Paper No. 27599, www.nber.org/papers/w27599

DNA from an ancient, unidentified ancestor was passed down to humans living today


A new analysis of ancient genomes suggests that different branches of the human family tree interbred multiple times, and that some humans carry DNA from an archaic, unknown ancestor. Melissa Hubisz and Amy Williams of Cornell University and Adam Siepel of Cold Spring Harbor Laboratory report these findings in a study published 6th August in PLOS Genetics.



Roughly 50,000 years ago, a group of humans migrated out of Africa and interbred with Neanderthals in Eurasia. But that's not the only time that our ancient human ancestors and their relatives swapped DNA. The sequencing of genomes from Neanderthals and a less well-known ancient group, the Denisovans, has yielded many new insights into these interbreeding events and into the movement of ancient human populations.

In the new paper, the researchers developed an algorithm for analyzing genomes that can identify segments of DNA that came from other species, even if that gene flow occurred thousands of years ago and came from an unknown source. They used the algorithm to look at genomes from two Neanderthals, a Denisovan and two African humans. The researchers found evidence that 3 percent of the Neanderthal genome came from ancient humans, and estimate that the interbreeding occurred between 200,000 and 300,000 years ago. Furthermore, 1 percent of the Denisovan genome likely came from an unknown and more distant relative, possibly Homo erectus, and about 15 percent of these "super-archaic" regions may have been passed down to modern humans who are alive today.

The new findings confirm previously reported cases of gene flow between ancient humans and their relatives, and also point to new instances of interbreeding. Given the number of these events, the researchers say that genetic exchange was likely whenever two groups overlapped in time and space. Their new algorithm solves the challenging problem of identifying tiny remnants of gene flow that occurred hundreds of thousands of years ago, when only a handful of ancient genomes are available. This algorithm may also be useful for studying gene flow in other species where interbreeding occurred, such as in wolves and dogs.

"What I think is exciting about this work is that it demonstrates what you can learn about deep human history by jointly reconstructing the full evolutionary history of a collection of sequences from both modern humans and archaic hominins," said author Adam Siepel. "This new algorithm that Melissa has developed, ARGweaver-D, is able to reach back further in time than any other computational method I've seen. It seems to be especially powerful for detecting ancient introgression."



More information: 

Hubisz MJ, Williams AL, Siepel A (2020) Mapping gene flow between ancient hominins through demography-aware inference of the ancestral recombination graph. PLoS Genet 16(8): e1008895. doi.org/10.1371/journal.pgen.1008895

Thursday, 6 August 2020

Researchers discover new electrocatalyst for turning carbon dioxide into liquid fuel


Catalysts speed up chemical reactions and form the backbone of many industrial processes. For example, they are essential in transforming heavy oil into gasoline or jet fuel. Today, catalysts are involved in over 80 percent of all manufactured products.



A research team, led by the U.S. Department of Energy's (DOE) Argonne National Laboratory in collaboration with Northern Illinois University, has discovered a new electrocatalyst that converts carbon dioxide (CO2) and water into ethanol with very high energy efficiency, high selectivity for the desired final product and low cost. Ethanol is a particularly desirable commodity because it is an ingredient in nearly all U.S. gasoline and is widely used as an intermediate product in the chemical, pharmaceutical and cosmetics industries.

"The process resulting from our catalyst would contribute to the circular carbon economy, which entails the reuse of carbon dioxide," said Di-Jia Liu, senior chemist in Argonne's Chemical Sciences and Engineering division and a UChicago CASE scientist in the Pritzker School of Molecular Engineering, University of Chicago. This process would do so by electrochemically converting the CO2 emitted from industrial processes, such as fossil fuel power plants or alcohol fermentation plants, into valuable commodities at reasonable cost.

The team's catalyst consists of atomically dispersed copper on a carbon-powder support. Via an electrochemical reaction, this catalyst breaks down CO2 and water molecules and selectively reassembles the fragments into ethanol under an external electric field. The electrocatalytic selectivity, or "Faradaic efficiency," of the process is over 90 percent, much higher than for any other reported process. What is more, the catalyst operates stably at low voltage over extended periods.
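For readers curious how that "Faradaic efficiency" figure is defined, it is the fraction of the electrical charge passed through the cell that ends up in the desired product. A minimal sketch (the ethanol amount and total charge below are invented for illustration; only the 12-electron stoichiometry of CO2-to-ethanol reduction is standard chemistry):

# Faradaic efficiency sketch for CO2-to-ethanol electroreduction:
# 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O, i.e. 12 electrons per ethanol.
# The ethanol amount and total charge are illustrative assumptions.
F = 96485.0             # Faraday constant, C per mole of electrons
z = 12                  # electrons transferred per ethanol molecule
ethanol_mol = 1.0e-5    # assumed moles of ethanol produced
total_charge = 12.5     # assumed total charge passed, in coulombs
fe = ethanol_mol * z * F / total_charge
print(f"Faradaic efficiency: {fe:.1%}")   # -> 92.6%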

"With this research, we've discovered a new catalytic mechanism for converting carbon dioxide and water into ethanol," said Tao Xu, a professor in physical chemistry and nanotechnology from Northern Illinois University. "The mechanism should also provide a foundation for development of highly efficient electrocatalysts for carbon dioxide conversion to a vast array of value-added chemicals."

Because CO2 is a stable molecule, transforming it into a different molecule is normally energy intensive and costly. However, according to Liu, "We could couple the electrochemical process of CO2-to-ethanol conversion using our catalyst to the electric grid and take advantage of the low-cost electricity available from renewable sources like solar and wind during off-peak hours." Because the process runs at low temperature and pressure, it can start and stop rapidly in response to the intermittent supply of the renewable electricity.

The team's research benefited from two DOE Office of Science User Facilities at Argonne—the Advanced Photon Source (APS) and Center for Nanoscale Materials (CNM)—as well as Argonne's Laboratory Computing Resource Center (LCRC). "Thanks to the high photon flux of the X-ray beams at the APS, we have captured the structural changes of the catalyst during the electrochemical reaction," said Tao Li, an assistant professor in the Department of Chemistry and Biochemistry at Northern Illinois University and an assistant scientist in Argonne's X-ray Science division. These data along with high-resolution electron microscopy at CNM and computational modeling using the LCRC revealed a reversible transformation from atomically dispersed copper to clusters of three copper atoms each on application of a low voltage. The CO2-to-ethanol catalysis occurs on these tiny copper clusters. This finding is shedding light on ways to further improve the catalyst through rational design.

"We have prepared several new catalysts using this approach and found that they are all highly efficient in converting CO2 to other hydrocarbons," said Liu. "We plan to continue this research in collaboration with industry to advance this promising technology."



More information:

Haiping Xu et al, Highly selective electrocatalytic CO2 reduction to ethanol by metallic clusters dynamically formed from atomically dispersed copper, Nature Energy (2020). DOI: 10.1038/s41560-020-0666-x

Wednesday, 5 August 2020

Study suggests embryos could be susceptible to coronavirus


Genes that are thought to play a role in how the SARS-CoV-2 virus infects our cells have been found to be active in embryos as early as during the second week of pregnancy, say scientists at the University of Cambridge and the California Institute of Technology (Caltech). The researchers say this could mean embryos are susceptible to COVID-19 if the mother gets sick, potentially affecting the chances of a successful pregnancy.



While initially recognized as causing respiratory disease, the SARS-CoV-2 virus, which causes COVID-19, also affects many other organs. Advanced age and obesity are risk factors for complications, but questions concerning the potential effects on fetal health and successful pregnancy for those infected with SARS-CoV-2 remain largely unanswered.

To examine the risks, a team of researchers used technology developed by Professor Magdalena Zernicka-Goetz at the University of Cambridge to culture human embryos through the stage at which they normally implant in the body of the mother, in order to look at the activity—or 'expression'—of key genes in the embryo. Their findings are published today in the Royal Society's journal Open Biology.

On the surface of the SARS-CoV-2 virus are large 'spike' proteins. Spike proteins bind to ACE2, a protein receptor found on the surface of cells in our body. Both the spike protein and ACE2 are then cleaved, allowing genetic material from the virus to enter the host cell. The virus manipulates the host cell's machinery to allow the virus to replicate and spread.

The researchers found patterns of expression of the gene ACE2, which provides the genetic code for the SARS-CoV-2 receptor, and of TMPRSS2, which provides the code for a molecule that cleaves both the viral spike protein and the ACE2 receptor, allowing infection to occur. These genes were expressed during key stages of the embryo's development, and in parts of the embryo that go on to develop into tissues that interact with the maternal blood supply for nutrient exchange. Gene expression requires that the DNA code is first copied into an RNA message, which then directs the synthesis of the encoded protein. The study reports the finding of these RNA messengers.

Professor Magdalena Zernicka-Goetz, who holds positions at both the University of Cambridge and Caltech, said: "Our work suggests that the human embryo could be susceptible to COVID-19 as early as the second week of pregnancy if the mother gets sick.

A human embryo cultured in vitro through the implantation stages and stained to reveal OCT4 transcription factor, magenta; GATA6 transcription factor, white; F-actin, green; and DNA, blue. Analysis of patterns of gene expression in such embryos reveals that ACE2, the receptor for the SARS-CoV-2 virus, and the TMPRSS2 protease that facilitates viral infection are expressed in these embryos, which represent the very early stages of pregnancy. Credit: Zernicka-Goetz lab


"To know whether this really could happen, it now becomes very important to know whether the ACE2 and TMPRSS2 proteins are made and become correctly positioned at cell surfaces. If these next steps are also taking place, it is possible that the virus could be transmitted from the mother and infect the embryo's cells."

Professor David Glover, also from Cambridge and Caltech, added: "Genes encoding proteins that make cells susceptible to infection by this novel coronavirus become expressed very early on in the embryo's development. This is an important stage when the embryo attaches to the mother's womb and undertakes a major remodeling of all of its tissues and for the first time starts to grow. COVID-19 could affect the ability of the embryo to properly implant into the womb or could have implications for future fetal health."

The team say that further research is required using stem cell models and in non-human primates to better understand the risk. However, they say their findings emphasize the importance for women planning for a family to try to reduce their risk of infection.

"We don't want women to be unduly worried by these findings, but they do reinforce the importance of doing everything they can to minimize their risk of infection," said Bailey Weatherbee, a Ph.D. student at the University of Cambridge.



More information: 

Bailey A. T. Weatherbee et al, Expression of SARS-CoV-2 receptor ACE2 and the protease TMPRSS2 suggests susceptibility of the human embryo in the first trimester, Open Biology (2020). DOI: 10.1098/rsob.200162

Surprisingly dense exoplanet challenges planet formation theories


New detailed observations with NSF's NOIRLab facilities reveal a young exoplanet, orbiting a young star in the Hyades cluster, that is unusually dense for its size and age. Weighing in at 25 Earth-masses, and slightly smaller than Neptune, this exoplanet's existence is at odds with the predictions of leading planet formation theories.



New observations of the exoplanet, known as K2-25b, made with the WIYN 0.9-meter Telescope at Kitt Peak National Observatory (KPNO), a Program of NSF's NOIRLab, the Hobby-Eberly Telescope at McDonald Observatory and other facilities, raise new questions about current theories of planet formation. The exoplanet has been found to be unusually dense for its size and age—raising the question of how it came to exist. Details of the findings appear in The Astronomical Journal.

Slightly smaller than Neptune, K2-25b orbits an M-dwarf star—the most common type of star in the galaxy—in 3.5 days. The planetary system is a member of the Hyades star cluster, a nearby cluster of young stars in the direction of the constellation Taurus. The system is approximately 600 million years old, and is located about 150 light-years from Earth.

Planets with sizes between those of Earth and Neptune are common companions to stars in the Milky Way, despite the fact that no such planets are found in our Solar System. Understanding how these "sub-Neptune" planets form and evolve is a frontier question in studies of exoplanets.

Astronomers predict that giant planets form by first assembling a modest rock-ice core of 5-10 times the mass of Earth and then enrobing themselves in a massive gaseous envelope hundreds of times the mass of Earth. The result is a gas giant like Jupiter. K2-25b breaks all the rules of this conventional picture: With a mass 25 times that of Earth and modest in size, K2-25b is nearly all core and very little gaseous envelope. These strange properties pose two puzzles for astronomers. First, how did K2-25b assemble such a large core, many times the 5-10 Earth-mass limit predicted by theory? And second, with its high core mass—and consequent strong gravitational pull—how did it avoid accumulating a significant gaseous envelope?

The team studying K2-25b found the result surprising. "K2-25b is unusual," said Gudmundur Stefansson, a postdoctoral fellow at Princeton University, who led the research team. According to Stefansson, the exoplanet is smaller in size than Neptune but about 1.5 times more massive. "The planet is dense for its size and age, in contrast to other young, sub-Neptune-sized planets that orbit close to their host star," said Stefansson. "Usually these worlds are observed to have low densities—and some even have extended evaporating atmospheres. K2-25b, with the measurements in hand, seems to have a dense core, either rocky or water-rich, with a thin envelope."
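It is easy to check how extreme that combination is with a quick bulk-density estimate. A minimal sketch, assuming a radius of about 3.4 Earth radii for K2-25b (an illustrative value consistent with "slightly smaller than Neptune"; the paper's measured radius may differ slightly):

# Rough bulk density of K2-25b from its mass and an assumed radius.
import math

M_EARTH = 5.972e24    # kg
R_EARTH = 6.371e6     # m

mass = 25 * M_EARTH        # ~25 Earth masses (from the study)
radius = 3.4 * R_EARTH     # assumed radius, slightly smaller than Neptune (~3.9 R_Earth)
density = mass / (4 / 3 * math.pi * radius**3)
print(f"{density / 1000:.1f} g/cm^3")   # -> ~3.5 g/cm^3, versus ~1.6 g/cm^3 for Neptune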



To explore the nature and origin of K2-25b, astronomers determined its mass and density. Although the exoplanet's size was initially measured with NASA's Kepler satellite, the size measurement was refined using high-precision measurements from the WIYN 0.9-meter Telescope at KPNO and the 3.5-meter telescope at Apache Point Observatory (APO) in New Mexico. The observations made with these two telescopes took advantage of a simple but effective technique that was developed as part of Stefansson's doctoral thesis. The technique uses a clever optical component called an Engineered Diffuser, which can be obtained off the shelf for around $500. It spreads out the light from the star to cover more pixels on the camera, allowing the brightness of the star during the planet's transit to be more accurately measured, and resulting in a higher-precision measurement of the size of the orbiting planet, among other parameters.

An example of a 5 cm by 5 cm (2 inch by 2 inch) Engineered Diffuser. Credit: Gudmundur Stefansson/RPC Photonics
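One way to see why spreading starlight helps, at least for the photon-noise part of the error budget: diffusing the light over many more pixels lets the camera collect far more photons per exposure before any single pixel saturates, and the Poisson-limited precision improves with the square root of the photon count. A minimal sketch with invented numbers (real error budgets also include scintillation and detector effects that the diffuser helps tame):

# Why a diffuser improves photometric precision: more pixels -> more
# photons per exposure before saturation -> smaller Poisson error.
# All numbers here are illustrative assumptions, not instrument specs.
import math

full_well = 100_000                   # assumed electrons per pixel at saturation
for n_pixels in (10, 1000):           # sharp image vs. diffused image
    max_photons = full_well * n_pixels
    precision_ppm = 1e6 / math.sqrt(max_photons)
    print(f"{n_pixels:5d} pixels -> ~{precision_ppm:.0f} ppm per exposure")
# Spreading over 100x more pixels improves the Poisson limit tenfold.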


"The innovative diffuser allowed us to better define the shape of the transit and thereby further constrain the size, density and composition of the planet," said Jayadev Rajagopal, an astronomer at NOIRLab who was also involved in the study.

For its low cost, the diffuser delivers an outsized scientific return. "Smaller aperture telescopes, when equipped with state-of-the-art, but inexpensive, equipment can be platforms for high impact science programs," explains Rajagopal. "Very accurate photometry will be in demand for exploring host stars and planets in tandem with space missions and larger apertures from the ground, and this is an illustration of the role that a modest-sized 0.9-meter telescope can play in that effort."

Thanks to the observations with the diffusers available on the WIYN 0.9-meter and APO 3.5-meter telescopes, astronomers are now able to predict with greater precision when K2-25b will transit its host star. Whereas before transits could only be predicted with a timing precision of 30-40 minutes, they are now known with a precision of 20 seconds. The improvement is critical to planning follow-up observations with facilities such as the international Gemini Observatory and the James Webb Space Telescope.

Many of the authors of this study are also involved in another exoplanet-hunting project at KPNO: the NEID spectrometer on the WIYN 3.5-meter Telescope. NEID enables astronomers to measure the motion of nearby stars with extreme precision—roughly three times better than the previous generation of state-of-the-art instruments—allowing them to detect, determine the mass of, and characterize exoplanets as small as Earth.



More information: 

The Habitable-zone Planet Finder Reveals A High Mass and a Low Obliquity for the Young Neptune K2-25b, arXiv:2007.12766 [astro-ph.EP] arxiv.org/abs/2007.12766

Tuesday, 4 August 2020

Large study confirms vitamin D does not reduce risk of depression in adults


Vitamin D supplementation does not protect against depression in middle-age or older adulthood, according to results from one of the largest ever studies of its kind. This answers a longstanding question that has likely encouraged some people to take the vitamin.



In this study, however, "There was no significant benefit from the supplement for this purpose. It did not prevent depression or improve mood," says Olivia I. Okereke, MD, MS, of Massachusetts General Hospital's (MGH) Psychiatry Department.

Okereke is the lead author of the report and principal investigator of this study, which will be published in JAMA on Aug. 4. It included more than 18,000 men and women aged 50 years or older. Half the participants received vitamin D3 (cholecalciferol) supplementation for an average of five years, and the other half received a matching placebo for the same duration.

Vitamin D is sometimes called the "sunshine vitamin" because the skin can naturally create it when exposed to sunlight. Numerous prior studies showed that low blood levels of vitamin D (25-hydroxy vitamin D) were associated with higher risk for depression in later life, but there have been few large-scale randomized trials necessary to determine causation. Now Okereke and her colleagues have delivered what may be the definitive answer to this question.

"One scientific issue is that you actually need a very large number of study participants to tell whether or not a treatment is helping to prevent development of depression," Okereke explains. "With nearly 20,000 people, our study was statistically powered to address this issue."

This study, called VITAL-DEP (Depression Endpoint Prevention in the Vitamin D and Omega-3 Trial), was an ancillary study to VITAL, a randomized clinical trial of cardiovascular disease and cancer prevention among nearly 26,000 people in the US.

From that group, Okereke and her colleagues studied the 18,353 men and women who did not already have any indication of clinical depression to start with, and then tested whether vitamin D3 prevented them from becoming depressed.

The results were clear. Among the 18,353 randomized participants, the researchers found that the risk of depression or clinically relevant depressive symptoms was not significantly different between those receiving active vitamin D3 supplements and those on placebo, and no significant differences were seen between treatment groups in mood scores over time.

"It's not time to throw out your vitamin D yet though, at least not without your doctor's advice," says Okereke. Some people take it for reasons other than to elevate mood.

"Vitamin D is known to be essential for bone and metabolic health, but randomized trials have cast doubt on many of the other presumed benefits," said the paper's senior author, JoAnn Manson, MD, DrPH, at Brigham and Women's Hospital.



Reference: 

Effect of Long-term Vitamin D3 Supplementation vs Placebo on Risk of Depression or Clinically Relevant Depressive Symptoms and on Change in Mood Scores: A Randomized Clinical Trial. JAMA. 2020;324(5):471-480. doi: 10.1001/jama.2020.10224

A new test to investigate the origin of cosmic structure


Many cosmologists believe that the universe's structure is a result of quantum fluctuations that occurred during early expansion. Confirming this hypothesis, however, has proven highly challenging so far, as it is hard to distinguish between quantum and classical primordial fluctuations when analyzing existing cosmological data.



Two researchers at the University of California and Deutsches Elektronen-Synchrotron (DESY) in Germany have recently devised a test based on the notion of primordial non-Gaussianity that could help to ascertain the origin of cosmic structure. In their paper, published in Physical Review Letters, they argue that detecting primordial non-Gaussianity could help to determine whether the patterns of the universe originated from quantum or classical fluctuations.

"One of the most beautiful ideas in all of science is that the structure we observed in the cosmos resulted from quantum fluctuations in the very early universe that were then stretched by a rapid accelerated expansion," Rafael Porto, one of the researchers who carried out the study, told Phys.org. "This 'inflationary' paradigm makes a lot of predictions which have been corroborated by data, yet the quantum nature of the primordial seed is extremely difficult to demonstrate directly."

The main reason that demonstrating the quantum origin of the universe's structure is so difficult is that inflation could have also stretched classical perturbations, resulting in a very similar galaxy distribution. In their paper, Porto and his colleague Daniel Green introduced the idea that while quantum and classical fluctuations would have resulted in similar galaxy distributions, some particular patterns would differ in structures of a quantum origin. Observing these patterns could therefore allow researchers to test the origin of cosmic structure.

"Much of the formalism we used to study the patterns of galaxies in the sky is similar to the way particle physicists study scattering processes at colliders," Porto explained. "In cosmology we talk about 'correlations,' while in particle physics we talk about 'amplitudes,' but there's a lot in common between the two. Using some basic physical principles and symmetries, we demonstrated that classical mechanisms would have produced a large number of particles and as a result a very specific signature in the pattern of galaxies, such as 'bumps' in collider data."



Porto and Green showed that a cosmological signature resembling the presence of 'bumps' in collider data may indicate that the structure of the universe originated from classical fluctuations. On the other hand, the absence of these 'bumps' would suggest that zero-point quantum fluctuations were the key agents behind the formation of cosmic structure.

"People have tried to find a signature for the quantum origin of structure before and found that the effect is suppressed by 115 orders of magnitude, that's a 0.…. 115 times… 1 effect," Porto added. "We have shown that, while this is difficult to observe due to contamination from other sources during the process of structure formation, if there's a primordial signal at all, the effect of classical perturbations is order 1. This means that we have achieved an improvement of 115 orders of magnitude over previous proposals."

In recent decades, cosmologists investigating the origin of the universe's structure have primarily been looking for the so-called 'B mode' polarization in the cosmic microwave background (CMB), as this polarization could be a product of primordial quantum gravitational effects during inflation. Rather than looking for the 'B mode' polarization as an indicator of quantum gravitational effects, Porto and Green turned the problem around and found that another pattern, known as the "folded configuration for the correlation functions," carries the seed of classical fluctuations.

"There is a long history of people testing quantum mechanics in the lab using something called Bell's inequalities," Green told Phys.org. "The essential idea is that, if you have a quantum system, there are certain kinds of measurements you can do that will expose the true quantum mechanical nature of the state. The challenge in cosmology is that (1) the universe we observe is basically classical and (2) we can't perform 'experiments,' as we don't get to manipulate the state of the universe. The novelty of our work is that we showed you can still tell that it came from a quantum mechanical state in the distant past, despite these large obstacles."

Porto and Green's recent study introduces a new method to test the hypothesis that the universe's structure is of a quantum nature. Essentially, the researchers theorize that if one cannot observe a 'bump' in the so-called folded configuration of non-Gaussian correlation functions, the structure of the universe must have originated from zero-point quantum fluctuations, since in classical physics the vacuum is empty.

The litmus test introduced in their paper differs greatly from previously proposed tests of quantum mechanics and thus circumvents many of the issues associated with these tests. In their future work, Porto and Green plan to investigate whether their test could also be applied to lab-based experiments on quantum systems.

"Dan and I are now also thinking about how quantum information ideas can further pinpoint the nature of the primordial seed and in more practical terms also help us provide a faster algorithm to simulate the evolution of the universe, perhaps as quantum computers will do one day," Porto said.



More information: 

Daniel Green et al. Signals of a Quantum Universe, Physical Review Letters (2020). DOI: 10.1103/PhysRevLett.124.251302

Monday, 3 August 2020

Early Mars was covered in ice sheets, not flowing rivers: study


A large number of the valley networks scarring Mars's surface were carved by water melting beneath glacial ice, not by free-flowing rivers as previously thought, according to new UBC research published today in Nature Geoscience. The findings effectively throw cold water on the dominant "warm and wet ancient Mars" hypothesis, which postulates that rivers, rainfall and oceans once existed on the red planet.



To reach this conclusion, lead author Anna Grau Galofre, former Ph.D. student in the department of earth, ocean and atmospheric sciences, developed and used new techniques to examine thousands of Martian valleys. She and her co-authors also compared the Martian valleys to the subglacial channels in the Canadian Arctic Archipelago and uncovered striking similarities.

"For the last 40 years, since Mars's valleys were first discovered, the assumption was that rivers once flowed on Mars, eroding and originating all of these valleys," says Grau Galofre. "But there are hundreds of valleys on Mars, and they look very different from each other. If you look at Earth from a satellite you see a lot of valleys: some of them made by rivers, some made by glaciers, some made by other processes, and each type has a distinctive shape. Mars is similar, in that valleys look very different from each other, suggesting that many processes were at play to carve them."

The similarity between many Martian valleys and the subglacial channels on Devon Island in the Canadian Arctic motivated the authors to conduct their comparative study. "Devon Island is one of the best analogues we have for Mars here on Earth—it is a cold, dry, polar desert, and the glaciation is largely cold-based," says co-author Gordon Osinski, professor in Western University's department of earth sciences and Institute for Earth and Space Exploration.

Collage showing Mars's Maumee valleys (top half) superimposed with channels on Devon Island in Nunavut (bottom half). The shape of the channels, as well as the overall network, appears almost identical. Credit: Anna Grau Galofre

In total, the researchers analyzed more than 10,000 Martian valleys, using a novel algorithm to infer their underlying erosion processes. "These results are the first evidence for extensive subglacial erosion driven by channelized meltwater drainage beneath an ancient ice sheet on Mars," says co-author Mark Jellinek, professor in UBC's department of earth, ocean and atmospheric sciences. "The findings demonstrate that only a fraction of valley networks match patterns typical of surface water erosion, which is in marked contrast to the conventional view. Using the geomorphology of Mars' surface to rigorously reconstruct the character and evolution of the planet in a statistically meaningful way is, frankly, revolutionary."



Grau Galofre's theory also helps explain how the valleys would have formed 3.8 billion years ago on a planet that is further away from the sun than Earth, during a time when the sun was less intense. "Climate modelling predicts that Mars' ancient climate was much cooler during the time of valley network formation," says Grau Galofre, currently a SESE Exploration Post-doctoral Fellow at Arizona State University. "We tried to put everything together and bring up a hypothesis that hadn't really been considered: that channels and valley networks can form under ice sheets, as part of the drainage system that forms naturally under an ice sheet when there's water accumulated at the base."

These environments would also support better survival conditions for possible ancient life on Mars. A sheet of ice would lend more protection and stability to underlying water, as well as providing shelter from solar radiation in the absence of a magnetic field—something Mars once had, but which disappeared billions of years ago.

While Grau Galofre's research was focused on Mars, the analytical tools she developed for this work can be applied to uncover more about the early history of our own planet. Jellinek says he intends to use these new algorithms to analyze and explore erosion features left over from very early Earth history.

"Currently we can reconstruct rigorously the history of global glaciation on Earth going back about a million to five million years," says Jellinek. "Anna's work will enable us to explore the advance and retreat of ice sheets back to at least 35 million years ago—to the beginnings of Antarctica, or earlier—back in time well before the age of our oldest ice cores. These are very elegant analytical tools."



More information: 

Valley formation on early Mars by subglacial and fluvial erosion, Nature Geoscience (2020). DOI: 10.1038/s41561-020-0618-x

Energy demands limit our brains' information processing capacity


Our brains have an upper limit on how much they can process at once due to a constant but limited energy supply, according to a new UCL study using a brain imaging method that measures cellular metabolism.



The study, published in the Journal of Neuroscience, found that paying attention can change how the brain allocates its limited energy; as the brain uses more energy in processing what we attend to, less energy is supplied to processing outside our attention focus.

Explaining the research, senior author Professor Nilli Lavie (UCL Institute of Cognitive Neuroscience) said: "It takes a lot of energy to run the human brain. We know that the brain constantly uses around 20% of our metabolic energy, even while we rest our mind, and yet it's widely believed that this constant but limited supply of energy does not increase when there is more for our mind to process.

"If there's a hard limit on energy supply to the brain, we suspected that the brain may handle challenging tasks by diverting energy away from other functions, and prioritizing the focus of our attention.

"Our findings suggest that the brain does indeed allocate less energy to the neurons that respond to information outside the focus of our attention when our task becomes harder. This explains why we experience inattentional blindness and deafness even to critical information that we really want to be aware of."

The research team of cognitive neuroscientists and biomedical engineers measured cerebral metabolism with a non-invasive optical imaging method. In this way they could see how much energy brain regions use as people focus attention on a task, and how that changes when the task becomes more mentally demanding. They used broadband near-infrared spectroscopy to measure the oxidation levels of an enzyme involved in energy metabolism in brain cells' mitochondria, the energy generators that power each cell's biochemical reactions.

The researchers employed their technique to measure brain metabolism in different regions of the visual cortex in the brains of 18 people as they carried out visual search tasks that were either complex or simple, while sometimes also presented with a visual distraction that was irrelevant to the task.

They identified elevated cellular metabolism in the brain areas responsive to the attended task stimuli as the task became more complex, and these increases were directly mirrored by reduced cellular metabolism levels in areas responding to unattended stimuli. This push-pull pattern was closely synchronised, showing a trade-off of limited energy supply between attended and unattended processing.

Co-author Professor Ilias Tachtsidis (UCL Medical Physics & Biomedical Engineering) said: "By using our in-house developed broadband near-infrared spectroscopy, an optical brain monitoring technology we developed at UCL, we were better able to measure an enzyme in the mitochondria (the power factory of the cells) that plays an integral part in metabolism."

First author, Ph.D. student Merit Bruckmaier (UCL Institute of Cognitive Neuroscience) said: "Using these methods, our conclusions about brain energy usage are more direct and telling than in past studies using fMRI imaging methods that measure cerebral blood oxygenation levels instead of an intracellular marker of metabolism."

Professor Lavie said: "In this way, we have managed to connect people's experience of brain overload to what's going on inside their neurons, as high energy demands for one purpose are balanced out by reduced energy use related to any other purpose. If we try to process too much information we may feel the strain of overload because of the hard limit on our brain capacity.

"During recent months, we've heard from a lot of people who say they're feeling overwhelmed, with constant news updates and new challenges to overcome. When your brain is at capacity, you are likely to fail to process some information. You might not even notice an important email come in because your child was speaking to you, or you might miss the oven timer go off because you received an unexpected work call. Our findings may explain these often-frustrating experiences of inattentional blindness or deafness."



More information: 

Attention and capacity limits in perception: A cellular metabolism account. Merit Bruckmaier, Ilias Tachtsidis, Phong Phan and Nilli Lavie, Journal of Neuroscience (2020). DOI: 10.1523/JNEUROSCI.2368-19.2020

Sunday, 2 August 2020

Room temperature superconductivity creeping toward possibility


The possibility of achieving room temperature superconductivity took a tiny step forward with a recent discovery by a team of Penn State physicists and materials scientists.



The surprising discovery involved layering a two-dimensional material called molybdenum sulfide with another material called molybdenum carbide. Molybdenum carbide is a known superconductor — electrons can flow through the material without any resistance. Even the best of metals, such as silver or copper, lose energy through heat. This loss makes long-distance transmission of electricity more costly.

“Superconductivity occurs at very low temperatures, close to absolute zero or 0 Kelvin,” said Mauricio Terrones, corresponding author on a paper in Proceedings of the National Academy of Sciences published this week. “The alpha phase of Moly carbide is superconducting at 4 Kelvin.”

When layering metastable phases of molybdenum carbide with molybdenum sulfide, superconductivity occurs at 6 Kelvin, a 50% increase. Although this is not remarkable in itself — other materials have been shown to be superconductive at temperatures as high as 150 Kelvin — it was still an unexpected phenomenon that portends a new method to increase superconductivity at higher temperatures in other superconducting materials.

The team used modeling techniques to understand how the effect occurred experimentally.

“Calculations using quantum mechanics as implemented within density functional theory assisted in the interpretation of experimental measurements to determine the structure of the buried molybdenum carbide/molybdenum sulfide interfaces," said Susan Sinnott, professor of materials science and engineering and head of the department. "This work is a nice example of the way in which materials synthesis, characterization and modeling can come together to advance the discovery of new material systems with unique properties.”

According to Terrones, “It’s a fundamental discovery, but not one anyone believed would work. We are observing a phenomenon that to the best of our knowledge has never been observed before.”

The team will continue experimenting with superconductive materials with the goal of someday finding materials combinations that can carry energy through the grid with zero resistance.



Reference:

Fu Zhang, Wenkai Zheng, Yanfu Lu, Lavish Pabbi, Kazunori Fujisawa, Ana Laura Elías, Anna R. Binion, Tomotaroh Granzier-Nakajima, Tianyi Zhang, Yu Lei, Zhong Lin, Eric W. Hudson, Susan B. Sinnott, Luis Balicas, Mauricio Terrones. Superconductivity enhancement in phase-engineered molybdenum carbide/disulfide vertical heterostructures. Proceedings of the National Academy of Sciences, 2020; 202003422 DOI: 10.1073/pnas.2003422117

Saturday, 1 August 2020

Scientists discover new class of semiconducting entropy-stabilized materials


Semiconductors are important materials in numerous functional applications such as digital and analog electronics, solar cells, LEDs, and lasers. Semiconducting alloys are particularly useful for these applications since their properties can be engineered by tuning the mixing ratio or the alloy ingredients. However, the synthesis of multicomponent semiconductor alloys has been a big challenge due to thermodynamic phase segregation of the alloy into separate phases. Recently, University of Michigan researchers Emmanouil (Manos) Kioupakis and Pierre F. P. Poudeu, both in the Materials Science and Engineering Department, utilized entropy to stabilize a new class of semiconducting materials, based on GeSnPbSSeTe high-entropy chalcogenide alloys, a discovery that paves the way for wider adoption of entropy-stabilized semiconductors in functional applications. Their article, "Semiconducting high-entropy chalcogenide alloys with ambi-ionic entropy stabilization and ambipolar doping," was recently published in the journal Chemistry of Materials.



Entropy, a thermodynamic quantity that quantifies the degree of disorder in a material, has been exploited to synthesize a vast array of novel materials by mixing each component in an equimolar fashion, from high-entropy metallic alloys to entropy-stabilized ceramics. Despite having a large enthalpy of mixing, these materials can surprisingly crystallize in a single crystal structure, enabled by the large configurational entropy in the lattice. Kioupakis and Poudeu hypothesized that this principle of entropy stabilization can be applied to overcome the synthesis challenges of semiconducting alloys that prefer to segregate into thermodynamically more stable compounds. They tested their hypothesis on a 6-component II-VI chalcogenide alloy derived from the PbTe structure by mixing Ge, Sn, and Pb on the cation site, and S, Se, and Te on the anion site.
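The entropy in question can be made concrete with the ideal configurational-entropy formula S_conf = -R Σ x_i ln x_i, applied to each sublattice. A minimal sketch for the equimolar six-component alloy (a textbook idealization, not the paper's first-principles calculation):

# Ideal configurational entropy of equimolar GeSnPbSSeTe, per mole of
# formula units: both the cation and anion sublattices contribute.
import math

R = 8.314   # gas constant, J/(mol K)

def sublattice_entropy(fractions):
    # S = -R * sum(x * ln x) for the site fractions x on one sublattice
    return -R * sum(x * math.log(x) for x in fractions)

cations = [1/3, 1/3, 1/3]   # Ge, Sn, Pb share the cation site
anions = [1/3, 1/3, 1/3]    # S, Se, Te share the anion site
s_conf = sublattice_entropy(cations) + sublattice_entropy(anions)
print(f"S_conf = {s_conf:.1f} J/(mol K)")   # = 2R ln 3, ~18.3 J/(mol K)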

Using high-throughput first-principles calculations, Kioupakis uncovered the complex interplay between the enthalpy and entropy in GeSnPbSSeTe high-entropy chalcogenide alloys. He found that the large configurational entropy from both anion and cation sublattices stabilizes the alloys into single-phase rocksalt solid solutions at the growth temperature. Despite being metastable at room temperature, these solid solutions can be preserved by fast cooling under ambient conditions. Poudeu later verified the theory predictions by synthesizing the equimolar composition (Ge1/3Sn1/3Pb1/3S1/3Se1/3Te1/3) by a two-step solid-state reaction followed by fast quenching in liquid nitrogen. The synthesized powder showed well-defined XRD patterns corresponding to a pure rocksalt structure. Furthermore, they observed a reversible phase transition between single-phase solid solution and multiple-phase segregation in DSC analysis and temperature-dependent XRD, which is a key feature of entropy stabilization.

What makes high-entropy chalcogenides intriguing is their functional properties. Previously discovered high-entropy materials are either conducting metals or insulating ceramics, with a clear dearth in the semiconducting regime. Kioupakis and Poudeu found that the equimolar GeSnPbSSeTe is an ambipolarly dopable semiconductor, with evidence from a calculated band gap of 0.86 eV and a sign reversal of the measured Seebeck coefficient upon p-type doping with Na acceptors and n-type doping with Bi donors. The alloy also exhibits an ultralow thermal conductivity that is nearly independent of temperature. These fascinating functional properties make GeSnPbSSeTe a promising new material to be deployed in electronic, optoelectronic, photovoltaic, and thermoelectric devices.

Entropy stabilization is a general and powerful method to realize a vast array of materials compositions. The discovery of entropy stabilization in semiconducting chalcogenide alloys by the team at UM is only the tip of the iceberg, and it can pave the way for novel functional applications of entropy-stabilized materials.



More information:

Zihao Deng et al, Semiconducting High-Entropy Chalcogenide Alloys with Ambi-ionic Entropy Stabilization and Ambipolar Doping, Chemistry of Materials (2020). DOI: 10.1021/acs.chemmater.0c01555

How human sperm really swim: New research challenges centuries-old assumption


A breakthrough in fertility science by researchers from Bristol and Mexico has shattered the universally accepted view of how sperm 'swim'.



More than three hundred years after Antonie van Leeuwenhoek used one of the earliest microscopes to describe human sperm as having a "tail, which, when swimming, lashes with a snakelike movement, like eels in water", scientists have revealed this is an optical illusion.

Using state-of-the-art 3-D microscopy and mathematics, Dr. Hermes Gadelha from the University of Bristol, Dr. Gabriel Corkidi and Dr. Alberto Darszon from the Universidad Nacional Autonoma de Mexico, have pioneered the reconstruction of the true movement of the sperm tail in 3-D.

Using a high-speed camera capable of recording over 55,000 frames in one second, and a microscope stage with a piezoelectric device to move the sample up and down at an incredibly high rate, they were able to scan the sperm swimming freely in 3-D.

The ground-breaking study, published in the journal Science Advances, reveals the sperm tail is in fact wonky and only wiggles on one side. While this should mean the sperm's one-sided stroke would have it swimming in circles, sperm have found a clever way to adapt and swim forwards.


"Human sperm figured out if they roll as they swim, much like playful otters corkscrewing through water, their one-sided stoke would average itself out, and they would swim forwards," said Dr. Gadelha, head of the Polymaths Laboratory at Bristol's Department of Engineering Mathematics and an expert in the mathematics of fertility.

"The sperms' rapid and highly synchronized spinning causes an illusion when seen from above with 2-D microscopes—the tail appears to have a side-to-side symmetric movement, "like eels in water", as described by Leeuwenhoek in the 17th century.

"However, our discovery shows sperm have developed a swimming technique to compensate for their lop-sidedness and in doing so have ingeniously solved a mathematical puzzle at a microscopic scale: by creating symmetry out of asymmetry," said Dr. Gadelha.

"The otter-like spinning of human sperm is however complex: the sperm head spins at the same time that the sperm tail rotates around the swimming direction. This is known in physics as precession, much like when the orbits of Earth and Mars precess around the sun."

Computer-assisted semen analysis systems in use today, both in clinics and for research, still use 2-D views to look at sperm movement. Therefore, like Leeuwenhoek's first microscope, they are still prone to this illusion of symmetry while assessing semen quality. This discovery, with its novel use of 3-D microscope technology combined with mathematics, may provide fresh hope for unlocking the secrets of human reproduction.

"With over half of infertility caused by male factors, understanding the human sperm tail is fundamental to developing future diagnostic tools to identify unhealthy sperm," adds Dr. Gadelha, whose work has previously revealed the biomechanics of sperm bendiness and the precise rhythmic tendencies that characterize how a sperm moves forward.

Dr. Corkidi and Dr. Darszon pioneered the 3-D microscopy for sperm swimming.

"This was an incredible surprise, and we believe our state-of the-art 3-D microscope will unveil many more hidden secrets in nature. One day this technology will become available to clinical centers," said Dr. Corkidi.

"This discovery will revolutionize our understanding of sperm motility and its impact on natural fertilization. So little is known about the intricate environment inside the female reproductive tract and how sperm swimming impinge on fertilization. These new tools open our eyes to the amazing capabilities sperm have," said Dr. Darszon.



More information: 

"Human sperm uses asymmetric and anisotropic flagellar controls to regulate swimming symmetry and cell steering" Science Advances (2020). DOI: 10.1126/sciadv.aba5168

Thursday, 30 July 2020

Scientists make quantum technology smaller


A way of shrinking the devices used in quantum sensing systems has been developed by researchers at the UK Quantum Technology Hub Sensors and Timing, which is led by the University of Birmingham.



Sensing devices have a huge number of industrial uses, from carrying out ground surveys to monitoring volcanoes. Scientists working on ways to improve the capabilities of these sensors are now using quantum technologies, based on cold atoms, to improve their sensitivity.

Machines developed in laboratories using quantum technology, however, are cumbersome and difficult to transport, making current designs unsuitable for most industrial uses.

The team of researchers has used a new approach that will enable quantum sensors to shrink to a fraction of their current size. The research was conducted by an international team led by University of Birmingham and SUSTech in China in collaboration with Paderborn University in Germany. Their results are published in Science Advances.

The quantum technology currently used in sensing devices works by finely controlling laser beams to engineer and manipulate atoms at super-cold temperatures. To manage this, the atoms have to be contained within a vacuum-sealed chamber where they can be cooled to the desired temperatures.

A key challenge in miniaturising the instruments is in reducing the space required by the laser beams, which typically need to be arranged in three pairs, set at angles. The lasers cool the atoms by firing photons against the moving atom, lowering its momentum and therefore cooling it down.
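The momentum kick from a single photon is tiny, which gives a feel for the numbers involved. A minimal sketch for rubidium-87 at its 780 nm cooling transition (a common choice in cold-atom instruments, assumed here purely for illustration; the paper's setup may use a different species):

# Recoil velocity per absorbed photon: v = h / (lambda * m).
# Rb-87 at 780 nm is an illustrative assumption for a typical cold-atom setup.
h = 6.62607015e-34    # Planck constant, J s
m_rb87 = 1.443e-25    # mass of a Rb-87 atom, kg
wavelength = 780e-9   # cooling-transition wavelength, m

v_recoil = h / (wavelength * m_rb87)
print(f"v_recoil = {v_recoil * 1e3:.2f} mm/s")   # ~5.9 mm/s per photon
print(f"photons to stop a 270 m/s atom: ~{270 / v_recoil:,.0f}")   # tens of thousands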

The new findings show how a new technique can be used to reduce the space needed for the laser delivery system. The method uses devices called optical metasurfaces – manufactured structures that can be used to control light.

A metasurface optical chip can be designed to diffract a single beam into five separate, well-balanced and uniform beams that are used to supercool the atoms. This single chip can replace the complex optical devices that currently make up the cooling system.

Metasurface photonic devices have inspired a range of novel research activities in the past few years, and this is the first time researchers have been able to demonstrate their potential in cold atom quantum devices.

Dr Yu-Hung Lien, lead author of the study, says: "The mission of the UK Quantum Technology Hub is to deliver technologies that can be adopted and used by industry. Designing devices that are small enough to be portable, or which can fit into industrial processes and practices, is vital. This new approach represents a significant step forward in that effort."

The team have succeeded in producing an optical chip that measures just 0.5mm across, resulting in a platform for future sensing devices measuring about 30cm cubed. The next step will be to optimise the size and the performance of the platform to produce the maximum sensitivity for each application.



Reference:

Lingxiao Zhu, Xuan Liu, Basudeb Sain, Mengyao Wang, Christian Schlickriede, Yutao Tang, Junhong Deng, Kingfai Li, Jun Yang, Michael Holynski, Shuang Zhang, Thomas Zentgraf, Kai Bongs, Yu-Hung Lien, Guixin Li. A dielectric metasurface optical chip for the generation of cold atoms. Science Advances, 2020; 6 (31): eabb6667 DOI: 10.1126/sciadv.abb6667
