Your source for the latest science & space press releases

Sunday, 2 January 2022

What the thermodynamics of clocks tell us about the mysteries of time


Surprising new insights about the strange physics underlying how clocks work could transform our understanding of time’s arrow – and hint at how time works at the quantum scale

A CENTURY ago, two intellectual giants met to debate the nature of time. One was the French philosopher Henri Bergson, a superstar whose fans caused the first traffic jam on Broadway in New York as they flocked to one of his earlier appearances. He believed there was more to time than something that can be measured by clocks, captured by mathematics or explained by psychology. He argued that the way we experience it, with a duration and direction, could only be revealed through philosophy.

Bergson’s opponent, a physicist called Albert Einstein, disagreed. After developing his theories of relativity he believed time was a physical entity, separate from human consciousness, that could speed up or slow down. Einstein thought that time was interwoven in space in a static cosmos called the block universe which lacks a clear past, present or future.

Almost 100 years later, the question of why the time we perceive is so different from the time postulated in physics is still hotly debated. Now, fresh clues are starting to suggest the devices we use to measure time might be crucial to arriving at an answer.

Those clues relate to the fact that in general relativity, clocks are incorporated into the theory as perfectly idealised objects, with smooth readings that are accurate no matter how much you zoom in, when they actually are anything but. “Clocks are physical things which are made up of physical systems, and so we kind of know that idealisation can’t be right,” says Emily Adlam at the Rotman Institute of Philosophy at Western University in Canada. “A more realistic understanding of clocks may ultimately be the key to understanding time.”

We can measure time using anything that goes through a change – sundials use the shifting sun, water clocks tap the flow of water and even the temperature of a cup of tea can help us estimate when it was brewed. Today, we mostly use sophisticated mechanical and atomic clocks, which can measure time much more accurately than a cup of tea, because they tick reliably with a certain frequency.

Since astronomer Christiaan Huygens invented the first pendulum clock in the 17th century, we have been steadily improving the accuracy of scientific clocks, with phenomenal results. The best modern machines can measure each second so accurately that they wouldn’t miss a beat in 20 billion years – longer than the age of the universe. But it turns out there may be a price to pay for such accuracy.
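To put that figure in perspective, a quick back-of-envelope calculation (using the article’s “one second in 20 billion years” claim) converts it into a fractional error:

```python
# What does "off by one second in 20 billion years" mean as a fractional error?
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds in a year
total_seconds = 20e9 * seconds_per_year  # 20 billion years, from the article
fractional_error = 1 / total_seconds     # one lost second over that whole span
print(f"fractional error ~ {fractional_error:.1e}")  # ~1.6e-18
```

That is consistent with the state of the art: the best optical lattice clocks quote fractional uncertainties at the 10⁻¹⁸ level.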

To produce their ticks, clocks need a source of energy. A grandfather clock must be wound up and a wall clock is powered by a battery. The most accurate atomic clocks, with ticks that correspond to electromagnetic signals given off by atoms changing energy levels, are driven by high-powered lasers.

This isn’t surprising. But rather than just requiring energy to run each mechanical part, new research suggests something more might be at play. Clocks could be a type of so-called thermodynamic machine, with fundamental constraints on their performance set by the underlying physics. If this is true, not only will it mean there could be a limit to how accurately we can measure time’s passing, it “will have a huge impact on how philosophers think about time”, says Gerard Milburn, a quantum physicist at the University of Queensland, Australia.

We know of two types of thermodynamic machine. The first comprises heat engines – things like fridges and combustion engines – which have a maximum efficiency set by thermodynamics. The second group encompasses information storage devices, like DNA and hard discs. In these, thermodynamics tells us the cost of erasing information. If clocks are a third, it would mean there are limits on how accurately we can tell the time, due to the involvement of energy’s messy cousin, entropy.
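The “cost of erasing information” mentioned here is Landauer’s principle: wiping one bit of information must dissipate at least kT ln 2 of energy as heat. A minimal sketch of that bound at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2)
E_min = k_B * T * math.log(2)
print(f"minimum cost to erase one bit: {E_min:.2e} J")  # ~2.9e-21 J
```

Tiny in everyday terms, but a hard floor set by thermodynamics, just as the speculation here is that thermodynamics may set a floor on the cost of an accurate tick.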

The maximum efficiency of heat engines was determined by engineer Sadi Carnot in 1824, before entropy was defined. But his calculation paved the way for the discovery of the second law of thermodynamics, which says any closed system – something that nothing can enter or leave – will increase in entropy, a measure of disorder or randomness, over time.
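Carnot’s bound can be stated in one line: no engine running between a hot and a cold reservoir can convert more than a fraction 1 − T_cold/T_hot of the heat it draws into work. A sketch with illustrative temperatures (the 450 K and 300 K figures are examples, not from the article):

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot's 1824 bound: the maximum fraction of heat a reversible
    engine can convert into work; temperatures in kelvin."""
    return 1 - t_cold / t_hot

# e.g. a boiler at 450 K exhausting to surroundings at 300 K
print(carnot_efficiency(450, 300))  # ~0.33: at most a third of the heat becomes work
```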

Low entropy means high order. If the atoms in a box of gas clustered in one corner rather than being spread out chaotically, entropy would be low. But because there are fewer ways for atoms to be ordered than disordered, making the latter more likely, closed systems – like the universe – tend towards disorder. A cup of hot tea loses heat to its surroundings, raising overall entropy, but never spontaneously heats up. This creates an arrow of time.
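The “fewer ways to be ordered than disordered” point can be made concrete by counting arrangements. A toy example (20 atoms, each sitting in either the left or right half of a box; the numbers are illustrative):

```python
from math import comb

N = 20                       # toy gas of 20 atoms
W_clustered = comb(N, N)     # all atoms crowded into one half: exactly 1 arrangement
W_spread = comb(N, N // 2)   # split evenly between the halves: many arrangements
print(W_clustered, W_spread) # 1 vs 184756
# Boltzmann's S = k ln W: the evenly spread state has far higher entropy
# simply because there are far more microscopic ways to realise it.
```

With a realistic number of atoms (~10²³) the disparity becomes so astronomical that the clustered state is never seen, which is why closed systems drift one way in time.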

The second law is the only law of physics that is irreversible in time. Because of this, thermodynamics is often used to explain the arrow of time we perceive. But the second law doesn’t tell the whole story. There is still the question of why we only ever experience time moving forwards – many physicists today argue that this is simply an illusion.

If the arrow of time in thermodynamics could be linked to the practical reality of measuring time, then thermodynamics could help explain how we perceive time after all, says Adlam. What is needed, she says, is a direct link between thermodynamics and practical timekeeping – something explaining why all clocks run in the same direction as the entropy increase of the universe. Find this link, and we might just answer some of the questions Einstein and Bergson were at odds over. In search of this connection, a handful of researchers are turning to clocks.

A few years ago, Paul Erker at the Institute for Quantum Optics and Quantum Information in Vienna, Austria, teamed up with Marcus Huber at the Vienna University of Technology in an attempt to understand what clocks really are. They started off by modelling quantum clocks, simple systems in which the flow of energy is easy to track. In a 2017 paper, they and their colleagues showed a clock made of just three atoms – one hot, one cold and one “ticking” thanks to the energy flow between the two others – should dissipate more energy the more accurate it is. This was a big step, but still purely theoretical. By 2020, they were ready to test it.

Teaming up with researchers including Natalia Ares at the University of Oxford and Edward Laird at Lancaster University, both in the UK, the researchers built a simple pendulum clock from a suspended membrane of silicon nitride with a thickness of about 50 nanometres. “You could think of it more like a drum than a pendulum,” says Laird. They made their tiny drum vibrate, with each vibration corresponding to one tick of the clock. The strength of the vibrations could be increased by applying an electric field. To determine the clock’s accuracy – how regularly the ticks occurred – they connected it to an electrical circuit including a voltmeter. “It is a beautiful experiment,” says Milburn.

The crux of that experiment was that the clock became more accurate as more energy was supplied to the drum. And the more accurate it was, the more entropy it produced. This was the first result to explain why clocks move forwards in time, because as they measure time, they increase entropy, an irreversible process. “This research gives a very nice explicit link between the thermodynamic arrow of time and perceptual time,” says Adlam.
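The trade-off the experiment revealed – more dissipation, steadier ticks – can be illustrated with a toy model. This is only a sketch, not the team’s three-atom clock or drum: each tick is modelled as a biased random walk reaching a threshold, with a stronger forward bias standing in for more entropy dissipated per step.

```python
import random

def tick_times(p_forward, threshold=50, n_ticks=500, seed=1):
    """Toy clock: each tick is a biased random walk first reaching
    `threshold` net forward steps. p_forward > 0.5 stands in for
    entropy dissipation driving the clock forwards."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_ticks):
        pos = t = 0
        while pos < threshold:
            pos += 1 if rng.random() < p_forward else -1
            t += 1
        times.append(t)
    return times

def accuracy(times):
    """Clock accuracy: (mean tick interval)^2 / variance - roughly the
    number of ticks before the clock drifts by one whole tick."""
    m = sum(times) / len(times)
    var = sum((t - m) ** 2 for t in times) / len(times)
    return m * m / var

# A more strongly driven (more dissipative) clock ticks more regularly:
print(accuracy(tick_times(0.9)) > accuracy(tick_times(0.6)))  # True
```

The point of the sketch is the direction of the relationship, not the numbers: without a bias (no dissipation), the walk is as likely to run backwards as forwards and the “clock” has no arrow at all.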

Carlo Rovelli at Aix-Marseille University in France agrees the work sharpens our understanding of the strict relationship between time and heat. “Simply put, if there is no heat involved, there is no way to distinguish the past from the future,” he says. The research strengthens his thermal time hypothesis, which argues that time emerges from the laws of thermodynamics on the macroscopic scale of humans, regardless of what is going on at a microscopic level.

Crucially, the research also shows that the arrow of time isn’t something only humans can experience. “It doesn’t really matter if it’s a conscious agent who observes the clock or a device, such as a detector,” says Huber. The entropy still increases. “It’s true for anything.” Rather than being a consequence of our consciousness, this suggests the way we perceive time may be physically built into the process of timekeeping. If so, Bergson’s argument falls apart and Einstein looks right to have believed time is a physical entity.

This isn’t the first time a link between energy cost and the accuracy of clocks has been explored. A similar relationship between accuracy and energy cost has been seen in the biochemical clocks inside ocean-dwelling cyanobacteria, which help them generate the chemicals needed for photosynthesis before the sun rises. But these clocks don’t follow exactly the same rules, partly because they belong to living organisms rather than mechanical devices. “Evolution probably places additional constraints on what it means for a clock to be good, beyond the energetic constraints of precision,” says Jordan Horowitz at the University of Michigan.

But not all clocks entirely follow the rules, it would seem. The most accurate atomic clocks appear more efficient than the research predicts. These clocks involve complex circuits, detectors and feedback, making their energy flow difficult to model. Both Erker and Huber are confident they will be shown to obey the same constraint. “I’m not able to prove this statement yet,” says Erker. “But my hunch definitely goes in this direction.”

If he’s right, it would have meaning beyond proving an arrow of time exists outside of our consciousness. The link between clocks and thermodynamics may also reflect time on a smaller scale. If there is a limit on how accurately we can resolve time, could this be a sign that time itself isn’t perfectly smooth, but instead lumpy – packed into tiny units in the same way that light comes in photons?

Answering this could be tricky. To probe space-time at this tiniest of scales, below those we can currently reach with our best particle accelerators, would require vast amounts of energy. At a certain level of energy, you would expect to create a black hole that would swallow the entire experiment, suggesting it is impossible to resolve time perfectly. “You end up with a sort of fundamental limit on the sensitivity to which you can measure a time interval,” says Adlam. This might be related to the limit caused by thermodynamics, she says, but the link isn’t clear yet.
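The scale at which that black-hole argument bites is conventionally identified with the Planck time, √(ħG/c⁵) – an identification the article doesn’t name, so treat this as a standard-physics aside rather than part of the research described:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m/s

# Planck time: the scale where resolving shorter intervals would demand
# so much energy, in so small a region, that a black hole should form
t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ~ {t_planck:.2e} s")  # ~5.4e-44 s
```

For comparison, the best atomic clocks resolve intervals around 10⁻¹⁸ of a second per second of averaging – still dozens of orders of magnitude away from this limit.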

Probing time at minuscule scales is exciting, but what Huber is most thrilled about relates to quantum mechanics and a mystery called the measurement problem. “I have a long-standing obsession with it,” he says.

Unlike relativity, in which time is local and relative, quantum mechanics assumes there is a universal background time. Time in quantum mechanics doesn’t have an arrow: its equations work equally well forwards and backwards in time. But sometimes this reversibility can be broken. When we measure a quantum system, the act of measuring causes it to collapse from a superposition – a mix of different possible states – into a specific outcome. This cannot be reversed, creating an arrow of time. How time manages both to have, and not have, an arrow is just one of quantum mechanics’ many puzzles. But if the thermodynamic arrow can explain our perceptual time arrow, maybe it can explain the quantum one too.
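The entropy-raising character of measurement can be seen in a two-line calculation. A qubit in an equal superposition is a pure state with zero von Neumann entropy; an unread measurement in the 0/1 basis destroys the coherences and leaves a 50/50 mixture carrying one full bit of entropy. A minimal sketch, working directly with the density matrices’ eigenvalues:

```python
import math

def vn_entropy(eigenvalues):
    """von Neumann entropy in bits, from a density matrix's eigenvalues."""
    return sum(-p * math.log2(p) for p in eigenvalues if p > 0)

# Pure superposition (|0> + |1>)/sqrt(2): density matrix has eigenvalues (1, 0)
S_before = vn_entropy([1.0, 0.0])
# After an unread 0/1 measurement: the 50/50 mixture diag(1/2, 1/2)
S_after = vn_entropy([0.5, 0.5])

print(S_before, S_after)  # 0.0 1.0 - the measurement has raised the entropy
```

This toy calculation is in the spirit of Huber’s hunch – measurement as an entropy-producing, and hence time-directed, process – though his proposed link to the second law remains an open research question.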

This is what Huber wants to tackle next. We know that whenever we measure something, we affect it, but the nitty-gritty of this process is often ignored in quantum mechanics. According to Huber, the act of measuring should create a flow of energy that may be best described by the laws of thermodynamics. “I think the measurement postulate is the second law in disguise,” he says. Perhaps quantum measurements, like clocks, create rising entropy and hence an emergent arrow of time.

Erker, on the other hand, points out the research could also help to test ideas that combine the notoriously clashing theories of quantum mechanics and general relativity into a quantum theory of gravity. Such tests are extremely hard. Because gravity is so weak, you either need to put massive objects into a quantum superposition to probe gravitational effects – which is tricky, and has only been done with molecules of up to 2000 atoms – or you need to make incredibly precise measurements. Quantum clocks could help with the latter. “If we could build clocks that are accurate on very short timescales, we could actually build tabletop quantum experiments that test for gravitational effects,” says Erker.

Any theory that explains gravity and quantum mechanics needs to describe how clocks work on the quantum scale. “All this research on understanding what clocks really are and how they kind of interact with quantum mechanics and with relativity is probably an important step to understanding how those theories fit together,” says Adlam.

Bergson and Einstein’s debate cost the physicist the Nobel prize for general relativity. The president of the Nobel committee said that, while it was complex, “it will be no secret that the famous philosopher Bergson in Paris has challenged this theory”. Instead, Einstein won for the less-glamorous photoelectric effect. But a century on, Einstein now seems the real winner of the debate. The next question is whether there will ever be a way to merge his theory of general relativity with quantum mechanics. On that, only time will tell.

Source: Link

2 comments:

  1. Our reality is an engine powered by ENERGY DISTRIBUTION. We...and the cosmos...are drenched in this energy distribution thru its PRIMARY DISTRIBUTOR we call SPACE.

    Energy is motion and every attribute, property, system, state and dimension of reality exists in a non stop turbulence of change and transition between the polarities of stable equilibrium and chaos. Complexification being one such EFFECT.

    Energy distributes across THREE SPEED RANGES: Subluminal/Luminal/ Superluminal.

    Energy distributes across these speed ranges CIRCULATIONALLY. Acceleration and Deceleration are the gas and brake pedals.

    Matter/mass are the left foot on the brake pedal as INTERFERENCE METRICS to energy distribution. Wherever and whenever any volume of energy in the luminal/superluminal speed range encounters matter/mass, it is DECELERATED down into the subluminal speed range.

    Vacuums such as the vacuum of space, being coordinates of the least amount of matter/mass, are the GAS PEDAL of energy distribution. Any volume of energy entering such coordinates immediately begins ACCELERATION back up towards SUPERLUMINALITY.

    Superluminal energy distribution is 100% momentum and 0% energy density.

    Superluminal energy distribution is DARK ENERGY, being faster than light, what else could it be?

    Superluminal energy distribution is ONE FLAT CURVED LINE DIMENSION of IMMEASURABLE ENERGY DISTRIBUTION.

    At such speed superluminal energy is moving too fast to move in any but one direction...all of it moving in the same direction and you have your DRIVER of the UNILATERAL ARROW OF TIME. Past/present/future.

    As superluminal energy decelerates down to luminal velocity its ONE FLAT CURVED LINE ( constant acceleration) DIMENSION transitions out into 360° of 3 dimensional LIBERTY of RADIATION and its DARK property transitions into TRANSPARENCY.

    Superluminal energy distribution is the most stable equilibrius speed of distribution and is therefore the BENCHMARK DRIVER of ENTROPY. The luminal/subluminal speed ranges of energy distribution are unstable disequilibrius speed ranges of chaos, turbulence and complexification.

    So energy is always DRIVEN to RE-ACCELERATE back up to its most stable speed of motion.

    Luminal and subluminal speed ranges are thermokinetically turbulent as an EFFECT of energy being DRIVEN by its BENCHMARK SPEED to restore equilibrium.

    ENTROPY, contrary to current interpretation, is not the causative agency of dis-organization but rather the accelerational mechanism that DRIVES energy to ACCELERATE back to a HIGHER ORDER of STABILITY and EQUILIBRIUM.

    Superluminal energy distribution is moving too fast to engage in the quantum scaffolding of matter formation. Thus superluminal speed range is the domain of the missing ANTI-MATTER physics has errantly claimed as competing with normal matter for dominion over reality. Normal matter derives from anti-matter in speed differentials.

  2. Continued from above:

    To grasp the relationship between matter and anti-matter consider SUPERLUMINAL ENERGY DISTRIBUTION...all moving in the same direction...as a wheel. Then consider the subluminal speed range of energy distribution where E= mc2 applies, as another wheel.

    Then consider these two wheels touching at LUMINAL VELOCITY where the speed differentials begin the photonic radiational fireworks.

    Now let's say superluminal energy is moving COUNTER CLOCKWISE. Then naturally, as the BENCHMARK DRIVER, it will DRIVE energy distribution in the subluminal speed range to SPIN CLOCKWISE.

    Only the luminal and subluminal speed ranges of reality had a genesis derived from that superluminal speed range of energy distribution which requires no genesis.

    When superluminal energy decelerates down to subluminal speeds it begins swapping out momentum for energy density and the INTER-MEDIATE effect is the QUANTUM SCAFFOLDING towards the formation of matter.

    Energy distribution is the professional stadium where the big boys play. Matter and mass are just SECOND STRING/ORDER EFFECTS of ENERGY DISTRIBUTION.

    Matter/mass does not move in any direction at any speed OF ITS OWN VOLITION. Motion necessitates FORCE and FORCE is just FOCUSED ENERGY DISTRIBUTION. With FOCUS on DISTRIBUTION.
