
Monday, 9 December 2019

The era of printed electronics is beginning

Large scale integrated circuit (LSI) prototypes straight out of the printer. [Image: Thor Balkhed]

Printed electronics

Swedish researchers say they have taken the missing step to bring electronic circuit printing from the laboratory to the factories, making it possible to apply organic electronics on a large scale.


The decisive step was integrating the new field of printed electronics with conventional silicon-based electronics manufactured by traditional mask and lithography techniques.

"This is a decisive step for a technology that was born at Linkoping University just over 17 years ago," said Professor Magnus Berggren.



"The advantage we have here is that we don't have to mix different manufacturing methods: Everything is done by screen printing and in relatively few processing steps. The key is to make sure the different layers finish in exactly the right place," added his colleague Peter Ersman.

Printing electronic circuits

Printing fully functional electronic circuits - they can be printed on flexible, transparent plastics or virtually any other material - has required a number of innovations over the past 17 years.

A first step was the creation of screen-printing screens that let you print extremely thin lines so that semiconductor inks can form components with precision and high density per area.

At least three additional challenges have since been faced: reducing circuit size, increasing quality so that the probability of all transistors in a circuit working is as close as possible to 100%, and - not least - integrating with the silicon-based circuits needed to process signals and communicate with the environment.

"One of the major advances is that we have been able to use printed circuits to interface with traditional silicon-based electronics. We have developed various types of printed circuits based on organic electrochemical transistors. One of them is the shift register, which can interface and handle contact between the silicon-based circuit and other electronic components such as sensors and displays. This means that we can now use a silicon chip with fewer contacts, which requires a smaller area and thus is much cheaper. , "said Berggren.

The internet of things is expected to be the first major beneficiary of printed electronics.

IoT and screens

The development of semiconductor inks was another decisive element for the miniaturization process and also for higher quality. "We can now place more than 1,000 organic electrochemical transistors on an A4 size plastic substrate and connect them in different ways to create different types of printed integrated circuits," said team member Professor Simone Fabiano.

These large-scale integrated circuits, or LSIs, can be used, for example, to drive electrochromic displays that are themselves manufactured as printed electronics.

The big expectation, however, is that printed electronics will give the final push to make the low cost, low power circuits required by the internet of things.




Bibliography:

Article: All-Printed Large-Scale Integrated Circuits Based on Organic Electrochemical Transistors
Authors: Peter Andersson Ersman, Roman Lassnig, Jan Strandberg, Deyu You, Vahid Keshmiri, Robert Forchheimer, Simone Fabiano, Goran Gustafsson, Magnus Berggren
Journal: Nature Communications
Vol. 10, Article number: 5053
DOI: 10.1038/s41467-019-13079-4

Saturday, 7 December 2019

Quantum light processors are demonstrated in practice

Entangled 3D light beams allow quantum operations at room temperature and at macroscopic scale

Optical quantum processor


Two international teams, working separately, built prototypes of quantum processors made of light.

Qubits formed by entangling laser beams are expected to make quantum computers less error-prone and to allow scalability, that is, scaling processors up to a large number of qubits.


"While today's quantum processors are impressive, it's unclear whether today's designs can scale to extremely large sizes. Our approach starts with extreme scalability - built in from the start - because the processor, called a cluster state, is made of light. , "said Professor Nicolas Menicucci of RMIT University in Australia and leader of one of the teams.

A cluster state is a large collection of entangled quantum components that performs quantum calculations when measured in a specific way - all operating at macroscopic scale using ordinary photonic components.



Both teams met the two fundamental requirements for cluster state operation: a minimum number of qubits and quantum entanglement in the proper structure for use in computational calculations.

To this end, specially designed crystals convert ordinary laser light into a type of quantum light called squeezed light, which is woven into a cluster state by a network of mirrors, beam splitters, and optical fibers.
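A brief aside, not taken from the papers: "squeezing" means reducing the quantum noise of one quadrature of the light field below the vacuum level, at the cost of amplifying it in the other. In the standard convention, with vacuum variance 1/2 in each quadrature, a squeezed state with squeezing parameter r obeys:

```latex
% Squeezed vacuum with squeezing parameter r (hbar = 1 convention):
\Delta x^{2} = \tfrac{1}{2} e^{-2r}, \qquad
\Delta p^{2} = \tfrac{1}{2} e^{+2r}, \qquad
\Delta x \, \Delta p \;\ge\; \tfrac{1}{2}
% Squeezing is usually quoted in decibels below the vacuum noise:
% 10 \log_{10}(e^{2r}) \approx 8.7\, r \ \mathrm{dB}
```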

While the squeezing levels achieved so far - a measure of photonic processor quality - are too low to solve practical problems, the design is compatible with approaches for reaching next-generation squeezing levels.

"Our experiment demonstrates that this design is workable - and scalable," said Professor Hidehiro Yonezawa of the University of New South Wales.

Animation showing the temporal evolution of the cluster state generation scheme

Quantum processor at room temperature


Mikkel Larsen and his colleagues at the Technical University of Denmark prefer to call their optical quantum processor prototype a "light carpet."

This is because, instead of the threads of an ordinary carpet, the processor is in fact a carefully crafted web of thousands of intertwined pulses of light.

"Unlike traditional cluster states, we use the temporal degree of freedom to achieve a two-dimensional interlaced network of 30,000 light pulses. The experimental setup is really surprisingly simple. Most of the effort has gone into developing the idea of ​​state generation. cluster, "said Larsen.

The Danish team has also been able to make its light carpet handle quantum entanglement at room temperature, noting that, in addition to error correction and simplification of technology, quantum optical processors can be cheaper and more powerful as they will allow the rapid increase in the number of qubits.

An optical quantum computer, therefore, does not require the expensive and complicated cooling technology used by superconducting qubits. At the same time, light-based qubits, which carry information in laser light, hold the information longer and can transmit it over long distances.



"By distributing the state of the cluster generated in space and time, an optical quantum computer can also scale more easily to contain hundreds of qubits. This makes it a potential candidate for the next generation of larger and more powerful quantum computers," reinforced Professor Ulrik Andersen.


Bibliography:

Article: Generation of time-domain-multiplexed two-dimensional cluster state
Authors: Warit Asavanant, Yu Shiozawa, Shota Yokoyama, Baramee Charoensombutamon, Hiroki Emura, Rafael N. Alexander, Shuntaro Takeda, Jun-Ichi Yoshikawa, Nicolas C. Menicucci, Hidehiro Yonezawa, Akira Furusawa
Journal: Science
Vol. 366, Issue 6463, pp. 373-376
DOI: 10.1126/science.aay2645

Article: Deterministic generation of a two-dimensional cluster state
Authors: Mikkel V. Larsen, Xueshi Guo, Casper R. Breum, Jonas S. Neergaard-Nielsen, Ulrik L. Andersen
Journal: Science
Vol. 366, Issue 6463, pp. 369-372
DOI: 10.1126/science.aay4354

Saturday, 23 November 2019

What can make artificial intelligence really intelligent?


Automated stupidity

Despite the many concerns about artificial intelligence and its growing role in society, the fact is that today's generation of artificial intelligence programs is not at all intelligent.

There are basically two types of machine learning: deep neural networks, those responsible for the famous "deep learning", and reinforcement learning networks. Both are based on system training, using huge amounts of data, to perform a specific task, for example making a decision.


During training, the desired result is provided along with the task. Over time, the program learns to solve the task with increasing accuracy, although no one understands exactly how the program works - this is the so-called "black box" of artificial intelligence.
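To make the idea concrete, here is a minimal sketch of supervised training in which the "desired result" y is supplied along with every input and the model is nudged toward it; the toy data, linear model, and learning rate are hypothetical and are not taken from the article.

```python
import numpy as np

# Hypothetical toy data: inputs x and the "desired result" y handed to the learner.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))              # 100 examples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = x @ true_w + 0.1 * rng.normal(size=100)

# Minimal supervised training loop: nudge the weights to shrink the gap
# between the model's prediction and the provided answer.
w = np.zeros(3)
learning_rate = 0.1
for step in range(200):
    pred = x @ w
    grad = x.T @ (pred - y) / len(y)       # gradient of the mean squared error
    w -= learning_rate * grad              # learn from the desired results

print("learned weights:", w)               # should end up close to true_w
```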

"The problem with these machine learning processes is that they are basically completely dumb," says Professor Laurenz Wiskott of Ruhr University in Germany. "The underlying techniques date back to the 1980s. The only reason for today's success is that we have more computing power and more data at our disposal today."


But Professor Wiskott's team is trying to eliminate the stupidity of artificial intelligence and make it really smart.

Unsupervised artificial intelligence

Today's artificial intelligence can outperform humans only in the specific task for which each program has been trained - it cannot generalize or transfer its knowledge even to similar tasks.

"What we want to know is, how can we avoid all this absurd and long training? And most of all: how can we make machine learning more flexible?" said Wiskott.

The strategy is to help machines autonomously discover structures in data. Tasks can include, for example, category formation or detection of gradual changes in videos. The idea is that this unsupervised learning allows computers to autonomously explore the world and perform tasks for which they have not been trained in detail.

"A task could be, for example, forming clusters," explains Wiskott. To do this, the computer is instructed to group similar data in search, for example, of a face in a photo. Turning the pixels into points in a three-dimensional space means grouping points whose coordinates are close to each other. If the distance between coordinates is greater, they will be allocated to different groups. This dispenses with the enormity of photos and their descriptions as used today.

This method offers more flexibility because this cluster formation is applicable not only to pictures of people, but also to cars, plants, houses or other objects.
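A minimal sketch of this kind of grouping - a generic k-means-style clustering on hypothetical points, not the team's actual code:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Group points so that coordinates lying close together share a cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (plain Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Hypothetical data: two clouds of points in a three-dimensional space.
rng = np.random.default_rng(1)
cloud_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.3, size=(50, 3))
cloud_b = rng.normal(loc=[3.0, 3.0, 3.0], scale=0.3, size=(50, 3))
labels, centers = kmeans(np.vstack([cloud_a, cloud_b]), k=2)
print(labels[:5], labels[-5:])   # the two clouds end up with different labels
```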

The slowness principle

Another approach taken by the team is the slowness principle. In this case, it is not the photos that constitute the input signal, but moving images: If all the very slowly changing features are extracted from a video, structures appear that help construct an abstract representation of the environment. "Here, too, the goal is to pre-structure the input data," says Wiskott.
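As an illustration of the slowness principle, the sketch below extracts the most slowly varying linear feature from a multichannel signal. It follows the general idea of slow feature analysis, but the data and the details are hypothetical, not the team's implementation.

```python
import numpy as np

def slowest_feature(signals):
    """Return the unit-variance linear feature of `signals` (time x channels)
    whose value changes most slowly from one time step to the next."""
    x = signals - signals.mean(axis=0)
    # Whiten the input so every direction starts with unit variance.
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    white = (x @ evecs) / np.sqrt(evals)
    # Among whitened directions, pick the one with the smallest temporal change.
    diffs = np.diff(white, axis=0)
    dvals, dvecs = np.linalg.eigh(np.cov(diffs, rowvar=False))
    return white @ dvecs[:, 0]             # slowest-varying feature over time

# Hypothetical input: a slow sine mixed into three noisy "video channels".
t = np.linspace(0, 10, 2000)
slow = np.sin(2 * np.pi * 0.2 * t)
noise = np.random.default_rng(0).normal(size=(2000, 3))
signals = np.column_stack([slow + 0.1 * noise[:, 0],
                           0.5 * slow + noise[:, 1],
                           noise[:, 2]])
feature = slowest_feature(signals)         # roughly recovers the slow sine
```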

Finally, the researchers combine the two approaches, in a modular way, with supervised learning methods to create applications that are more flexible while remaining accurate.

"Greater flexibility naturally results in loss of performance," admits the researcher. "But in the long run, flexibility is indispensable if we want to develop robots that can handle new situations."


Tuesday, 25 June 2019

New form of computing with light does not waste energy

The material inside the cube forms complex patterns that give the answer to the calculation directly. [Image: Hudson et al. - 10.1038/s41467-019-10166-4]

Computation with light

Canadian researchers have developed an unprecedented and incredibly simple form of computing.

The inputs are provided as patterned beams of light and shadow, known as bands or fringes, which are shone onto different faces of a cube containing a plastic material.

To read the result of the calculation, one simply looks at the combined light fringes that emerge on the other side of the cube.

So far, the team has been able to use its new optical computing process to perform simple addition and subtraction operations.

The computing is highly localized, needs no power source, and operates entirely within the visible light spectrum.


The material in the cube reads and reacts "intuitively" to light, much like a plant does when it turns to the sun or as an octopus changes the color of its skin to adapt to the environment.

"We are very excited to be able to do addition and subtraction in this way, and we are thinking of ways to do other computational functions," said Professor Kalaichelvi Saravanamuttu of McMaster University. "These are autonomous materials that respond to stimuli and perform intelligent operations.

Visualization of the fringes of light emerging from the various faces of the cube. [Image: Hudson et al. - 10.1038/s41467-019-10166-4]

Smart Objects

The technique, inspired by natural biological systems, represents a completely new form of computation which, according to the team, has the potential to perform complex and useful functions - including others yet to be imagined - possibly organized along neural-network-like structures.

The technology is based on a branch of chemistry called nonlinear dynamics, and uses materials designed and manufactured to produce specific reactions to light - a class of artificial materials known as metamaterials.

The amber polymeric artificial material is encapsulated within a glass cube about the size of a die used in a board game. The polymer begins as a liquid and turns into a gel in reaction to light.

The beam of light passes through the cube, exiting the opposite face toward a camera, which reads the results. The results are produced as the light is refracted by the material inside the cube, which spontaneously forms thousands of filaments that react to the light patterns and produce a new pattern expressing the result.

As computation is embedded in the material, this optical computing technique will not replace current computers, but it can yield intelligent objects that give instant solutions to specific problems.

"We do not want to compete with existing computing technologies. We are trying to build materials with smarter and more sophisticated responses," said Fariha Mahmood, a co-author of the paper.



Bibliography:

Article: A soft photopolymer cuboid that computes with binary strings of white light
Authors: Alexander D. Hudson, Matthew R. Ponte, Fariha Mahmood, Thomas Pena Ventura, Kalaichelvi Saravanamuttu
Journal: Nature Communications, Vol. 10, Article number: 2310
DOI: 10.1038/s41467-019-10166-4

Monday, 24 June 2019

A first step towards affordable consumer quantum computers



Researchers at the University of Tsukuba have investigated a new method for generating coherent signals in silicon chips using laser-induced vibrations, which may greatly accelerate the development of new quantum computers with superior performance.

A team at the University of Tsukuba studied a novel process for creating coherent lattice waves inside silicon crystals using ultrashort laser pulses. Using theoretical calculations combined with experimental results that were obtained at the University of Pittsburgh, they were able to show that coherent vibrational signals could be maintained inside the samples. This research may lead to quantum computers based on existing silicon devices that can rapidly perform tasks out of the reach of even the fastest supercomputers now available.


From home PCs to business servers, computers are a central part of our everyday life, and their power continues to grow at an astounding rate. However, there are two big problems looming on the horizon for classical computers. The first is a fundamental limit on how many transistors we can pack into a single processor. Eventually, a totally new approach will be needed if we are to continue to increase their processing capacity. The second is that even the most powerful computers struggle with certain important problems, such as the cryptographic algorithms that keep your credit card number safe on the internet, or the optimization of routes for delivering packages.

The solution to both problems may be quantum computers, which take advantage of the rules of physics that govern very small length scales, as with atoms and electrons. In the quantum regime, electrons act more like waves than billiard balls, with positions that are “smeared-out” rather than definite. In addition, various components can become entangled, such that the properties of each one cannot be completely described without reference to the other. An effective quantum computer must maintain the coherence of these entangled states long enough to perform calculations.
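A textbook illustration of what entanglement means here (not specific to the Tsukuba experiment): two qubits can share a state in which neither has a definite value on its own, yet their measurement outcomes are perfectly correlated.

```latex
% A Bell state of qubits A and B: each qubit alone is completely undetermined,
% but measuring A as 0 (or 1) guarantees the same outcome for B.
|\Phi^{+}\rangle \;=\; \frac{1}{\sqrt{2}}\left( |0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B} \right)
```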

In the current research, a team at the University of Tsukuba and Hrvoje Petek, RK Mellon Chair of Physics and Astronomy at the University of Pittsburgh, used very short laser pulses to excite electrons inside a silicon crystal. "The use of existing silicon for quantum computing will make the transition to quantum computers much easier," first author Dr. Yohei Watanabe explains. The energetic electrons created coherent vibrations of the silicon structure, such that the motions of the electrons and the silicon atoms became entangled. The state of the system was then probed after a variable delay time with a second laser pulse.

Based on their theoretical model, the scientists were able to explain oscillations observed in the charge generated as a function of delay time. “This experiment reveals the underlying quantum mechanical effects governing the coherent vibrations,” says senior author Prof. Muneaki Hase, who performed the experiments. “In this way, the project represents a first step towards affordable consumer quantum computers.”




Bibliography:

Article: Ultrafast asymmetric Rosen-Zener-like coherent phonon responses observed in silicon
Authors: Yohei Watanabe, Ken-ichi Hino, Nobuya Maeshima, Hrvoje Petek, Muneaki Hase
Journal: Physical Review B, 2019; 99 (17)
DOI: 10.1103/PhysRevB.99.174304

Friday, 21 June 2019

Ultra-secure virtual money guarantees transactions across the galaxy

Quantum teleportation already works experimentally with 100% reliability. [Image: Hanson Lab / TUDelft]

Universal money

A new kind of money, more versatile and safer than current cryptocurrencies, promises to give users the ability to make informed decisions in response to events occurring at different times and places around the Universe, while keeping their savings secure even against attacks from future quantum computers - ours or those of ETs.

The theoretical framework, dubbed "S-money," ensures completely unforgeable and secure authentication and allows quicker, more flexible responses than any existing financial technology, thanks to combining the power of quantum theory with relativity - where the finite speed of communication can create what Kent calls "relativistic economics."

In fact, the theory allows for commercial and financial transactions through the Solar System and beyond, without any time delays.


"It's a slightly different way of thinking about money: instead of something we have in our hands or in our bank accounts, money can be considered something that you need to dispose of at a certain point in space and time, in response to data which are coming from many other points in space and time, "said Professor Adrian Kent of the University of Cambridge, UK, who proposed the idea.

But trading on a galactic scale will have to wait for demand, with the first Earth-scale tests expected to begin later this year. Although "S-money" requires very fast calculations, Kent believes it may be feasible with today's computing technology.



Competition from quantum money could intensify as quantum hard drives, which already hold data for several hours, improve. [Image: Solid State Spectroscopy Group / ANU]

Quantum money

The framework developed by Professor Kent can be thought of as secure tokens generated by communications between various points in a financial network, which respond flexibly to real-time data around the world and "materialize" so that they can be used at the place and time the owner of the money requires. This allows users to respond to events faster than with familiar types of money, both physical and digital, which follow definite paths through space and therefore require travel time.

Virtual tokens can be traded securely, without waiting for cross-checking or verification across the network, eliminating any risk of double spending. This is done by taking advantage of quantum theory, more specifically quantum entanglement, the strange phenomenon Einstein called "spooky action at a distance," which allows two entangled particles to influence each other instantaneously, whatever the distance that separates them.



User privacy is maintained by protocols such as bit commitment, which is a mathematical version of a securely sealed envelope. Data is delivered from party A to party B in a locked state that cannot be changed after it has been sent and can only be revealed when party A provides the key - with security guaranteed even if either party tries to cheat.
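For intuition only, here is a classical, hash-based sketch of the "sealed envelope" idea. Kent's scheme relies on relativistic signaling constraints rather than hash functions, so this is just an analogy, and the committed message is a hypothetical example.

```python
import hashlib
import secrets

def commit(value: bytes):
    """Party A seals `value` in an 'envelope': publish the hash, keep the key."""
    key = secrets.token_bytes(32)                        # random opening key
    envelope = hashlib.sha256(key + value).hexdigest()   # hides and binds the value
    return envelope, key

def reveal(envelope: str, key: bytes, value: bytes) -> bool:
    """Party B checks that the opened value matches the sealed envelope."""
    return hashlib.sha256(key + value).hexdigest() == envelope

# Hypothetical message contents, purely for illustration.
envelope, key = commit(b"transfer 100 tokens at point P")
# ...time passes; A can no longer change the committed value without detection...
print(reveal(envelope, key, b"transfer 100 tokens at point P"))   # True
print(reveal(envelope, key, b"transfer 999 tokens at point P"))   # False
```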

Other researchers have already developed theoretical frameworks known as "quantum money," which are also based on the strange behavior of particles at the subatomic scale. While quantum money for real-world transactions may someday be possible, according to Kent, at the moment it is technologically impossible to keep quantum money secure for any appreciable length of time - something that is not a problem for his "S-money."

Now it is a matter of waiting for the results of the first tests. "We are trying to understand the practicalities and understand the advantages and disadvantages [of S-money]," said Kent, who patented the idea and intends to become the first pan-galactic banker.




Bibliography:

Article: S-money: virtual tokens for a relativistic economy
Authors: Adrian Kent
Journal: Proceedings of the Royal Society A
DOI: 10.1098/rspa.2019.0170

Tuesday, 18 June 2019

Artificial Intelligence defeats humans in first person games

Pairs of humans captured on average 16 fewer flags than pairs of artificial intelligence bots. [Image: DeepMind]

Researchers at Google's DeepMind have created virtual players, or bots, that learned on their own - without any prior instruction - to play two multiplayer first-person 3D games.

These software agents have achieved human-like abilities, not only by playing alone, but also by cooperating to achieve a common goal.

This is a significant improvement over the company's previous achievements, which involved defeating human players at chess and Go.

Artificial intelligence agents, which learned via a machine-learning technique known as "reinforcement learning," demonstrate an uncommon ability to develop and use high-level strategies, learned entirely by themselves, to compete and cooperate in the game environment.

Reinforcement learning

Reinforcement learning, a method used to train artificially intelligent agents, had already shown its potential by generating intelligent virtual players capable of navigating increasingly complex games such as chess and Go.

However, the ability to compete with multiple players simultaneously, particularly games that involve teamwork and interaction among several independent players, has never been demonstrated precisely because it is something of a much higher level of complexity.

Max Jaderberg and his colleagues demonstrated the potential of their artificial intelligence bots in first-person Capture the Flag matches in Quake III Arena.

In contrast to previous demonstrations, in which artificial intelligence agents received prior "knowledge" about the game environment or the status of other players, this new approach ensured that each software agent learned independently from its own experience, using only what the program itself could "see" - the pixels on the screen and the game score.

A similar software system embedded in a robot would be fed information in the same way, since cameras provide nothing but pixels.
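As a rough illustration of that training signal, the toy reinforcement-learning loop below gives the agent nothing but an observation and a scalar score. The environment, policy, and update rule are simplified placeholders and are not DeepMind's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def environment_step(observation, action):
    """Toy stand-in for a game engine: returns only a score change (reward)."""
    return 1.0 if action == int(observation.sum() > 0) else 0.0

# The agent sees nothing but the observation ("pixels") and the score (reward).
weights = np.zeros(16)
learning_rate = 0.05
for episode in range(500):
    observation = rng.normal(size=16)                    # fake pixel observation
    p = 1.0 / (1.0 + np.exp(-(observation @ weights)))   # probability of action 1
    action = int(rng.random() < p)
    reward = environment_step(observation, action)
    # REINFORCE-style update: rewarded actions become more likely next time.
    weights += learning_rate * reward * (action - p) * observation
```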

The bots only learned by analyzing the pixels of the screens and the score of the game. [Image: Jaderberg et al. - 10.1126/science.aau6249]

Defeating or cooperating with humans

Pitted against each other, a population of artificial intelligence agents learned to play by engaging in thousands of matches in randomly generated environments. According to the researchers, over time the agents independently developed surprisingly high-level strategies, similar to those used by skilled human players in both games.

Moreover, in matches against human players, the agents outperformed human adversaries, even when the reaction times of the agents were reduced to human levels.

More than that, the agents formed teams with both other agents and other human players, and cooperated with both towards the goal of winning the match.

Programs capable of learning alone raise a host of concerns. [Image: Jaderberg et al. - 10.1126/science.aau6249]

Technology for good and evil

The feat is remarkable and should be celebrated because one can think of innumerable possibilities of using this technology for the good of mankind.

However, there have also been warnings about the need to set boundaries for artificial intelligence, to at least try to steer the technology toward an AI for good.

As bots begin to prove themselves in competitive and, potentially, violent in-game situations, it may be time to start considering technological advances in the area from a broader perspective.

Among the fears are the creation of killer robots and the loss of human control over artificial intelligence, since computer programs will soon be programming themselves.



Bibliography:

Article: Human-level performance in 3D multiplayer games with population-based reinforcement learning
Authors: Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, Thore Graepel
Journal: Science
Vol. 364, Issue 6443, p. 859
DOI: 10.1126/science.aau6249

Create a local network without even needing a power grid or Internet

The small device creates a totally wireless network - with or without internet. [Image: UNIGE]

On-demand network

Whether it is to respond to environmental disasters or epidemics, disseminate technologies, make educational projects feasible in remote areas, or carry out field research - there are numerous situations in which it becomes necessary to set up a communications network without any infrastructure.

It was with such cases in mind that a team from the University of Geneva, Switzerland, created the Beekee Box.

The device is able to establish a wireless local access network without even relying on an electric power grid - everything runs on batteries.

First-aid groups, teachers, doctors, technicians, and of course the general population, can connect to the wireless network using their cell phones or computers.

"It's as if users are browsing an educational platform on the web, but without the need for the internet or the power grid. Anyone who is connected can follow complete training programs, conduct assessments, access documents, or interact with their peers in time real, "said Professor Vincent Widmer, who is responsible for the development of the Beekee Box .

Network for training

The Beekee Box is made of plastic, measuring 10 cm in height and 6.5 cm in width. Inside there is a complete microcomputer, plus the battery and an SSD disk with the capacity to store up to 256 GB of data.

The network's autonomy is about 3 hours, but an external battery module, which is recharged using solar energy, allows operations for up to 10 continuous hours.

The box itself is also simple to use: teachers export the educational material they need directly from their computers to the device, and then take the box with them anywhere in the world. Users only need to connect their mobile devices to the local network generated by the Beekee Box to access and transmit content.

"Teachers can restrict their interactions with students only to the network of the box, without the data being shared over the internet. Everything remains confidential and compartmentalized at the Beekee Box , which is a great asset in terms of protecting personal data," said researcher Stephane Morand.


The team is now fine-tuning the software, which will be released as open source. And field tests have already begun.

"We are currently working with Doctors Without Borders to help physicians deliver crisis management training and send recent medical guides. Our goal is to make educational, technological, and research resources available to analyze context and provide tailored solutions, Widmer said.


Saturday, 15 June 2019

Quantum Teleportation Moves Entire Logic Operation


Teleportation

Quantum teleportation transfers the data of one quantum system (such as an ion) to another (a second ion), even though the two are completely isolated from each other.

In this form of real-life teleportation, only quantum information is carried, not matter - unlike the Star Trek version of "radiating" entire human beings from a spacecraft to a planet.

Quantum data teleportation has previously been demonstrated with ions and a variety of other systems, including teleportation over 6 km of optical fiber.

Now physicists have been able to teleport not just data, but a complete logical operation between two separate ions (electrically charged atoms), showing how future quantum computers could perform tasks across large-scale networks.

"We found that our logic operation works in all quantum-bit input states with 85-87% probability - far from perfect, but it's a start," said Professor Dietrich Leibfried of the National Institute of Standards and Technology NIST).

The work also included Professor Hilma Vasconcelos of the Federal University of Ceará.

Teleportation of logical operations 

For quantum computers to perform as expected, they are likely to need millions of quantum bits, or qubits, and ways of conducting operations between qubits distributed across machines and large-scale networks.

Teleporting logical operations is one way of doing this without direct quantum connections - physical connections for exchanging classical information will probably still be necessary.

The team teleported a controlled-NOT (CNOT) logic operation between two qubits of beryllium ions located more than 340 micrometers apart, a distance that rules out any substantial direct interaction.

A CNOT logic operation flips the second qubit from 0 to 1, or vice versa, only if the first qubit is 1; nothing happens if the first qubit is 0. As is typical in quantum systems, the two qubits may also be in "superposition," in which they have values of 1 and 0 at the same time.
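For reference - standard textbook material, not the NIST team's code - the CNOT gate can be written as a matrix over the four two-qubit basis states; the short script below checks the truth table and the entangling behavior described here.

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit = control, second = target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],    # |10> -> |11>
                 [0, 0, 1, 0]])   # |11> -> |10>

labels = ["00", "01", "10", "11"]
for idx, label in enumerate(labels):
    state = np.zeros(4)
    state[idx] = 1.0
    out = CNOT @ state
    print(label, "->", labels[int(out.argmax())])   # flips target only if control is 1

# On a superposed control qubit, CNOT produces an entangled (Bell) state:
plus_zero = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)   # (|00> + |10>)/sqrt(2)
print(CNOT @ plus_zero)                                    # (|00> + |11>)/sqrt(2)
```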

But the teleportation process depends on another quantum phenomenon, entanglement, which "connects" the properties of particles even when they are separated. A messenger pair of entangled magnesium ions is used to transfer the information between the beryllium ions.

Quantum truth table 

The teleported CNOT process entangled the two magnesium ions - an essential prior step - with a 95% success rate, while teleportation of the full logical operation succeeded in 85% to 87% of attempts.

To verify that the CNOT gate was still functioning after being teleported, the researchers prepared the first qubit in 16 different combinations of input states and measured the outputs on the second qubit. This produced a generalized quantum truth table, showing that the process works.

This technique should become an important tool in the characterization of quantum information processes in future experiments. 


Bibliography:

Article: Quantum gate teleportation between separated qubits in a trapped-ion processor
Authors: Yong Wan, Daniel Kienzler, Stephen D. Erickson, Karl H. Mayer, Ting Rei Tan, Jenny J. Wu, Hilma M. Vasconcelos, Scott Glancy, Emanuel Knill, David J. Wineland, Andrew C. Wilson, Dietrich Leibfried
Journal: Science
Vol. 364, Issue 6443, pp. 875-878
DOI: 10.1126/science.aaw9415