Machine Learning Blazes Path To Reliable Near-term Quantum Computers

  • February 22, 2021
  • relay
Noise-aware circuit learning uses machine learning to formulate a circuit, or algorithm, with the best strategy to run a specific task in the most reliable way on a given quantum computer.

Using machine learning to develop algorithms that compensate for the crippling noise endemic to today’s quantum computers offers a way to maximize their power for reliably performing actual tasks, according to a new paper.

“The method, called noise-aware circuit learning, or NACL, will play an important role in the quest for quantum advantage, when a quantum computer solves a problem that’s impossible on a classical computer,” said Patrick Coles, a quantum physicist at Los Alamos National Laboratory and lead author on the paper, “Machine learning of noise-resilient quantum circuits,” published today in PRX Quantum.

“Our work automates designing quantum computing algorithms and comes up with the fastest algorithm tailored to the imperfections of a specific hardware platform and a specific task,” said Lukasz Cincio, a quantum physicist at Los Alamos. “This will be a crucial tool for using real quantum computers in the near term for work such as simulating a biological molecule or physics simulations relevant to the national security mission at Los Alamos.”

Coles likened the machine-learning approach to a vaccine that strengthens a person’s resistance to a virus by training their immune system in the presence of a piece of that pathogen. Similarly, the machine learning trains quantum circuits in the presence of a specific quantum computer’s noise processes. The resulting circuit, or algorithm, is resistant to that noise, which is the biggest problem facing today’s noisy intermediate-scale quantum computers.

NACL starts with two things: a description of a computational task and a model of the noise on the quantum computer that will perform the task. Then the machine learning program formulates a circuit with the best strategy to run the task in the most reliable way on that particular computer, based on its unique noise profile.
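
To make that loop concrete, here is a minimal sketch of how such an optimization could be set up, assuming a toy task (preparing the state |1> on one qubit), a simple depolarizing-noise model, and a crude random search; the gate set, noise model, and cost function are all illustrative stand-ins, not the authors’ actual code.

```python
# A minimal, illustrative sketch of noise-aware circuit learning (NACL).
# Toy assumptions: the task is to prepare |1> on a single qubit, every
# gate is an Rx rotation followed by depolarizing noise, and "learning"
# is a crude random search over circuit depth and gate angles.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(theta):
    """Single-qubit rotation about the X axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def apply_noisy_gate(rho, gate, p=0.02):
    """Apply a gate, then depolarizing noise of strength p (the noise model)."""
    rho = gate @ rho @ gate.conj().T
    return (1 - p) * rho + p * np.eye(2) / 2

def cost(angles, target):
    """Infidelity of the noisy circuit's output with the target state."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
    for theta in angles:
        rho = apply_noisy_gate(rho, rx(theta))
    return 1 - np.real(target.conj() @ rho @ target)

target = np.array([0, 1], dtype=complex)  # the task: prepare |1>

# Search over circuits of depth 1 to 3, picking both the structure
# (here just the depth) and the parameters that minimize the noisy cost.
best = None
rng = np.random.default_rng(0)
for depth in range(1, 4):
    for _ in range(2000):
        angles = rng.uniform(0, 2 * np.pi, depth)
        c = cost(angles, target)
        if best is None or c < best[0]:
            best = (c, depth, angles)

print(f"best infidelity {best[0]:.4f} at depth {best[1]}")
```

With uniform per-gate noise this toy search favors the shortest circuit, but under more structured noise models the same loop can prefer longer, self-correcting gate sequences, which is exactly the behavior described below.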

The framework is practical, too. It works for all of the common tasks in quantum computing — extracting observables, preparing quantum states, and compiling circuits. The Los Alamos–led team tested sample problems in each of these areas and demonstrated that NACL reduces error rates in algorithms run on quantum computers by factors of 2 to 3 compared to textbook circuits for the same tasks.

Noise leads to errors

Errors are caused by disruptive noise in the form of various kinds of interactions between the quantum bits, or qubits, and the surrounding environment. Those interactions cause the qubits to lose their “quantumness” in a process called decoherence, which occurs within a millionth of a second.

Quantum bits are the fundamental processing unit of a quantum computer. Bits on a classical computer can only have a value of 0 or 1—that’s the basis of all computing on your phone or laptop. Qubits, on the other hand, can have a value of 0, 1, or various “superpositions” that result in probabilities between 0 and 1. That quality gives quantum computers their potential for supreme processing power.
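
In standard notation, that superposition is written as follows, with the squared amplitudes giving the probabilities of measuring 0 or 1:

```latex
% A general single-qubit state as a superposition of the basis states.
% Measurement returns 0 with probability |alpha|^2 and 1 with |beta|^2.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```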

Previous machine-learning attempts sought to reduce errors by shortening circuits and reducing the number of logic gates, the parts of a circuit that act on the qubits to carry out an algorithm. But those codes did not profile the errors of particular hardware platforms, and so never trained to recognize and compensate for noise.

Letting the computer do the work

“In this new research, we let the computer discover what’s best,” Coles explained. “In essence, we say, ‘Computer, please find the best strategy for making a resilient circuit.’ We found the computer discovers strategies that make sense to us.”

It turns out the shortest circuit isn’t always the best. Every gate is imperfect, so sometimes it’s better to add gates that correct errors on the fly.

For instance, if a particular computer erroneously over-rotates one individual qubit, the machine learning might surround it with other gates to correct errors from the original gate. That’s a well-known strategy called dynamically corrected gates, but here it emerges spontaneously out of the NACL optimization procedure.
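
As a toy version of that compensation idea (my illustration, not the paper’s): if a device’s Rx gate systematically over-rotates by a known angle eps, a noise-aware circuit can simply ask for theta - eps and recover the ideal operation, whereas the textbook circuit silently loses fidelity.

```python
# Illustrative only: compensating a known, systematic over-rotation.
# The error model and the value of EPS are assumptions for this sketch.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
EPS = 0.1  # assumed systematic over-rotation of this device's Rx gate

def rx(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def hardware_rx(theta):
    """The imperfect gate: it always rotates a little too far."""
    return rx(theta + EPS)

psi0 = np.array([1, 0], dtype=complex)        # start in |0>
ideal = rx(np.pi) @ psi0                      # what Rx(pi) should produce

naive = hardware_rx(np.pi) @ psi0             # textbook circuit, run as-is
learned = hardware_rx(np.pi - EPS) @ psi0     # noise-aware circuit

print("naive fidelity:  ", abs(ideal.conj() @ naive) ** 2)    # below 1
print("learned fidelity:", abs(ideal.conj() @ learned) ** 2)  # exactly 1
```

Genuine dynamically corrected gates handle errors that a single offset cannot undo, but they rest on the same trade: extra or adjusted gates in exchange for resilience.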

Another common source of error in quantum computing is called drift, and it afflicts the do-nothing, or idle, gate: a qubit left undisturbed by the algorithm has a quantum state that drifts, like a boat on a lake. If its state is a certain electron spin, for example, the earth’s magnetic field might cause a tiny alteration in that spin. For that reason, NACL rarely chooses to let a qubit sit and do nothing; the machine learning wants a gate to do something.
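
In the spin example, that drift can be pictured as an unwanted rotation about the z axis (a generic single-qubit model, not the paper’s specific one): even though no gate is applied, the relative phase between the qubit’s two components keeps turning.

```latex
% Idle ("do-nothing") evolution under a stray field of strength omega:
% the relative phase between |0> and |1> drifts while no gate is applied.
\[
  \lvert \psi(t) \rangle = e^{-i \omega t Z / 2}\, \lvert \psi(0) \rangle ,
  \qquad
  \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle
  \;\mapsto\;
  \alpha e^{-i \omega t / 2} \lvert 0 \rangle
  + \beta e^{+i \omega t / 2} \lvert 1 \rangle .
\]
```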

Classical training, quantum results

Coles said the team’s theoretical work involved developing a noise model of the quantum computer of interest, putting that model on a classical desktop computer, and then training the machine learning on that model. After training, the machine learning outputs a circuit, or algorithm, adapted to that particular quantum computer’s noise model.

The team then transferred the resulting algorithm to the quantum computer and evaluated its outcomes on target problems. The evaluation is based on how closely the observed output matches the output expected, by standard measures, for a known problem.
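
One simple version of such a check, offered here as an assumption rather than the paper’s exact metric, scores the measured output distribution against the ideal one using the total variation distance:

```python
# Compare a device's measured output distribution with the known ideal one
# via total variation distance (0 = perfect match, 1 = completely disjoint).
# Illustrative only; the paper's evaluation metrics are task-specific.
def total_variation(measured: dict, ideal: dict) -> float:
    keys = set(measured) | set(ideal)
    return 0.5 * sum(abs(measured.get(k, 0.0) - ideal.get(k, 0.0)) for k in keys)

# Hypothetical counts for a two-qubit circuit whose ideal output is a Bell state.
shots = {"00": 478, "01": 31, "10": 26, "11": 465}
n = sum(shots.values())
measured = {k: v / n for k, v in shots.items()}
ideal = {"00": 0.5, "11": 0.5}

print(f"TV distance: {total_variation(measured, ideal):.3f}")
```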

NACL brings a few advantages compared to other methods of compiling circuits for qubits. For instance, NACL can automatically derive known noise suppression concepts and apply them where they are useful. It also incorporates common-sense strategies such as minimizing the number of noisy idle gates and maximizing the use of ideal gates.

“For the future, it will be important to figure out how to scale NACL to develop noise-resilient circuits for larger devices,” Coles said.

Co-authors of the paper are Lukasz Cincio, also of Los Alamos, and Kenneth Rudinger and Mohan Sarovar, of Sandia National Laboratories.

The paper: “Machine learning of noise-resilient quantum circuits,” Lukasz Cincio, Kenneth Rudinger, Mohan Sarovar, and Patrick J. Coles, PRX Quantum, Feb. 16, 2021.

The funding: Funding was provided by Los Alamos National Laboratory’s Laboratory Directed Research and Development (LDRD) program.

About Los Alamos National Laboratory

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is managed for the Department of Energy’s National Nuclear Security Administration by Triad, a public-service-oriented national security science organization equally owned by its three founding members: Battelle Memorial Institute (Battelle), the Texas A&M University System (TAMUS), and the Regents of the University of California (UC).

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.
