
AUTHOR

Yongcheng Ding

0000-0002-6008-0001

Retrieving Quantum Information with Active Learning

Active learning is a machine learning method aiming at optimal design for model training. In contrast to supervised learning, which labels all samples, active learning improves the model by labeling only the samples with maximal uncertainty according to the current estimation model. Here, we propose the use of active learning for efficient quantum information retrieval, which is a crucial task in the design of quantum experiments. Meanwhile, when dealing with large data output, we employ active learning for classification with minimal cost in fidelity loss. Indeed, by labeling only 5% of the samples, we achieve almost 90% rate estimation. The introduction of active learning methods in the data a…
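
As an illustration of the uncertainty-sampling strategy described above, the following is a minimal sketch of pool-based active learning on synthetic data, assuming a scikit-learn logistic-regression classifier and a least-confidence query rule; the paper's actual quantum-measurement data and estimation model are not reproduced here.

    # Minimal sketch of pool-based active learning with uncertainty sampling
    # (least-confidence queries). Synthetic data and a generic classifier stand
    # in for the quantum-measurement data and estimation model of the paper.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    labeled = [int(i) for i in rng.choice(len(X), size=20, replace=False)]  # small seed set
    pool = [i for i in range(len(X)) if i not in labeled]
    budget = int(0.05 * len(X))  # label roughly 5% of the samples in total

    model = LogisticRegression(max_iter=1000)
    while len(labeled) < budget:
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[pool])
        # query the pool sample whose top-class probability is lowest (most uncertain)
        query = pool[int(np.argmin(proba.max(axis=1)))]
        labeled.append(query)
        pool.remove(query)

    model.fit(X[labeled], y[labeled])
    print(f"accuracy with {len(labeled)} labels:", model.score(X, y))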

research product

Toward Prediction of Financial Crashes with a D-Wave Quantum Annealer

The prediction of financial crashes in a complex financial network is known to be an NP-hard problem, meaning that no known algorithm can efficiently find optimal solutions. We experimentally explore a novel approach to this problem by using a D-Wave quantum annealer, benchmarking its performance for attaining a financial equilibrium. To be specific, the equilibrium condition of a nonlinear financial model is embedded into a higher-order unconstrained binary optimization (HUBO) problem, which is then transformed into a spin-1/2 Hamiltonian with at most two-qubit interactions. The problem is thus equivalent to finding the ground state of an interacting spin Hamiltonian, which can be app…
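
For context, the step of reducing higher-order terms to at most two-body interactions can be sketched with the standard auxiliary-variable (Rosenberg) quadratization. The toy polynomial and penalty weight below are illustrative assumptions, not the coefficients of the financial model or the actual D-Wave embedding.

    # Minimal sketch of reducing a cubic HUBO term to quadratic (QUBO) form by
    # introducing an auxiliary binary variable, the standard step behind mapping
    # higher-order terms onto at most two-body interactions.

    def quadratize_cubic(poly, penalty=10.0):
        """Replace each cubic term a*x*y*z by a*w*z plus the Rosenberg penalty
        penalty*(x*y - 2*x*w - 2*y*w + 3*w), which enforces w = x*y at the
        minimum for a sufficiently large penalty weight."""
        qubo, aux = {}, 0
        for term, coeff in poly.items():
            if len(term) <= 2:
                qubo[term] = qubo.get(term, 0.0) + coeff
                continue
            x, y, z = term
            w = f"w{aux}"
            aux += 1
            qubo[(w, z)] = qubo.get((w, z), 0.0) + coeff       # a*w*z replaces a*x*y*z
            qubo[(x, y)] = qubo.get((x, y), 0.0) + penalty     # penalty terms below
            qubo[(x, w)] = qubo.get((x, w), 0.0) - 2 * penalty
            qubo[(y, w)] = qubo.get((y, w), 0.0) - 2 * penalty
            qubo[(w,)] = qubo.get((w,), 0.0) + 3 * penalty
        return qubo

    # toy higher-order objective with linear, quadratic, and one cubic term
    hubo = {("x0",): -1.0, ("x0", "x1"): 2.0, ("x0", "x1", "x2"): -3.0}
    print(quadratize_cubic(hubo))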

research product

Experimentally Realizing Efficient Quantum Control with Reinforcement Learning

Robust and high-precision quantum control is crucial but challenging for scalable quantum computation and quantum information processing. Traditional adiabatic control suffers severe limitations on gate performance imposed by environmentally induced noise because of a quantum system's limited coherence time. In this work, we experimentally demonstrate an alternative approach to quantum control based on deep reinforcement learning (DRL) on a trapped $^{171}\mathrm{Yb}^{+}$ ion. In particular, we find that DRL leads to fast and robust digital quantum operations with running time bounded by shortcuts to adiabaticity (STA). Moreover, we demonstrate that DRL's robustness against both Rabi and…
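
To make the control setting concrete, below is a minimal sketch of the kind of environment a DRL agent could interact with: piecewise-constant Rabi amplitudes applied to a simulated two-level system, with final-state fidelity as the reward. The trapped-ion hardware, pulse parametrization, and agent used in the experiment are not reproduced; all parameter values are illustrative assumptions.

    # Minimal sketch of a control environment: the agent sets one Rabi amplitude
    # per time slice of a simulated two-level system and is rewarded by the final
    # population in the target state |1>. Parameter values are illustrative.
    import numpy as np
    from scipy.linalg import expm

    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

    class QubitFlipEnv:
        """Digitized qubit control: n_steps equal slices, one amplitude per slice."""

        def __init__(self, n_steps=10, total_time=1.0, detuning=1.0):
            self.n_steps, self.dt, self.detuning = n_steps, total_time / n_steps, detuning
            self.reset()

        def reset(self):
            self.state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
            self.step_count = 0
            return self.state.copy()

        def step(self, amplitude):
            # piecewise-constant H = (detuning/2) sigma_z + (amplitude/2) sigma_x
            H = 0.5 * self.detuning * sigma_z + 0.5 * amplitude * sigma_x
            self.state = expm(-1j * H * self.dt) @ self.state
            self.step_count += 1
            done = self.step_count == self.n_steps
            reward = abs(self.state[1]) ** 2 if done else 0.0  # fidelity with |1> at the end
            return self.state.copy(), reward, done

    env = QubitFlipEnv()
    env.reset()
    for _ in range(env.n_steps):  # a random policy stands in for the trained agent
        _, reward, done = env.step(np.random.uniform(0.0, 4.0 * np.pi))
    print("final |1> population:", reward)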

research product

Breaking adiabatic quantum control with deep learning

In the era of digital quantum computing, optimal digitized pulses are requisite for efficient quantum control. This goal is translated into a dynamic programming problem, at which a deep reinforcement learning (DRL) agent is naturally gifted. As a reference, shortcuts to adiabaticity (STA) provide analytical approaches to adiabatic speedup by pulse control. Here, we select single-component control of qubits, resembling the ubiquitous two-level Landau-Zener problem for gate operation. We aim at obtaining fast and robust digital pulses by combining STA and the DRL algorithm. In particular, we find that DRL leads to robust digital quantum control with operation time bounded by quantum speed limits dictated by STA. I…
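
As a rough illustration of the STA reference point, the sketch below compares a plain digitized Landau-Zener sweep with the same sweep assisted by a counterdiabatic term, one standard STA construction. The pulse shapes and parameters are illustrative assumptions and do not correspond to the paper's optimized DRL pulses.

    # Minimal sketch contrasting a plain digitized Landau-Zener sweep with the
    # same sweep assisted by the counterdiabatic term H_cd = (1/2) dtheta/dt sigma_y,
    # where theta(t) = arctan(Omega / Delta(t)). Parameters are illustrative.
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def transfer(total_time, n_steps=50, omega=1.0, counterdiabatic=False):
        dt = total_time / n_steps
        ts = np.linspace(0.0, total_time, n_steps + 1)
        detuning = lambda t: -10.0 + 20.0 * t / total_time      # linear sweep Delta(t)
        theta = lambda t: np.arctan2(omega, detuning(t))        # instantaneous mixing angle
        psi = np.array([1.0, 0.0], dtype=complex)               # |0>, near the initial ground state
        for t0, t1 in zip(ts[:-1], ts[1:]):
            tm = 0.5 * (t0 + t1)
            H = 0.5 * detuning(tm) * sz + 0.5 * omega * sx
            if counterdiabatic:
                H = H + 0.5 * (theta(t1) - theta(t0)) / dt * sy  # digitized STA correction
            psi = expm(-1j * H * dt) @ psi
        return abs(psi[1]) ** 2                                  # population transferred to |1>

    for T in (0.5, 2.0):
        print(f"T={T}: plain={transfer(T):.3f}, with STA term={transfer(T, counterdiabatic=True):.3f}")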

research product