The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime

Mark Webber, Vincent Elfving, Sebastian Weidt, and Winfried Karl Hensinger. AVS Quantum Science.
We investigate how hardware specifications can impact the final run time and the required number of physical qubits to achieve a quantum advantage in the fault tolerant regime. Within a particular time frame, both the code cycle time and the number of achievable physical qubits may vary by orders of magnitude between different quantum hardware designs. We start with logical resource requirements corresponding to a quantum advantage for a particular chemistry application, simulating the FeMo-co… 
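The abstract's central trade-off can be sketched numerically. The numbers below are illustrative assumptions only, not the paper's values; the sketch uses the common surface-code rule of thumb that a logical operation costs on the order of the code distance in code cycles.

```python
# Hedged sketch: how code cycle time feeds into total runtime in a
# fault-tolerant surface-code computation. All inputs are assumptions.

def runtime_seconds(logical_ops, code_distance, code_cycle_time_s):
    """Wall-clock time if each logical operation takes roughly
    `code_distance` code cycles (a standard surface-code assumption)."""
    return logical_ops * code_distance * code_cycle_time_s

# Illustrative hardware regimes (order-of-magnitude assumptions only):
fast_cycle = runtime_seconds(1e10, 25, 1e-6)  # ~1 us code cycle
slow_cycle = runtime_seconds(1e10, 25, 1e-3)  # ~1 ms code cycle

print(f"{fast_cycle:.1e} s vs {slow_cycle:.1e} s")
```

At the same logical workload and code distance, a thousand-fold difference in code cycle time translates directly into a thousand-fold difference in wall-clock runtime, which is why code cycle time is treated as a first-class hardware specification.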


A high-fidelity quantum matter-link between ion-trap microchip modules
System scalability is fundamental for large-scale quantum computers (QCs) and is being pursued over a variety of hardware platforms [1–6]. For QCs based on trapped ions, architectures such as the…


Surface codes: Towards practical large-scale quantum computation
The stabilizer concept, using two qubits, is introduced, and the single-qubit Hadamard, S, and T operators are described, completing the set of gates required for a universal quantum computer.
A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery
No knowledge of quantum error correction is necessary to understand the schemes in this paper, which are based on surface-code patches; only the concepts of qubits and measurements are needed.
Validating quantum computers using randomized model circuits
Introduces a single-number metric, quantum volume, that can be measured using a concrete protocol on near-term quantum computers of modest size; measured on several state-of-the-art transmon devices, it reaches values as high as 16.
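The quantum-volume definition summarized above can be sketched as follows. The pass/fail data is invented for illustration; a real benchmark runs randomized square circuits and checks the heavy-output probability against a 2/3 threshold.

```python
# Sketch of the quantum-volume definition: QV = 2**n, where n is the
# largest "square" circuit size (width = depth = n) at which the device
# passes the heavy-output test. The success data below is hypothetical.

def quantum_volume(success_by_size):
    """Map circuit size n -> whether the heavy-output test passed."""
    passed = [n for n, ok in success_by_size.items() if ok]
    return 2 ** max(passed) if passed else 1

# A hypothetical device that succeeds up to n = 4 has QV = 16:
print(quantum_volume({2: True, 3: True, 4: True, 5: False}))  # → 16
```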
Quantum Algorithm for Spectral Measurement with a Lower Gate Count.
Two techniques are presented that can greatly reduce the number of gates required to realize an energy measurement, with application to ground state preparation in quantum simulations, and a unitary operator is proposed which can be implemented exactly, circumventing any Taylor or Trotter approximation errors.
Topological quantum memory
We analyze surface codes, the topological quantum error-correcting codes introduced by Kitaev. In these codes, qubits are arranged in a two-dimensional array on a surface of nontrivial topology, and…
Fast quantum logic gates with trapped-ion qubits
This work demonstrates entanglement generation for gate times as short as 480 nanoseconds—less than a single oscillation period of an ion in the trap and eight orders of magnitude shorter than the memory coherence time measured in similar calcium-43 hyperfine qubits.
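The "eight orders of magnitude" claim is easy to sanity-check. The ~50 s coherence time used below is an assumption typical of reported hyperfine-qubit memory experiments, not a figure from this snippet.

```python
import math

# Ratio of memory coherence time to gate time, in orders of magnitude.
gate_time_s = 480e-9     # 480 ns entangling gate (from the snippet)
coherence_time_s = 50.0  # assumed 43Ca+ hyperfine coherence time

orders = math.log10(coherence_time_s / gate_time_s)
print(round(orders))  # → 8
```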
Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer
The proposed architecture enables exploiting state-of-the-art procedures for the non-deterministic generation of bosonic qubits combined with the strengths of continuous-variable quantum computation, namely the implementation of Clifford gates using easy-to-generate squeezed states.
A silicon-based surface code quantum computer
A simple ‘orbital probe’ architecture overcomes many of the difficulties facing solid-state quantum computing, while minimising the complexity and offering qubit densities that are several orders of magnitude greater than other systems.
Hierarchical surface code for network quantum computing with modules of arbitrary size
A hierarchical generalization of the surface code is introduced: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code, in order to optimize fault tolerance in such architectures.
Quantum supremacy using a programmable superconducting processor
Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute.