Corpus ID: 240070921

Subtleties in the trainability of quantum machine learning models

Supanut Thanasilp, Samson Wang, Nhat A. Nghiem, Patrick J. Coles, María Cerezo
A new paradigm for data science has emerged, with quantum data, quantum models, and quantum computational devices. This field, called Quantum Machine Learning (QML), aims to achieve a speedup over traditional machine learning for data analysis. However, its success usually hinges on efficiently training the parameters in quantum neural networks, and the field of QML still lacks theoretical scaling results for their trainability. Some trainability results have been proven for a closely related…


Exponential concentration and untrainability in quantum kernel methods

This work shows that, under certain conditions, values of quantum kernels over different input data can be exponentially concentrated towards some value, leading to an exponential scaling of the number of measurements required for successful training.
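As a rough numerical illustration of the concentration effect described above (not the paper's construction), one can assume the embedded quantum states behave like Haar-random vectors: the fidelity-style kernel value k(x, x') = |⟨ψ(x)|ψ(x')⟩|² then concentrates around 1/2ⁿ for n qubits, so resolving it requires exponentially many measurement shots. The `random_state` helper below is a hypothetical name for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim, rng):
    """Draw a Haar-random pure state as a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Kernel value k(x, x') = |<psi(x)|psi(x')>|^2 for effectively Haar-random
# embeddings: its mean is 1/2^n, so it shrinks exponentially with qubit count.
for n_qubits in (2, 4, 6, 8):
    dim = 2 ** n_qubits
    samples = [abs(np.vdot(random_state(dim, rng), random_state(dim, rng))) ** 2
               for _ in range(2000)]
    print(n_qubits, np.mean(samples), 1 / dim)
```

The printed sample means track 1/2ⁿ, matching the exponential-concentration picture sketched in the summary above.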

Theoretical Guarantees for Permutation-Equivariant Quantum Neural Networks

This work provides the first theoretical guarantees for equivariant QNNs, thus indicating the power and potential of geometric quantum machine learning (GQML).

Challenges and opportunities in quantum machine learning

Current methods and applications for quantum machine learning are reviewed, including differences between quantum and classical machine learning, with a focus on quantum neural networks and quantum deep learning.

Optimisation-free Classification and Density Estimation with Quantum Circuits

A variational quantum circuit approach is discussed that could leverage quantum advantage to implement a novel machine learning framework for probability density estimation and classification.

Fock state-enhanced expressivity of quantum machine learning models

A photonic-based bosonic data-encoding scheme is proposed that embeds classical data points using fewer encoding layers and circumvents the need for nonlinear optical components by mapping the data points into the high-dimensional Fock space.

Exponential data encoding for quantum supervised learning

It is numerically demonstrated that even exponential-data-encoding circuits with single-layer training modules can generally express functions that lie outside the classically-expressible region, thereby supporting the practical benefits of such a resource advantage.
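The resource advantage mentioned above can be sketched through the Fourier-spectrum picture of data re-uploading circuits: assuming single-qubit Pauli encodings whose generators are rescaled per layer (the scalings below are illustrative, not taken from the paper), each layer with scaling t contributes a frequency of −t, 0, or +t, and exponentially growing scalings yield an exponentially larger accessible spectrum than repeating the same scaling.

```python
from itertools import product

def accessible_frequencies(scalings):
    """Fourier spectrum of a Pauli-encoding model: each encoding layer with
    generator scaling t contributes a frequency in {-t, 0, +t}; the model can
    express frequencies that are any sum of one choice per layer."""
    return sorted({sum(choice) for choice in product(*[(-t, 0, t) for t in scalings])})

# Linear encoding: three identical layers reach only the 7 integers in [-3, 3].
print(accessible_frequencies([1, 1, 1]))
# Exponential encoding: scalings 1, 2, 4 reach all 15 integers in [-7, 7].
print(accessible_frequencies([1, 2, 4]))
```

With L layers, identical scalings give 2L + 1 frequencies while doubling scalings give 2^(L+1) − 1, which is one way to read the "exponential data encoding" advantage numerically demonstrated in the cited work.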

Quantum Mixed State Compiling

This work presents a variational quantum algorithm (VQA) to learn mixed states that is suitable for near-term hardware, and investigates the efficacy of the algorithm through extensive numerical implementations, showing that typical random states and thermal states of many-body systems may be learnt this way.

Diagnosing Barren Plateaus with Tools from Quantum Optimal Control

Variational Quantum Algorithms (VQAs) have received considerable attention due to their potential for achieving near-term quantum advantage. However, more work is needed to understand their trainability.

Generalization in quantum machine learning from few training data

This work provides a comprehensive study of generalization performance in QML after training on a limited number N of training data points, reporting rigorous bounds on the generalization error in variational QML and confirming that known implementable models generalize well from an efficient amount of training data.

Equivalence of quantum barren plateaus to cost concentration and narrow gorges

This work analytically proves the connection between three landscape features that have been observed for PQCs: exponentially vanishing gradients (barren plateaus), exponential cost concentration about the mean, and the exponential narrowness of minima.
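The cost-concentration side of this equivalence admits a minimal numerical sketch (not the paper's proof), under the common modelling assumption that a deep, unstructured ansatz acting on n qubits behaves like a Haar-random unitary: the cost C(U) = ⟨0|U†HU|0⟩ for a traceless observable H then concentrates at Tr(H)/2ⁿ = 0, with variance shrinking exponentially in n.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix,
    with the column phases fixed so the distribution is uniform."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# Cost C(U) = <0| U^dag H U |0> with H = Z on the first qubit (traceless):
# U|0> is the first column of U, and the sample variance of C shrinks as the
# qubit count grows -- the cost concentrates about its mean of zero.
for n_qubits in (2, 4, 6):
    dim = 2 ** n_qubits
    h = np.kron(np.diag([1.0, -1.0]), np.eye(dim // 2))  # Z x I x ... x I
    costs = [np.real(np.conj(U[:, 0]) @ h @ U[:, 0])
             for U in (haar_unitary(dim, rng) for _ in range(500))]
    print(n_qubits, np.var(costs))
```

For Haar-random states the variance of ⟨ψ|H|ψ⟩ is Tr(H²)/(d(d+1)) = 1/(d+1) here, so the printed variances fall roughly as 1/2ⁿ, consistent with the exponential concentration the paper links to barren plateaus.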



Power of data in quantum machine learning

This work shows that some problems that are classically hard to compute can nevertheless be easily predicted by classical machine learning models trained on data, and proposes a projected quantum model that provides a simple and rigorous quantum speed-up for a learning problem in the fault-tolerant regime.

Expressibility and trainability of parametrized analog quantum systems for machine learning applications

This work investigates how the interplay between external driving and disorder in the system dictates the trainability and expressibility of interacting quantum systems, and devises a protocol using quenched many-body localized (MBL) dynamics which allows accurate trainability while keeping the overall dynamics in the quantum supremacy regime.

Quantum Generative Training Using Rényi Divergences

This work examines the assumptions that give rise to barren plateaus and shows that an unbounded loss function can circumvent the existing no-go results; it proposes a training algorithm that minimizes the maximal Rényi divergence of order two and presents techniques for gradient computation.
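The role of unboundedness can be seen directly from the order-2 Rényi divergence itself, D₂(p‖q) = log Σᵢ pᵢ²/qᵢ: unlike a bounded cost, it diverges as the model distribution q assigns vanishing probability to events that p supports. The small sketch below (hedged: discrete distributions only, not the paper's quantum training loop) makes that behaviour concrete.

```python
import numpy as np

def renyi2(p, q):
    """Order-2 Renyi divergence D2(p||q) = log sum_i p_i^2 / q_i
    between discrete probability distributions (natural log)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** 2 / q))

p = np.array([0.5, 0.5])
print(renyi2(p, p))          # 0.0 when the model matches the target exactly
for eps in (1e-1, 1e-3, 1e-6):
    q = np.array([1 - eps, eps])
    print(eps, renyi2(p, q))  # grows without bound as q's second entry -> 0
```

Because the loss blows up where the model badly mismatches the target, its gradients there are large rather than exponentially suppressed, which is the intuition behind circumventing the bounded-loss no-go results mentioned above.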

Quantum embeddings for machine learning

This work proposes to train the first part of the circuit with the objective of maximally separating data classes in Hilbert space, a strategy it calls quantum metric learning, which provides a powerful analytic framework for quantum machine learning.

The power of quantum neural networks

This work is the first to demonstrate that well-designed quantum neural networks offer an advantage over classical neural networks through a higher effective dimension and faster trainability, which is verified on real quantum hardware.

Optimal training of variational quantum algorithms without barren plateaus

This work identifies a VQA for quantum simulation with such a constraint that it can be trained free of barren plateaus, and introduces the generalized quantum natural gradient, which features stability and optimized movement in parameter space.

Information-theoretic bounds on quantum advantage in machine learning

It is proven that for any input distribution D(x), a classical ML model can provide accurate predictions on average by accessing the process E a number of times comparable to the optimal quantum ML model, while an exponential quantum advantage remains possible for the task of predicting accurately on all inputs.

Quantum circuit architecture search: error mitigation and trainability enhancement for variational quantum solvers

QAS implicitly learns a rule that suppresses the influence of quantum noise and barren plateaus, and is implemented on both a numerical simulator and real quantum hardware via the IBM cloud to accomplish data classification and quantum ground-state approximation tasks.

Classification with Quantum Neural Networks on Near Term Processors

This work introduces a quantum neural network (QNN) that can represent labeled data, classical or quantum, and be trained by supervised learning, and shows through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets.

Training deep quantum neural networks

A noise-robust architecture is proposed for a feedforward quantum neural network, with qudits as neurons and arbitrary unitary operations as perceptrons, whose training procedure is efficient in the number of layers.