A Quantum Generative Adversarial Network for distributions

  Amine Assouel, Antoine Jacquier, Alex Kondratyev. "A Quantum Generative Adversarial Network for distributions." CGN: Risk Management.
Generative Adversarial Networks are becoming a fundamental tool in Machine Learning, in particular in the context of improving the stability of deep neural networks. At the same time, recent advances in Quantum Computing have shown that, despite the absence of a fault-tolerant quantum computer so far, quantum techniques can provide an exponential advantage over their classical counterparts. We develop a fully connected Quantum Generative Adversarial Network and show how it can be applied in…


Quantum generative adversarial networks
This work extends adversarial training to the quantum domain and shows how to construct generative adversarial networks using quantum circuits, as well as how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit.
Quantum generative adversarial learning in a superconducting quantum circuit
It is demonstrated that, after several rounds of adversarial learning, a quantum-state generator can be trained to replicate the statistics of the quantum data output from a quantum channel simulator with high fidelity, so that the discriminator cannot distinguish between the true and the generated data.
Quantum Generative Adversarial Learning
The notion of quantum generative adversarial networks is introduced, where the data consist either of quantum states or of classical data, and the generator and discriminator are equipped with quantum information processors, and it is shown that the unique fixed point of the quantum adversarial game also occurs when the generator produces the same statistics as the data.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
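The adversarial process described here is a two-player minimax game over the value function V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))]. A minimal sketch of a Monte Carlo estimate of this objective from discriminator outputs (the toy batch values are illustrative, not from the paper):

```python
import numpy as np

def gan_value(d_real, d_fake):
    # Monte Carlo estimate of the GAN value function
    # V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    # given the discriminator's outputs on a real batch and a
    # generated batch (each entry a probability in (0, 1)).
    return np.mean(np.log(d_real)) + np.mean(np.log1p(-d_fake))

# At the game's equilibrium the discriminator outputs 1/2 everywhere,
# and the value function equals -log 4.
half = np.full(4, 0.5)
v_star = gan_value(half, half)
```

This makes the fixed-point claim in the "Quantum Generative Adversarial Learning" summary above concrete: when the generator matches the data statistics, no discriminator can do better than outputting 1/2.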
Generative adversarial networks for financial trading strategies fine-tuning and combination
This work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation, and suggests that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
Barren plateaus in quantum neural network training landscapes
It is shown that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits.
Generative Modeling Using the Sliced Wasserstein Distance
This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation and finds its approach to be significantly more stable compared to even the improved Wasserstein GAN.
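The random-projection idea behind the sliced Wasserstein distance reduces an expensive high-dimensional transport problem to many cheap one-dimensional ones, where optimal transport simply matches sorted samples. A textbook sketch (not the paper's code; the projection count and seed are arbitrary choices):

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=200, seed=0):
    # Approximate the sliced Wasserstein-2 distance between two equal-size
    # point clouds by averaging 1-D transport costs over random directions.
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # random unit direction
        # Project both samples onto the direction and sort: in 1-D,
        # optimal transport pairs the sorted samples.
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean((px - py) ** 2)
    return total / n_projections

x = np.random.default_rng(1).normal(size=(500, 2))
dist = sliced_wasserstein(x, x)  # identical samples -> zero distance
```

Because each projection yields a plain regression-style objective, the generator can be trained on this single loss rather than a saddle-point game, which is the stability argument the summary above makes.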
Improved Training of Wasserstein GANs
This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
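The gradient penalty replaces weight clipping by penalizing the critic's input-gradient norm at random interpolates between real and generated samples. A simplified sketch, assuming a linear critic f(x) = w·x so the input gradient is the constant w and the penalty has a closed form (a real critic would obtain this gradient via autodiff):

```python
import numpy as np

def gradient_penalty_linear(w, x_real, x_fake, lam=10.0, seed=0):
    # WGAN-GP penalty for a linear critic f(x) = w . x, whose gradient
    # with respect to the input is w everywhere. Linear critic and the
    # resulting closed form are simplifying assumptions for illustration.
    rng = np.random.default_rng(seed)
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1 - eps) * x_fake  # random interpolates
    grad_norm = np.linalg.norm(w)              # ||grad f|| is constant here
    # Penalize deviation of the gradient norm from 1 at each interpolate.
    return lam * np.mean((np.full(len(x_hat), grad_norm) - 1.0) ** 2)

x_real = np.zeros((8, 2))
x_fake = np.ones((8, 2))
penalty_unit = gradient_penalty_linear(np.array([1.0, 0.0]), x_real, x_fake)
```

With a unit-norm w the penalty vanishes, matching the 1-Lipschitz target; any larger or smaller gradient norm is penalized quadratically.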
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Evaluating analytic gradients on quantum hardware
This paper shows how gradients of expectation values of quantum measurements can be estimated using the same, or almost the same, architecture that executes the original circuit, and proposes recipes for the computation of gradients for continuous-variable circuits.
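The parameter-shift rule behind this result can be sketched with a single-qubit example: the expectation ⟨Z⟩ after an RY(θ) rotation of |0⟩ is cos(θ), so the circuit can be stood in for by that analytic function (the choice of circuit is illustrative, not from the paper):

```python
import math

def expectation(theta):
    # Analytic expectation <Z> of RY(theta)|0>; stands in for
    # evaluating the circuit on quantum hardware.
    return math.cos(theta)

def parameter_shift_gradient(f, theta, shift=math.pi / 2):
    # Parameter-shift rule: for gates generated by operators with two
    # eigenvalues, the exact gradient comes from two evaluations of the
    # same circuit at shifted parameter values -- no finite differences.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_gradient(expectation, theta)
# Matches the exact derivative -sin(theta).
```

The key point the summary makes is that the two shifted evaluations reuse the original circuit architecture, so gradients are measurable on the same hardware that runs the model.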