A Quantum Generative Adversarial Network for distributions

@article{Assouel2021AQG,
  title={A Quantum Generative Adversarial Network for distributions},
  author={Amine Assouel and Antoine Jacquier and Alex Kondratyev},
  journal={CGN: Risk Management},
  year={2021}
}
Generative Adversarial Networks are becoming a fundamental tool in Machine Learning, in particular in the context of improving the stability of deep neural networks. At the same time, recent advances in Quantum Computing have shown that, despite the absence of a fault-tolerant quantum computer so far, quantum techniques are providing exponential advantage over their classical counterparts. We develop a fully connected Quantum Generative Adversarial Network and show how it can be applied in…
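As a purely illustrative sketch of the quantum GAN idea (not the paper's architecture), the toy example below uses PennyLane: a small parameterized quantum circuit produces a distribution over two-qubit basis states, and a simple logistic discriminator is trained adversarially against it. The framework, layer count, target distribution and optimizer settings are all assumptions made for the example.

# Illustrative sketch only: a tiny parameterized quantum circuit whose output
# probabilities over 2 qubits (4 basis states) are trained adversarially to
# match a toy target distribution. None of these choices come from the paper.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 2, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(weights):
    # Alternating single-qubit RY rotations and an entangling CNOT per layer.
    for layer in range(n_layers):
        for q in range(n_qubits):
            qml.RY(weights[layer, q], wires=q)
        qml.CNOT(wires=[0, 1])
    return qml.probs(wires=range(n_qubits))  # generated distribution over {00,01,10,11}

def discriminator(phi, x):
    # Toy logistic score D(x) in (0,1), with the basis-state index as the only feature.
    return 1.0 / (1.0 + np.exp(-(phi[0] + phi[1] * x)))

target = np.array([0.5, 0.1, 0.1, 0.3])  # arbitrary target distribution
states = np.arange(4)

def disc_loss(phi, weights):
    # Discriminator wants high scores on real mass, low scores on generated mass.
    g = generator(weights)
    d = discriminator(phi, states)
    return -np.sum(target * np.log(d) + g * np.log(1.0 - d))

def gen_loss(weights, phi):
    # Generator wants its mass to be classified as real.
    g = generator(weights)
    return -np.sum(g * np.log(discriminator(phi, states)))

weights = np.random.uniform(0, np.pi, (n_layers, n_qubits), requires_grad=True)
phi = np.array([0.0, 1.0], requires_grad=True)
opt = qml.GradientDescentOptimizer(0.1)
for step in range(100):
    phi = opt.step(lambda v: disc_loss(v, weights), phi)       # discriminator update
    weights = opt.step(lambda v: gen_loss(v, phi), weights)    # generator update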

References

Showing 1-10 of 44 references
Quantum generative adversarial networks
This work extends adversarial training to the quantum domain and shows how to construct generative adversarial networks using quantum circuits, as well as how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit.
Quantum generative adversarial learning in a superconducting quantum circuit
It is demonstrated that, after several rounds of adversarial learning, a quantum-state generator can be trained to replicate the statistics of the quantum data output from a quantum channel simulator, with high enough fidelity that the discriminator cannot distinguish the true data from the generated data.
Quantum Generative Adversarial Learning.
The notion of quantum generative adversarial networks is introduced, where the data consist either of quantum states or of classical data and the generator and discriminator are equipped with quantum information processors; it is shown that, as in the classical setting, the unique fixed point of the quantum adversarial game occurs when the generator produces the same statistics as the data.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
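For reference, the adversarial objective introduced in that paper is the two-player minimax game

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],

where p_z is a prior on the generator's input noise.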
Generative adversarial networks for financial trading strategies fine-tuning and combination
This work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading-strategy calibration and aggregation, and suggests that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when traditional techniques fail to generate any alpha.
Barren plateaus in quantum neural network training landscapes
It is shown that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits.
Generative Modeling Using the Sliced Wasserstein Distance
This work considers an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation, and is found to be significantly more stable than even the improved Wasserstein GAN.
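A minimal sketch of how such a sliced distance can be computed between two sample sets, using plain NumPy and assuming equal sample sizes; the number of random projections is an arbitrary choice:

# Sliced Wasserstein distance: project both sample sets onto random directions,
# then use the closed-form 1D Wasserstein distance (sorted samples) per direction.
import numpy as np

def sliced_wasserstein(x, y, n_projections=50, p=2, rng=None):
    """x, y: arrays of shape (n_samples, dim) with equal n_samples."""
    rng = np.random.default_rng() if rng is None else rng
    dim = x.shape[1]
    # Random directions on the unit sphere.
    theta = rng.normal(size=(n_projections, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project, sort, and compare quantiles (exact 1D p-Wasserstein per projection).
    xp = np.sort(x @ theta.T, axis=0)
    yp = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(xp - yp) ** p) ** (1.0 / p)

# Example: distance between two Gaussian clouds with shifted means.
rng = np.random.default_rng(0)
print(sliced_wasserstein(rng.normal(0, 1, (500, 2)), rng.normal(1, 1, (500, 2)), rng=rng))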
Improved Training of Wasserstein GANs
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
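A sketch of that gradient penalty in PyTorch, assuming a critic that takes flat (batch, features) tensors; the critic module and the penalty coefficient are placeholders:

# WGAN-GP style penalty: push the critic's input-gradient norm towards 1 at
# points interpolated between real and generated samples.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    eps = torch.rand(real.size(0), 1, device=real.device)        # per-sample mixing weight
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()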
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), subject to certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
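As a rough illustration of those constraints (strided transposed convolutions instead of pooling or fully connected upsampling, batch normalization, ReLU hidden activations and a Tanh output), here is a PyTorch sketch of a DCGAN-style generator, not the paper's exact network; the channel widths and 64x64 output resolution are arbitrary:

# DCGAN-style generator: maps a (latent_dim, 1, 1) noise tensor to a 64x64 image.
import torch.nn as nn

def dcgan_generator(latent_dim=100, base_channels=64, out_channels=3):
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, base_channels * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(base_channels * 8), nn.ReLU(True),
        nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels * 4), nn.ReLU(True),
        nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels * 2), nn.ReLU(True),
        nn.ConvTranspose2d(base_channels * 2, base_channels, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base_channels), nn.ReLU(True),
        nn.ConvTranspose2d(base_channels, out_channels, 4, 2, 1, bias=False),
        nn.Tanh(),
    )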
Evaluating analytic gradients on quantum hardware
An important application for near-term quantum computing lies in optimization tasks, with applications ranging from quantum chemistry and drug discovery to machine learning. In many settings…
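The kind of parameter-shift evaluation developed in this reference can be illustrated briefly: for gates generated by Pauli operators, the exact derivative of an expectation value follows from two shifted circuit evaluations. A minimal PennyLane example on a single RY rotation, where the shift of pi/2 is the standard choice for Pauli generators:

# Parameter-shift rule: d f / d theta = ( f(theta + s) - f(theta - s) ) / (2 sin s).
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def expval_z(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))   # equals cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    return (f(theta + shift) - f(theta - shift)) / (2.0 * np.sin(shift))

theta = 0.3
print(parameter_shift_grad(expval_z, theta))   # analytic gradient from two evaluations
print(-np.sin(theta))                          # closed form d cos(theta)/d theta, for comparison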