• Corpus ID: 246210372

# Evaluating Generalization in Classical and Quantum Generative Models

@article{Gili2022EvaluatingGI,
  title={Evaluating Generalization in Classical and Quantum Generative Models},
  author={Kaitlin Gili and Marta Mauri and Alejandro Perdomo-Ortiz},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.08770}
}
• Published 21 January 2022
• Computer Science
• ArXiv
Defining and accurately measuring generalization in generative models remains an ongoing challenge and a topic of active research within the machine learning community. This is in contrast to discriminative models, where there is a clear definition of generalization, i.e., the model’s classification accuracy when faced with unseen data. In this work, we construct a simple and unambiguous approach to evaluate the generalization capabilities of generative models. Using the sample-based…
## Figures and Tables from this paper

## 11 Citations

• Computer Science
• 2022
This work encodes arbitrary integer-valued equality constraints of the form Ax = b directly into U(1)-symmetric tensor networks (TNs) and leverages their applicability as quantum-inspired generative models to assist in the search of solutions to combinatorial optimization problems.
• History
• 2023
It is empirically found that a variant of the *discrete* architecture, which learns the copula of the probability distribution, outperforms all other methods.
• Computer Science
ArXiv
• 2022
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits, and shows that, for a wide variety of the most practically relevant learning algorithms – including hybrid quantum-classical algorithms – even the generative modelling problem associated with depth d = ω(log(n)) Clifford circuits is hard.
• Computer Science, Physics
ArXiv
• 2022
This paper interprets quantum generative learning models (QGLMs), covering quantum circuit Born machines, quantum generative adversarial networks, quantum Boltzmann machines, and quantum autoencoders, as the quantum extension of classical generative learning models, and explores their intrinsic relations and fundamental differences.
• Computer Science
ArXiv
• 2022
It is shown that non-linearity is a useful resource in quantum generative models, and the QNBM is put forth as a new model with good generative performance and potential for quantum advantage.
• Physics
• 2022
The intrinsically probabilistic nature of quantum mechanics motivates the design of quantum generative learning models (QGLMs). Despite the empirical achievements, the foundations and the…
• Computer Science
• 2022
This work further explores the parameter space of the SMSQQ model and updates the maximum mass of a dark matter singlet to 48.4 TeV, showing that this technique is especially useful in more complex models like the MDGSSM.

## References

SHOWING 1-10 OF 69 REFERENCES

• Computer Science
KDD
• 2019
New design criteria for next-generation hyperparameter optimization software are introduced, including a define-by-run API that allows users to construct the parameter search space dynamically, and an easy-to-setup, versatile architecture that can be deployed for various purposes.
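The define-by-run idea described in this entry can be illustrated with a toy sketch: the search space is declared inside the objective function, at the moment each parameter is sampled, so conditional parameters come for free. The `Trial` class and `optimize` loop below are hypothetical stand-ins implementing plain random search, not the API of any particular library:

```python
import random

class Trial:
    """Toy stand-in for a define-by-run trial object (hypothetical API)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.params = {}

    def suggest_float(self, name, low, high):
        # The search space is built dynamically, at the moment of the call.
        value = self.rng.uniform(low, high)
        self.params[name] = value
        return value

    def suggest_int(self, name, low, high):
        value = self.rng.randint(low, high)
        self.params[name] = value
        return value

def objective(trial):
    # Conditional parameter: `lr` exists only when n_layers > 1, something
    # a define-ahead (static) search-space API cannot express directly.
    n_layers = trial.suggest_int("n_layers", 1, 3)
    loss = (n_layers - 2) ** 2
    if n_layers > 1:
        lr = trial.suggest_float("lr", 1e-4, 1e-1)
        loss += (lr - 0.05) ** 2
    return loss

def optimize(n_trials=50, seed=0):
    """Random search over define-by-run trials; returns (best_loss, params)."""
    best = None
    for i in range(n_trials):
        trial = Trial(seed + i)
        value = objective(trial)
        if best is None or value < best[0]:
            best = (value, trial.params)
    return best
```

In real libraries the sampler is smarter than random search, but the control flow is the same: the objective drives space construction, not the other way around.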
• Computer Science
ICML
• 2022
This paper introduces a 3-dimensional metric that characterizes the fidelity, diversity, and generalization performance of any generative model in a wide variety of application domains, treating generalization as an additional dimension of model performance that quantifies the extent to which a model copies training data.
• Computer Science
NeurIPS
• 2018
A framework to systematically investigate bias and generalization in deep generative models of images is proposed, inspired by experimental methods from cognitive psychology, to characterize when and how existing models generate novel attributes and their combinations.
• Computer Science
Physical Review X
• 2018
This work proposes a generative model using matrix product states, which is a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states, and enjoys efficient learning analogous to the density matrix renormalization group method.
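A matrix product state can act as a generative model by assigning each bitstring x the Born probability p(x) ∝ |ψ(x)|², where ψ(x) is the product of matrices selected by the bits of x. The brute-force sketch below is illustrative only (real implementations sample without enumerating all 2ⁿ strings, and `mps_distribution` is a hypothetical helper name):

```python
import numpy as np

def mps_distribution(tensors):
    """Distribution of an MPS Born machine over n-bit strings.

    tensors: list of arrays of shape (D_left, 2, D_right), with boundary
    bond dimensions equal to 1. Returns {bitstring: probability}.
    """
    n = len(tensors)
    probs = {}
    total = 0.0
    for idx in range(2 ** n):
        bits = [(idx >> k) & 1 for k in range(n)]
        # Contract the chain for this bitstring: a product of D x D matrices.
        mat = tensors[0][:, bits[0], :]
        for t, b in zip(tensors[1:], bits[1:]):
            mat = mat @ t[:, b, :]
        amp = mat[0, 0]  # boundary bond dims are 1, so this is a scalar
        p = abs(amp) ** 2
        probs[tuple(bits)] = p
        total += p
    # Normalize: the raw Born weights sum to <psi|psi>.
    return {x: p / total for x, p in probs.items()}
```

For n sites with bond dimension D, evaluating one amplitude costs O(n·D²) matrix-vector work, which is the efficiency the entry alludes to; only the normalization-by-enumeration here is exponential.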
• Computer Science
• 2021
This work introduces a new family of quantum-enhanced optimizers and demonstrates how quantum machine learning models known as quantum generative models can find lower minima than those found by means of stand-alone state-of-the-art classical solvers.
• Computer Science
npj Quantum Information
• 2019
A quantum circuit learning algorithm that can be used to assist the characterization of quantum devices and to train shallow circuits for generative tasks is proposed, and it is demonstrated that this approach can learn an optimal preparation of Greenberger-Horne-Zeilinger (GHZ) states.
• Computer Science
ArXiv
• 2018
This paper comprehensively investigates existing sample-based evaluation metrics for GANs and observes that kernel Maximum Mean Discrepancy and the 1-Nearest-Neighbor (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space.
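The kernel Maximum Mean Discrepancy mentioned in this entry has a simple unbiased estimator. The sketch below uses a Gaussian kernel with a fixed bandwidth `sigma` (a simplification: bandwidth selection and the feature-space embedding the paper recommends are omitted):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimator of squared kernel MMD with a Gaussian kernel.

    X: (m, d) samples from the first distribution; Y: (n, d) from the second.
    Returns a value near 0 when the two samples come from the same law.
    """
    def k(A, B):
        # Pairwise squared distances via broadcasting, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

Because the estimator is unbiased, it can be slightly negative for identical distributions; a permutation test is the usual way to turn it into the two-sample test the entry refers to.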
• Computer Science
NIPS
• 2017
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score.
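FID compares Gaussian fits to real and generated feature vectors via the Fréchet distance d²((μ₁,C₁),(μ₂,C₂)) = ‖μ₁−μ₂‖² + Tr(C₁ + C₂ − 2(C₁C₂)^{1/2}). A minimal NumPy sketch of that formula follows; it operates on precomputed means and covariances (extracting the Inception features the paper uses is out of scope here), and computes the matrix square root via the PSD identity Tr((C₁C₂)^{1/2}) = Tr((C₁^{1/2} C₂ C₁^{1/2})^{1/2}):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a symmetric positive semi-definite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between two Gaussians, the quantity underlying FID."""
    s1 = psd_sqrt(cov1)
    # Symmetrized form avoids the non-symmetric product cov1 @ cov2.
    middle = psd_sqrt(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2) - 2.0 * np.trace(middle))
```

With identical Gaussians the distance is exactly zero, and shifting one mean by a unit vector under identity covariances gives a distance of 1, which makes the formula easy to sanity-check.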
• Computer Science
ICLR
• 2016
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
• Computer Science
Mach. Learn. Sci. Technol.
• 2020
A comparison of the widely used classical ML models known as restricted Boltzmann machines (RBMs) against a recently proposed quantum model, now known as quantum circuit Born machines (QCBMs), finds that the quantum models seem to have superior performance on typical instances when compared with the canonical training of the RBMs.