Corpus ID: 246210372

Evaluating Generalization in Classical and Quantum Generative Models

Kaitlin Gili, Marta Mauri, Alejandro Perdomo-Ortiz
Defining and accurately measuring generalization in generative models remains an ongoing challenge and a topic of active research within the machine learning community. This is in contrast to discriminative models, where there is a clear definition of generalization, i.e., the model’s classification accuracy when faced with unseen data. In this work, we construct a simple and unambiguous approach to evaluate the generalization capabilities of generative models. Using the sample-based… 
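The sample-based idea can be illustrated with a toy novelty check for discrete generative models: count how many queried samples are both unseen (absent from the training set) and valid (inside the target solution space). The function names, metric names, and toy data below are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sample-based generalization check for a discrete generative
# model. "exploration" and "fidelity" are illustrative names, not the
# paper's exact metrics.

def generalization_stats(queries, train_set, solution_space):
    train = set(train_set)
    valid = set(solution_space)
    unseen = [q for q in queries if q not in train]           # novel samples
    unseen_valid = [q for q in unseen if q in valid]          # novel AND valid
    return {
        "exploration": len(unseen) / len(queries),
        "fidelity": len(unseen_valid) / max(len(unseen), 1),
    }

# toy example: solution space = even 3-bit strings, model trained on half
space = ["000", "010", "100", "110"]
stats = generalization_stats(
    queries=["000", "110", "100", "111"],
    train_set=["000", "010"],
    solution_space=space,
)
# stats["exploration"] == 0.75 (3 of 4 queries are novel)
```

A model that only memorizes its training set would score zero exploration; one that explores but emits invalid bitstrings would score low fidelity.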

Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization

This work encodes arbitrary integer-valued equality constraints of the form A x⃗ = b⃗ directly into U(1)-symmetric tensor networks (TNs) and leverages their applicability as quantum-inspired generative models to assist in the search for solutions to combinatorial optimization problems.

A performance characterization of quantum generative models

It is empirically found that a variant of the discrete architecture, which learns the copula of the probability distribution, outperforms all other methods.

Introducing nonlinear activations into quantum generative models


A single T-gate makes distribution learning hard

This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits, and shows that, for a wide variety of the most practically relevant learning algorithms – including hybrid quantum-classical algorithms – even the generative modelling problem associated with depth d = ω(log(n)) Clifford circuits is hard.

Recent Advances for Quantum Neural Networks in Generative Learning

This paper interprets quantum generative learning models (QGLMs) – covering quantum circuit Born machines, quantum generative adversarial networks, quantum Boltzmann machines, and quantum autoencoders – as the quantum extension of classical generative learning models, and explores their intrinsic relations and fundamental differences.

Introducing Non-Linearity into Quantum Generative Models

It is shown that non-linearity is a useful resource in quantum generative models, and the quantum neuron Born machine (QNBM) is put forth as a new model with good generative performance and potential for quantum advantage.

Power of Quantum Generative Learning

The intrinsic probabilistic nature of quantum mechanics invokes endeavors of designing quantum generative learning models (QGLMs). Despite the empirical achievements, the foundations and the…

Active learning BSM parameter spaces

This work further explores the parameter space of the SMSQQ model, updating the maximum mass of a dark-matter singlet to 48.4 TeV, and shows that this technique is especially useful in more complex models like the MDGSSM.

Optuna: A Next-generation Hyperparameter Optimization Framework

New design criteria for next-generation hyperparameter optimization software are introduced, including a define-by-run API that allows users to construct the parameter search space dynamically, and an easy-to-setup, versatile architecture that can be deployed for various purposes.
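The define-by-run idea can be sketched in pure Python: the search space is built while the objective executes, so it can be conditional on earlier choices. The `Trial` class below only mimics the shape of Optuna's API (`suggest_float`, `suggest_categorical`) with naive random search; it is an illustrative assumption, not Optuna's actual implementation.

```python
import random

# Minimal pure-Python sketch of a define-by-run search: the parameter space
# is declared dynamically inside the objective, not up front.

class Trial:
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high):
        self.params[name] = random.uniform(low, high)
        return self.params[name]

    def suggest_categorical(self, name, choices):
        self.params[name] = random.choice(choices)
        return self.params[name]

def objective(trial):
    # conditional space: "lr" is only sampled when the optimizer is "sgd"
    opt = trial.suggest_categorical("optimizer", ["sgd", "adam"])
    lr = trial.suggest_float("lr", 1e-4, 1e-1) if opt == "sgd" else 1e-3
    return (lr - 0.01) ** 2  # toy loss, minimized at lr = 0.01

def optimize(objective, n_trials=50):
    best, best_val = None, float("inf")
    for _ in range(n_trials):
        t = Trial()
        val = objective(t)
        if val < best_val:
            best, best_val = t.params, val
    return best, best_val

random.seed(0)
best_params, best_loss = optimize(objective)
```

A fixed define-and-run search space could not express the conditional `lr` parameter without enumerating every branch in advance.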

How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models

This paper introduces a three-dimensional metric that characterizes the fidelity, diversity, and generalization performance of any generative model across a wide variety of application domains; generalization is introduced as an additional dimension of model performance, quantifying the extent to which a model copies its training data.
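Sample-level fidelity and coverage can be sketched with a simple nearest-neighbour rule in one dimension: a generated point is "faithful" if it lies near some real point, and a real point is "covered" if some generated point lies near it. The radius-based rule and function names below are illustrative assumptions, not the paper's exact alpha-precision/beta-recall definitions.

```python
# Toy sample-level fidelity/coverage for 1-D data; a fixed radius r plays
# the role of the support estimate in the paper's metrics (illustrative).

def fidelity(generated, real, r):
    # fraction of generated points within r of some real point
    return sum(any(abs(g - x) <= r for x in real) for g in generated) / len(generated)

def coverage(generated, real, r):
    # fraction of real points with a generated point within r
    return sum(any(abs(x - g) <= r for g in generated) for x in real) / len(real)

real = [0.0, 1.0, 2.0, 3.0]
gen = [0.1, 0.9, 7.0, 2.1]
f = fidelity(gen, real, r=0.2)  # 0.75: the outlier 7.0 is not faithful
c = coverage(gen, real, r=0.2)  # 0.75: the real point 3.0 is not covered
```

The two numbers fail in complementary ways: a model that memorizes one training point scores perfect fidelity but poor coverage, and vice versa for a model that sprays samples everywhere.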

Bias and Generalization in Deep Generative Models: An Empirical Study

A framework to systematically investigate bias and generalization in deep generative models of images is proposed and inspired by experimental methods from cognitive psychology to characterize when and how existing models generate novel attributes and their combinations.

Unsupervised Generative Modeling Using Matrix Product States

This work proposes a generative model using matrix product states, which is a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states, and enjoys efficient learning analogous to the density matrix renormalization group method.
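The Born-rule sampling idea can be shown with a toy two-site tensor train: the probability of a bitstring is the squared amplitude of the contracted site tensors. The tensors below are hand-picked (they encode perfectly correlated bits, a GHZ-like distribution) rather than learned by the paper's DMRG-style sweeps; all values are illustrative assumptions.

```python
# Toy matrix-product-state Born machine over two bits: p(x) is the squared
# contracted amplitude, normalized over all bitstrings.

# site 1: for each bit value, a length-2 row vector (bond dimension 2)
A1 = {0: [1.0, 0.0], 1: [0.0, 1.0]}
# site 2: for each bit value, a length-2 column vector
A2 = {0: [1.0, 0.0], 1: [0.0, 1.0]}

def amplitude(bits):
    v, w = A1[bits[0]], A2[bits[1]]
    return sum(a * b for a, b in zip(v, w))  # contract over the bond index

def probability(bits):
    z = sum(amplitude((x, y)) ** 2 for x in (0, 1) for y in (0, 1))
    return amplitude(bits) ** 2 / z

p00 = probability((0, 0))  # 0.5: only correlated bitstrings are supported
p01 = probability((0, 1))  # 0.0
```

The bond dimension (here 2) controls how much correlation the model can express; bond dimension 1 would reduce it to a product of independent bits.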

Enhancing Combinatorial Optimization with Quantum Generative Models

This work introduces a new family of quantum-enhanced optimizers and demonstrates how quantum machine learning models known as quantum generative models can find lower minima than stand-alone state-of-the-art classical solvers.

A generative modeling approach for benchmarking and training shallow quantum circuits

A quantum circuit learning algorithm is proposed that can assist in the characterization of quantum devices and train shallow circuits for generative tasks, and it is demonstrated that this approach can learn an optimal preparation of Greenberger-Horne-Zeilinger (GHZ) states.

An empirical study on evaluation metrics of generative adversarial networks

This paper comprehensively investigates existing sample-based evaluation metrics for GANs and observes that kernel Maximum Mean Discrepancy and the 1-Nearest-Neighbor (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space.
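Kernel MMD compares two sample sets by averaging pairwise kernel evaluations within and across the sets. The sketch below uses raw 1-D values and the biased V-statistic form for brevity; a real GAN evaluation would, as the paper notes, compute distances in a suitable feature space, and the RBF bandwidth here is an arbitrary illustrative choice.

```python
import math

# Biased estimate of squared kernel MMD with an RBF kernel for 1-D samples.

def rbf(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma**2))

def mmd2(xs, ys, sigma=1.0):
    kxx = sum(rbf(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

same = mmd2([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])     # identical samples -> 0
diff = mmd2([0.0, 1.0, 2.0], [10.0, 11.0, 12.0])  # disjoint samples -> large
```

Unlike likelihood-based criteria, this two-sample statistic needs only samples from the model, which is why it applies directly to GANs.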

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
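For two univariate Gaussians the Fréchet distance has a simple closed form, d² = (μ₁ − μ₂)² + (σ₁ − σ₂)², which makes the metric easy to illustrate; the actual FID applies the multivariate version of this formula to the means and covariances of Inception-v3 features, which this sketch deliberately omits.

```python
import math

# 1-D special case of the Frechet distance between two Gaussians,
# estimated from samples (illustrative; real FID uses Inception features).

def frechet_1d(mu1, s1, mu2, s2):
    return (mu1 - mu2) ** 2 + (s1 - s2) ** 2

def gauss_stats(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, math.sqrt(var)

m1, s1 = gauss_stats([0.0, 2.0])  # mu = 1, sigma = 1
m2, s2 = gauss_stats([3.0, 5.0])  # mu = 4, sigma = 1
d2 = frechet_1d(m1, s1, m2, s2)   # (1 - 4)^2 + 0 = 9.0
```

Because the distance depends on both means and spreads, it penalizes a model that matches the average image but collapses diversity, unlike a pure mean-matching score.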

A note on the evaluation of generative models

This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.

Classical versus quantum models in machine learning: insights from a finance application

A comparison of the widely used classical ML models known as restricted Boltzmann machines (RBMs) against a recently proposed quantum model, now known as quantum circuit Born machines (QCBMs), finds that the quantum models seem to have superior performance on typical instances when compared with the canonical training of the RBMs.