Corpus ID: 238419267

Universal Approximation Under Constraints is Possible with Transformers

Anastasis Kratsios, Behnoosh Zamanlooy, Tianlin Liu, Ivan Dokmanić
Many practical problems need the output of a machine learning model to satisfy a set of constraints, K. There are, however, no known guarantees that classical neural networks can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any convex or non-convex compact set K and any continuous function f : R^n → K, there is a probabilistic transformer F̂ whose randomized outputs all lie in K and whose expected output uniformly approximates f.
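
The paper gives the actual construction; the snippet below is only a minimal sketch of the idea of constrained randomized outputs, under illustrative assumptions: a toy non-convex set K (the unit circle), a small randomly initialized scoring network standing in for the transformer, and an untrained output distribution. Every sampled output lies in K by construction, and the expected output is the quantity one would train to approximate f.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative non-convex constraint set K: the unit circle in R^2,
# represented by a finite set of anchor points lying exactly on it.
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (64, 2), all in K

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# A tiny randomly initialized scoring network (standing in for a transformer):
# it maps an input x in R to logits over the anchor points.
W1, b1 = rng.normal(size=(1, 32)), rng.normal(size=32)
W2, b2 = rng.normal(size=(32, len(anchors))), rng.normal(size=len(anchors))

def probabilistic_output(x):
    """Sample one randomized output; it lies in K by construction."""
    h = np.tanh(np.atleast_2d(x) @ W1 + b1)
    probs = softmax(h @ W2 + b2)[0]
    idx = rng.choice(len(anchors), p=probs)
    return anchors[idx], probs

sample, probs = probabilistic_output(np.array([0.3]))
expected = probs @ anchors  # expected output: the quantity trained to approximate f(x)
print("randomized output in K:", sample)
print("expected output:", expected)
```

Training would adjust the scoring network so that the expected output tracks f while each realized sample still respects the constraint.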

References

Universal Approximation with Deep Narrow Networks
Shows that the classical Universal Approximation Theorem, stated for networks of arbitrary width and bounded depth, has a counterpart for deep, narrow networks, covering nowhere-differentiable activation functions and density on noncompact domains with respect to the $L^p$-norm, and that the width may be reduced to just $n + m + 1$ for `most' activation functions.
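
As a rough illustration of the topology this result concerns (a hypothetical, untrained example; the width formula is taken from the summary above, everything else is an assumption), here is a deep ReLU network of width n + m + 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, depth = 3, 2, 8     # input dim, output dim, number of hidden layers (illustrative)
width = n + m + 1         # the narrow width from the summary above

# Random, untrained weights: this only shows the shape of a deep narrow ReLU network.
layers = [(rng.normal(size=(n, width)), rng.normal(size=width))]
layers += [(rng.normal(size=(width, width)), rng.normal(size=width))
           for _ in range(depth - 1)]
readout = (rng.normal(size=(width, m)), rng.normal(size=m))

def deep_narrow_net(x):
    h = x
    for W, b in layers:
        h = np.maximum(h @ W + b, 0.0)  # ReLU hidden layers, all of width n + m + 1
    W, b = readout
    return h @ W + b                    # affine readout to R^m

print(deep_narrow_net(rng.normal(size=(5, n))).shape)  # -> (5, 2)
```
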
Error bounds for approximations with deep ReLU neural networks in $W^{s, p}$ norms
This work constructs, based on a calculus of ReLU networks, neural networks with ReLU activation functions that achieve explicit approximation rates in $W^{s,p}$ norms, and establishes lower bounds for the approximation of Sobolev-regular function classes by ReLU networks.
Are Transformers universal approximators of sequence-to-sequence functions?
It is established that Transformer models are universal approximators of continuous permutation-equivariant sequence-to-sequence functions with compact support, which is quite surprising given the degree of parameter sharing in these models.
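
A quick numerical check of the symmetry class in question, permutation equivariance, for a single self-attention head without positional encodings; the dimensions, weights, and names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
d, seq_len = 4, 6
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """One attention head, no positional encodings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))  # row-wise attention weights
    return A @ V

X = rng.normal(size=(seq_len, d))
perm = rng.permutation(seq_len)

# Permuting the input tokens permutes the output tokens in exactly the same way.
assert np.allclose(self_attention(X[perm]), self_attention(X)[perm])
print("self-attention is permutation equivariant on this example")
```
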
What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
A proper mathematical definition of local optimality for this sequential setting, called local minimax, is proposed, and its properties and existence results are presented.
Extending Lipschitz functions via random metric partitions
Many classical problems in geometry and analysis involve the gluing together of local information to produce a coherent global picture. Inevitably, the difficulty of such a procedure lies at the…
Linear extension operators between spaces of Lipschitz maps and optimal transport
Motivated by the notion of $K$-gentle partition of unity introduced in [J. R. Lee and A. Naor, Extending Lipschitz functions via random metric partitions, Invent. Math.]…
Equivalence of approximation by convolutional neural networks and fully-connected networks
This paper establishes a connection between both network architectures and shows that all upper and lower bounds on approximation rates of fully-connected neural networks for functions from an arbitrary function class $\mathcal{C}$ translate to essentially the same bounds for convolutional neural networks.
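
One concrete reason such a translation is plausible: a convolutional layer is a fully-connected layer whose weight matrix is constrained to a Toeplitz structure. The snippet below is a hedged illustration of that fact, not the paper's construction; the signal length, kernel size, and 'valid' padding are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(size=10)
kernel = rng.normal(size=3)

# 'valid' 1-D convolution: output length = 10 - 3 + 1 = 8.
conv_out = np.convolve(signal, kernel, mode="valid")

# The same map as a fully-connected layer whose weight matrix is Toeplitz.
out_len = len(signal) - len(kernel) + 1
W = np.zeros((out_len, len(signal)))
for i in range(out_len):
    W[i, i:i + len(kernel)] = kernel[::-1]  # np.convolve flips the kernel

assert np.allclose(W @ signal, conv_out)
print("convolution reproduced by a structured fully-connected layer")
```
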
Optimal Transport: Fast Probabilistic Approximation with Exact Solvers
Proposes a simple subsampling scheme for fast randomized approximate computation of optimal transport distances, based on averaging the exact distances between empirical measures generated from independent samples of the original measures; the scheme can be tuned towards higher accuracy or shorter computation times.
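
A minimal sketch of the subsampling idea, assuming 1-D samples so that SciPy's exact 1-D Wasserstein solver can be used; the subsample size and number of repeats are illustrative knobs that trade accuracy against runtime, as in the summary above:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(4)

# Two large 1-D samples standing in for the original measures (illustrative).
x = rng.normal(loc=0.0, scale=1.0, size=20000)
y = rng.normal(loc=0.5, scale=1.2, size=20000)

def subsampled_w1(x, y, subsample_size=500, repeats=20):
    """Average exact W1 distances between independent subsamples."""
    estimates = []
    for _ in range(repeats):
        xs = rng.choice(x, size=subsample_size, replace=False)
        ys = rng.choice(y, size=subsample_size, replace=False)
        estimates.append(wasserstein_distance(xs, ys))  # exact 1-D solver
    return float(np.mean(estimates))

print("full-sample W1:", wasserstein_distance(x, y))
print("subsampled W1: ", subsampled_w1(x, y))
```
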
Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization
We provide a first-order oracle complexity lower bound for finding stationary points of min-max optimization problems whose objective function is smooth, nonconvex in the minimization variable, and strongly concave in the maximization variable.
Low-Rank plus Sparse Decomposition of Covariance Matrices using Neural Network Parametrization
This article revisits the problem of decomposing a positive semidefinite matrix as the sum of a matrix with a given rank and a sparse matrix, and deduces the convergence rate of the proposed method to a local optimum from the Lipschitz smoothness of the loss function.
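
A generic gradient-descent sketch of such a low-rank-plus-sparse parametrization (U Uᵀ for the rank-r part, an L1-penalized matrix S for the sparse part). This is an assumption-laden illustration, not the article's algorithm or parametrization; the penalty weight, step size, and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 20, 3

# Synthetic ground truth: a low-rank part plus a sparse (here diagonal) part.
U_true = rng.normal(size=(n, r))
S_true = np.diag(rng.uniform(0.5, 1.0, size=n))
Sigma = U_true @ U_true.T + S_true

# Parametrize the low-rank part as U @ U.T (rank <= r) and the sparse part as S,
# then run gradient descent on ||Sigma - U U^T - S||_F^2 + lam * ||S||_1.
U = rng.normal(scale=0.1, size=(n, r))
S = np.zeros((n, n))
lam, lr = 0.1, 1e-3  # illustrative penalty weight and step size

for step in range(5000):
    R = U @ U.T + S - Sigma              # residual
    grad_U = 4.0 * R @ U                 # gradient of ||R||_F^2 in U (R symmetric)
    grad_S = 2.0 * R + lam * np.sign(S)  # gradient plus L1 subgradient
    U -= lr * grad_U
    S -= lr * grad_S

print("residual norm:", np.linalg.norm(U @ U.T + S - Sigma))
```

In practice the penalty weight (or an explicit support constraint on S) determines how cleanly the low-rank and sparse parts separate.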