Universal Approximations of Invariant Maps by Neural Networks

@article{Yarotsky2018UniversalAO,
  title={Universal Approximations of Invariant Maps by Neural Networks},
  author={Dmitry Yarotsky},
  journal={Constructive Approximation},
  year={2018},
  volume={55},
  pages={407-474}
}
  • D. Yarotsky
  • Published 26 April 2018
  • Mathematics, Computer Science
  • Constructive Approximation
We describe generalizations of the universal approximation theorem for neural networks to maps invariant or equivariant with respect to linear representations of groups. Our goal is to establish network-like computational models that are both invariant/equivariant and provably complete in the sense of their ability to approximate any continuous invariant/equivariant map. Our contribution is three-fold. First, in the general case of compact groups we propose a construction of a complete… 
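To make the general idea of invariant computation concrete, here is a minimal sketch (my own illustration of the generic group-averaging route to invariance, not the paper's construction, which uses polynomial invariants): an arbitrary base network is symmetrized over a small finite group, here 90-degree image rotations, so the averaged map is invariant by construction. All names and shapes are placeholders.

```python
# A minimal sketch (not from the paper): making an arbitrary network invariant
# to a finite group by averaging its outputs over the group orbit.
# The group here is C4 acting on 16x16 images by 90-degree rotation;
# the base network f is a toy MLP.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 16 * 16))
W2 = rng.standard_normal((1, 64))

def f(x):
    """Toy base network: flattened 16x16 image -> scalar."""
    return W2 @ np.tanh(W1 @ x.reshape(-1))

def rotate(x, k):
    """Action of the group element r^k: rotate the image by k*90 degrees."""
    return np.rot90(x, k)

def f_invariant(x, group_size=4):
    """Group-averaged (symmetrized) network: f_bar(x) = (1/|G|) sum_g f(g.x).
    By construction f_bar(rotate(x, k)) == f_bar(x) for every k."""
    return sum(f(rotate(x, k)) for k in range(group_size)) / group_size

x = rng.standard_normal((16, 16))
print(np.allclose(f_invariant(x), f_invariant(rotate(x, 1))))  # True
```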
Universal Invariant and Equivariant Graph Neural Networks
TLDR
The results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
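As an illustration of how a single parameter set can serve graphs of every size, here is a minimal sketch (an assumed message-passing layer with a sum readout, not the architecture studied in the cited paper; all weight names are placeholders):

```python
# A minimal sketch: one set of weights defines a message-passing layer plus a
# permutation-invariant sum readout that applies unchanged to graphs of any size.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 3, 8
W_self = rng.standard_normal((d_in, d_hid))
W_neigh = rng.standard_normal((d_in, d_hid))
w_out = rng.standard_normal(d_hid)

def gnn_readout(A, X):
    """A: (n, n) adjacency matrix, X: (n, d_in) node features, any n.
    Returns a single graph-level scalar."""
    H = np.tanh(X @ W_self + A @ X @ W_neigh)  # message passing, shared weights
    return w_out @ H.sum(axis=0)               # permutation-invariant readout

# The same parameters handle a 4-node and a 7-node graph.
for n in (4, 7):
    A = (rng.random((n, n)) < 0.5).astype(float)
    A = np.triu(A, 1); A = A + A.T             # undirected, no self-loops
    X = rng.standard_normal((n, d_in))
    print(n, gnn_readout(A, X))
```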
Universal Approximation Theorem for Equivariant Maps by Group CNNs
TLDR
This paper provides a unified method to obtain universal approximation theorems for equivariant maps by CNNs in various settings and can handle non-linear equivariant maps between infinite-dimensional spaces for non-compact groups.
Universal approximations of permutation invariant/equivariant functions by deep neural networks
TLDR
It is concluded that although the invariant/equivariant models have exponentially fewer free parameters than the usual models, they can still approximate invariant/equivariant functions to arbitrary accuracy.
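A minimal sketch of the parameter-sharing idea behind this entry (my own illustration in the standard sum-decomposition form f(X) = rho(sum_i phi(x_i)), not the cited paper's construction): the element-wise encoder and the readout are shared across all set elements, so the parameter count does not grow like a dense map on the concatenated input.

```python
# A minimal sketch of a permutation-invariant deep model via shared weights
# and sum pooling; weight names and sizes are placeholders.
import numpy as np

rng = np.random.default_rng(1)
d, h = 2, 16
W_phi = rng.standard_normal((d, h))
W_rho = rng.standard_normal(h)

def f_set(X):
    """X: (n, d) set of n elements; output invariant to row permutations."""
    phi = np.tanh(X @ W_phi)        # shared per-element encoder
    pooled = phi.sum(axis=0)        # permutation-invariant pooling
    return W_rho @ np.tanh(pooled)  # readout

X = rng.standard_normal((5, d))
perm = rng.permutation(5)
print(np.allclose(f_set(X), f_set(X[perm])))  # True
```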
On the Universality of Invariant Networks
TLDR
This paper concludes by proving a necessary condition for the universality of G-invariant networks that incorporate only first-order tensors, which are of special interest due to their practical value.
Scalars are universal: Equivariant machine learning, structured like classical physics
TLDR
It is shown that it is simple to parameterize universally approximating polynomial functions that are equivariant under these symmetries, or under the Euclidean, Lorentz, and Poincaré groups, at any dimensionality d.
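A minimal sketch of that recipe (my paraphrase: O(d)-invariant features are built from pairwise inner products, and an equivariant vector output is a combination of the input vectors with coefficients depending only on those scalars; the small MLP on the Gram matrix is a hypothetical stand-in for the learned coefficient functions):

```python
# A minimal sketch of "scalars are universal": invariants from inner products,
# equivariant outputs as invariant-weighted combinations of the input vectors.
import numpy as np

rng = np.random.default_rng(2)
n, d, h = 3, 3, 8
W1 = rng.standard_normal((n * n, h))
W2 = rng.standard_normal((h, n))

def equivariant_vector(V):
    """V: (n, d) input vectors. Returns one d-vector that rotates with V."""
    gram = V @ V.T                               # invariant scalars <v_i, v_j>
    coeff = np.tanh(gram.reshape(-1) @ W1) @ W2  # invariant coefficients c_i
    return coeff @ V                             # equivariant output sum_i c_i v_i

V = rng.standard_normal((n, d))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # a random orthogonal matrix
print(np.allclose(equivariant_vector(V @ Q.T), equivariant_vector(V) @ Q.T))  # True
```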
A Simple Proof of the Universality of Invariant/Equivariant Graph Neural Networks
TLDR
This work considers a restricted intermediate hypothetical model, named the Graph Homomorphism Model, to reach the universality conclusions, including an open case for higher-order output, and finds that the proposed technique not only leads to simple proofs of the universality properties but also gives a natural explanation for the tensorization of the previously studied models.
Equivariant and Invariant Reynolds Networks
TLDR
This paper constructs learning models based on the reductive Reynolds operator, called equivariant and invariant Reynolds networks (ReyNets), and proves that they have the universal approximation property.
Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?
TLDR
It is found that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action, and it is shown how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling.
On Universal Equivariant Set Networks
TLDR
It is proved that PointNet is not equivariant-universal and that adding a single linear transmission layer makes it universal; the resulting architecture, called PointNetST, is argued to be the simplest permutation-equivariant universal model known to date.
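A minimal sketch of a permutation-equivariant layer with such a pooled "transmission" term (assumed form, matching the standard DeepSets-style equivariant linear layer rather than the exact PointNetST definition; weight names are placeholders): a per-point linear map plus a mean that is broadcast back to every point.

```python
# A minimal sketch of a permutation-equivariant point-cloud layer:
# per-point map + broadcast of a pooled (order-free) summary.
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out = 3, 5
W_point = rng.standard_normal((d_in, d_out))
W_trans = rng.standard_normal((d_in, d_out))

def equivariant_layer(X):
    """X: (n, d_in) point cloud -> (n, d_out), permutation-equivariant."""
    pooled = X.mean(axis=0, keepdims=True)     # (1, d_in), order-free summary
    return X @ W_point + pooled @ W_trans      # broadcast "transmission" term

X = rng.standard_normal((6, d_in))
perm = rng.permutation(6)
print(np.allclose(equivariant_layer(X)[perm], equivariant_layer(X[perm])))  # True
```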
Universal Approximations of Permutation Invariant/Equivariant Functions by Deep Neural Networks
  • Mathematics, Computer Science
  • 2019
TLDR
A theory of the relationship between G-invariant/equivariant functions and deep neural networks for a finite group G is developed, and it is shown that this universal approximator has exponentially fewer free parameters than usual models.

References

SHOWING 1-10 OF 50 REFERENCES
Rotation Equivariant Vector Field Networks
TLDR
Rotation Equivariant Vector Field Networks (RotEqNet), a convolutional neural network architecture encoding rotation equivariance, invariance, and covariance, are proposed, and a modified convolution operator relying on this representation is developed to obtain deep architectures.
Deep Symmetry Networks
TLDR
Deep symmetry networks (symnets) are introduced: a generalization of convnets that forms feature maps over arbitrary symmetry groups and uses kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension.
Zeros of Equivariant Vector Fields: Algorithms for an Invariant Approach
  • P. Worfolk
  • Mathematics, Computer Science
    J. Symb. Comput.
  • 1994
TLDR
A computationally effective algorithm to solve for the zeros of a polynomial vector field equivariant with respect to a finite subgroup of O(n) is presented, and it is proved that the module of equivariants is Cohen-Macaulay.
Convolutional Rectifier Networks as Generalized Tensor Decompositions
TLDR
Developing effective methods for training convolutional arithmetic circuits, which have so far been overlooked by practitioners, may give rise to a deep learning architecture that is provably superior to convolutional rectifier networks.
Polar Transformer Networks
TLDR
PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations; it is a network invariant to translation and equivariant to both rotation and scale, and is extensible to 3D, as demonstrated through the Cylindrical Transformer Network.
Group Equivariant Convolutional Networks
TLDR
Group equivariant Convolutional Neural Networks (G-CNNs) are introduced: a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries and achieves state-of-the-art results on CIFAR10 and rotated MNIST.
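A minimal sketch of the first, "lifting" layer of such a group convolution for the cyclic rotation group C4 (my own illustration, not the paper's code): the image is correlated with the four rotated copies of one filter, so rotating the input rotates each output plane and cycles the group axis rather than changing the features arbitrarily.

```python
# A minimal sketch of a C4 lifting group convolution via rotated filter copies.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(4)
base_filter = rng.standard_normal((3, 3))

def c4_lifting_conv(img):
    """img: (H, W) -> (4, H', W'); plane k uses the filter rotated by k*90 deg."""
    return np.stack([correlate2d(img, np.rot90(base_filter, k), mode='valid')
                     for k in range(4)])

img = rng.standard_normal((8, 8))
out = c4_lifting_conv(img)
out_rot = c4_lifting_conv(np.rot90(img))
# Equivariance check: rotating the input rotates each plane and cycles the
# group axis by one step.
print(np.allclose(np.rot90(out, axes=(1, 2))[[3, 0, 1, 2]], out_rot))  # True
```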
Warped Convolutions: Efficient Invariance to Spatial Transformations
TLDR
This work presents a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy, consisting of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes.
Exploiting Cyclic Symmetry in Convolutional Neural Networks
TLDR
This work introduces four operations which can be inserted into neural network models as layers, which can be combined to make these models partially equivariant to rotations, and which enable parameter sharing across different orientations.
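A minimal sketch in the same spirit (my own reduction of the four operations to a single rotate-share-pool step, not the paper's layers): a shared feature extractor is evaluated on the four cyclically rotated copies of the input, and pooling over the orbit yields features invariant to 90-degree rotations, while stacking them instead would keep an equivariant representation.

```python
# A minimal sketch of orbit pooling over the cyclic rotation group C4.
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((8, 8 * 8))

def features(img):
    """Toy shared feature extractor: flattened 8x8 image -> 8 features."""
    return np.tanh(W @ img.reshape(-1))

def cyclic_pool(img):
    stack = np.stack([features(np.rot90(img, k)) for k in range(4)])  # (4, 8)
    return stack.max(axis=0)   # pool over the C4 orbit -> rotation-invariant

img = rng.standard_normal((8, 8))
print(np.allclose(cyclic_pool(img), cyclic_pool(np.rot90(img))))  # True
```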
The classical groups : their invariants and representations
In this renowned volume, Hermann Weyl discusses the symmetric, full linear, orthogonal, and symplectic groups and determines their different invariants and representations. Using basic concepts from