Corpus ID: 222140724

On the Universality of Rotation Equivariant Point Cloud Networks

Nadav Dym and Haggai Maron
Learning functions on point clouds has applications in many fields, including computer vision, computer graphics, physics, and chemistry. Recently, there has been a growing interest in neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation. In this paper, we present a first study of the approximation power of these architectures. We first derive two sufficient conditions for an equivariant… 

ZZ-Net: A Universal Rotation Equivariant Architecture for 2D Point Clouds

A novel neural network architecture is proposed for processing 2D point clouds, and its universality is proved: it can approximate any continuous rotation-equivariant and permutation-invariant function.

A Simple and Universal Rotation Equivariant Point-cloud Network

A much simpler architecture is suggested; it is proved to enjoy the same universality guarantees, and its performance on ModelNet40 is evaluated.

A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups

This work provides a completely general algorithm for solving for the equivariant layers of matrix groups, and constructs multilayer perceptrons equivariant to multiple groups that have never been tackled before, including the Rubik’s cube group.

Equivariant Networks for Pixelized Spheres

This paper shows how to model the interplay between the two levels of symmetry transformations using ideas from group theory, identifies the equivariant linear maps, and introduces equivariant padding that respects these symmetries.

Vector Neurons: A General Framework for SO(3)-Equivariant Networks

Invariance and equivariance to the rotation group have been widely discussed in the 3D deep learning community for point clouds. Yet most proposed methods either use complex mathematical tools that…

SE(3) Equivariant Graph Neural Networks with Complete Local Frames

Inspired by differential geometry and physics, equivariant local complete frames are introduced into graph neural networks, so that tensor information at given orders can be projected onto the frames; the method is computationally efficient.

Frame Averaging for Invariant and Equivariant Network Design

Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network…
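
The symmetrization idea underlying frame averaging can be illustrated with a toy example: averaging an arbitrary function over a finite rotation group makes it invariant. This sketch uses the cyclic group C4 rather than the paper's input-dependent frames; all names are illustrative.

```python
import numpy as np

def rot2d(theta):
    """2x2 rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def invariant_by_averaging(f, x, group):
    """Symmetrize an arbitrary function f by averaging its values over
    all group elements applied to the input point cloud x (n, 2)."""
    return np.mean([f(x @ g.T) for g in group], axis=0)

# the cyclic group C4: planar rotations by multiples of 90 degrees
c4 = [rot2d(k * np.pi / 2) for k in range(4)]

def f(x):
    # an arbitrary, deliberately non-invariant function of the points
    return np.sum(x[:, 0] ** 3 + x[:, 1])

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 2))
a = invariant_by_averaging(f, x, c4)                       # original cloud
b = invariant_by_averaging(f, x @ rot2d(np.pi / 2).T, c4)  # rotated cloud
# a and b agree up to floating point: the average is C4-invariant
```

Frame averaging sharpens this idea by averaging over a small, input-dependent frame instead of an entire group, which keeps the cost manageable for continuous groups.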

Scalars are universal: Equivariant machine learning, structured like classical physics

It is shown that it is simple to parameterize universally approximating polynomial functions that are equivariant under these symmetries, or under the Euclidean, Lorentz, and Poincaré groups, in any dimension d.
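
The core observation can be reproduced in a few lines: pairwise inner products are unchanged when every point is transformed by the same orthogonal matrix, so any function built from them is O(d)-invariant. A minimal sketch (function names are illustrative):

```python
import numpy as np

def invariant_scalars(x):
    """O(d)-invariant features of a point set x (n, d): the pairwise
    inner products (Gram matrix). Replacing x by x Q^T for orthogonal Q
    leaves x Q^T Q x^T = x x^T unchanged."""
    return (x @ x.T).ravel()

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random orthogonal matrix
s_orig = invariant_scalars(x)
s_rot = invariant_scalars(x @ q.T)            # same points, globally rotated
# s_orig and s_rot agree up to floating point
```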

Barron’s Theorem for Equivariant Networks

This work demonstrates that for some commonly used groups, there exist smooth subclasses of functions, analogous to Barron classes of functions, which can be efficiently approximated using invariant architectures, thereby providing approximation results that are not only invariant but also efficient.

Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces

We introduce a unified framework for group equivariant networks on homogeneous spaces derived from a Fourier perspective. We consider tensor-valued feature fields, before and after a convolutional…



Discrete Rotation Equivariance for Point Cloud Recognition

A deep learning architecture that achieves discrete SO(2)/SO(3) rotation equivariance for point cloud recognition is proposed; it can be directly applied to any existing point-cloud-based network, resulting in significant performance improvements on rotated inputs.

Tensor Field Networks: Rotation- and Translation-Equivariant Neural Networks for 3D Point Clouds

Tensor field neural networks are introduced, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer, and demonstrate the capabilities of tensor field networks with tasks in geometry, physics, and chemistry.

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
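
The permutation invariance comes from applying a shared network to every point and then pooling with a symmetric function such as max. A minimal NumPy sketch of that design idea (not the actual PointNet implementation; weights here are random placeholders):

```python
import numpy as np

def set_features(points, w1, w2):
    """Shared per-point MLP followed by max pooling.

    points: (n, 3); w1: (3, h); w2: (h, k). Because max is taken over
    the point axis, the output is invariant to input point order."""
    h = np.maximum(points @ w1, 0.0)  # shared layer + ReLU, applied per point
    f = np.maximum(h @ w2, 0.0)       # second shared layer + ReLU
    return f.max(axis=0)              # symmetric (order-independent) pooling

rng = np.random.default_rng(0)
pts = rng.normal(size=(16, 3))
w1, w2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 4))
out_a = set_features(pts, w1, w2)
out_b = set_features(pts[rng.permutation(16)], w1, w2)
# out_a and out_b are identical: shuffling the points changes nothing
```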

Quaternion Equivariant Capsule Networks for 3D Point Clouds

A 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points, and builds a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space.

3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data

The experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.

Invariant and Equivariant Graph Networks

This paper provides a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and shows that their dimension, in case of edge-value graph data, is 2 and 15, respectively.
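
For node-valued set data the analogous characterization is the familiar two-parameter DeepSets layer; the edge-valued case in the paper yields a 15-dimensional basis instead. A small sketch of the node-valued layer and its equivariance (illustrative only, not the paper's construction):

```python
import numpy as np

def equivariant_linear(x, a, b):
    """Permutation-equivariant linear maps on node-valued set data
    x (n, d) are spanned by the identity and the all-ones averaging
    map: a * x + b * mean(x)."""
    return a * x + b * x.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
perm = rng.permutation(6)
y_then_perm = equivariant_linear(x, 2.0, -1.0)[perm]
perm_then_y = equivariant_linear(x[perm], 2.0, -1.0)
# equivariance: permuting input rows permutes output rows the same way
```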

Effective Rotation-Invariant Point CNN with Spherical Harmonics Kernels

This work demonstrates how rotation invariance can be injected into a recently proposed point-based PCNN architecture, on all layers of the network, achieving accurate results on challenging shape analysis tasks without the data augmentation typically employed by non-invariant approaches.

Dynamic Graph CNN for Learning on Point Clouds

This work proposes EdgeConv, a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation, which acts on graphs dynamically computed in each layer of the network.

Point convolutional neural networks by extension operators

Evaluations of PCNN on three central point cloud learning benchmarks convincingly outperform competing point cloud learning methods, as well as the vast majority of methods working with more informative shape representations such as surfaces and/or normals.

Learning SO(3) Equivariant Representations with Spherical CNNs

It is shown that networks with much lower capacity and without requiring data augmentation can exhibit performance comparable to the state of the art in standard 3D shape retrieval and classification benchmarks.