Geometric Deep Learning: Going beyond Euclidean data

@article{Bronstein2017GeometricDL,
  title={Geometric Deep Learning: Going beyond Euclidean data},
  author={Michael M. Bronstein and Joan Bruna and Yann LeCun and Arthur D. Szlam and Pierre Vandergheynst},
  journal={IEEE Signal Processing Magazine},
  year={2017},
  volume={34},
  pages={18--42}
}
Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular… 

Statistical Recurrent Models on Manifold-Valued Data

This work shows how statistical recurrent network models can be defined in non-Euclidean spaces, gives an efficient algorithm, and conducts a rigorous analysis of its statistical properties.

Convolutional Neural Networks on Manifolds: From Graphs and Back

This paper proposes a manifold neural network (MNN) composed of a bank of manifold convolutional filters and pointwise nonlinearities, and develops a manifold convolution operation that is consistent with the discrete graph convolution when discretized in both the space and time domains.

Towards Theoretical Understanding of Geometric Deep Learning with Geometric Wavelet Scattering Transforms (Speaker: Michael Perlmutter)

This proposed talk focuses on the theoretical understanding of geometric deep learning, which is revolutionizing several learning tasks on non-Euclidean data, including graph classification, generative graph and manifold models, shape retrieval, and shape alignment.

Effects of Data Geometry in Early Deep Learning

This work extends recent advances in the theoretical understanding of neural networks and derives bounds on the density of linear-region boundaries and on the distance to these boundaries on the data manifold, yielding insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets.

A Statistical Recurrent Model on the Manifold of Symmetric Positive Definite Matrices

This work shows how statistical recurrent network models can be defined in non-Euclidean spaces, gives an efficient algorithm, and conducts a rigorous analysis of its statistical properties.

Dilated Convolutional Neural Networks for Sequential Manifold-Valued Data

A dilated convolutional neural network architecture is developed for group-difference analysis in Alzheimer's disease, where the groups are derived using AD pathology load; it is shown how the modules needed in the network can be derived while explicitly taking the Riemannian manifold structure into account.

Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs

This paper proposes a unified framework for generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features; the method is tested on standard tasks from image, graph, and 3D shape analysis and consistently outperforms previous approaches.

ManifoldNet: A Deep Network Framework for Manifold-valued Data

A novel theoretical framework, called ManifoldNet, generalizes the widely popular convolutional neural networks (CNNs) to high-dimensional manifold-valued data inputs; it is proved that the proposed weighted Fréchet mean (wFM) layer is a contraction mapping, and hence ManifoldNet does not need the nonlinear ReLU unit used in standard CNNs.

Graph Neural Networks for IceCube Signal Classification

This work leverages graph neural networks to improve signal detection in the IceCube neutrino observatory and demonstrates the effectiveness of the GNN architecture on a task classifying IceCube events, where it outperforms both a traditional physics-based method and classical 3D convolutional neural networks.

Data-Driven Learning of Geometric Scattering Networks

This work proposes an alternative GNN architecture based on a relaxation of recently proposed geometric scattering transforms and consisting of a cascade of graph wavelet filters; this results in a simplified GNN with significantly fewer learned parameters than competing methods.
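As a concrete illustration of the wavelet-cascade idea (a toy sketch, not the paper's implementation; the function name, the lazy-random-walk wavelets, and the aggregation used here are all assumptions):

```python
import numpy as np

def scattering_features(A, x, J=3):
    """Toy first-order geometric scattering on a graph.

    Wavelets are differences of dyadic powers of the lazy random walk
    P = (I + A D^-1) / 2:  Psi_j = P^(2^(j-1)) - P^(2^j).
    Features aggregate |Psi_j x| over the nodes.
    """
    n = len(A)
    P = 0.5 * (np.eye(n) + A / A.sum(axis=0))  # column-normalized lazy walk
    Pj = P                                     # current dyadic power of P
    feats = []
    for _ in range(J):
        Pnext = Pj @ Pj                        # squaring: P^(2^j) -> P^(2^(j+1))
        feats.append(np.abs(Pj @ x - Pnext @ x).sum())
        Pj = Pnext
    return np.array(feats)

# tiny path graph and a signal with a sign change
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
x = np.array([1.0, 0.0, -1.0])
f = scattering_features(A, x)  # one nonnegative feature per wavelet scale
```

The cascade is fixed here; the cited work's contribution is to relax and learn parameters of such a transform.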
...

References

Showing 1-10 of 126 references

Deep Convolutional Networks on Graph-Structured Data

This paper develops an extension of Spectral Networks that incorporates a graph estimation procedure, tested on large-scale classification problems, matching or improving on Dropout Networks with far fewer parameters to estimate.

Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs

This paper proposes a unified framework for generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features; the method is tested on standard tasks from image, graph, and 3D shape analysis and consistently outperforms previous approaches.
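The mixture-model weighting can be sketched as follows (illustrative only; the degree-based pseudo-coordinates and all names and shapes are assumptions, not the paper's API):

```python
import numpy as np

def monet_conv(A, X, mu, inv_sigma, W):
    """Toy mixture-model (MoNet-style) graph convolution.

    A: (n, n) adjacency; X: (n, d_in) node features;
    mu: (K, 2) Gaussian kernel means; inv_sigma: (K, 2, 2) inverse covariances;
    W: (K, d_in, d_out) per-kernel weight matrices.
    """
    deg = A.sum(axis=1)
    out = np.zeros((len(A), W.shape[2]))
    for i in range(len(A)):
        for j in np.nonzero(A[i])[0]:
            # degree-based pseudo-coordinates of the edge (i, j)
            u = np.array([1.0 / np.sqrt(deg[i]), 1.0 / np.sqrt(deg[j])])
            for k in range(len(mu)):
                diff = u - mu[k]
                w = np.exp(-0.5 * diff @ inv_sigma[k] @ diff)  # Gaussian weight
                out[i] += w * (X[j] @ W[k])
    return out

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
X = rng.standard_normal((3, 4))
mu = np.zeros((2, 2))
inv_sigma = np.stack([np.eye(2)] * 2)
W = rng.standard_normal((2, 4, 5))
Y = monet_conv(A, X, mu, inv_sigma, W)  # aggregated features, shape (3, 5)
```

Learning the kernel parameters (mu, sigma) is what lets one framework specialize to different geometric domains.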

Geodesic Convolutional Neural Networks on Riemannian Manifolds

Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional neural network (CNN) paradigm to non-Euclidean manifolds, are introduced, achieving state-of-the-art performance in problems such as shape description, retrieval, and correspondence.

Spectral Networks and Locally Connected Networks on Graphs

This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.
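The spectral construction can be sketched in a few lines, assuming a symmetric adjacency matrix and one learnable multiplier per eigenvalue of the graph Laplacian (an illustrative toy, not the paper's implementation):

```python
import numpy as np

def spectral_graph_conv(A, x, theta):
    """Filter a graph signal x in the spectral domain of the graph Laplacian.

    A: (n, n) symmetric adjacency; x: (n,) node signal;
    theta: (n,) spectral multipliers (one per eigenvalue).
    """
    L = np.diag(A.sum(axis=1)) - A     # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)         # eigenbasis = graph Fourier basis
    x_hat = U.T @ x                    # graph Fourier transform
    return U @ (theta * x_hat)         # filter, then inverse transform

# toy path graph on 4 nodes; an all-ones filter is the identity
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([1.0, 0.0, 0.0, 0.0])
out = spectral_graph_conv(A, x, theta=np.ones(4))  # recovers x
```

The eigendecomposition costs O(n^3), which is exactly the bottleneck later spectral methods avoid.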

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

This work presents a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs.
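The key trick, filtering with a low-order Chebyshev polynomial of the rescaled Laplacian so that no eigendecomposition is needed and the filter is K-hop localized, can be sketched as follows (illustrative only; in practice the exact largest eigenvalue would be replaced by a cheap bound):

```python
import numpy as np

def cheb_filter(L, x, coeffs):
    """Apply a Chebyshev polynomial filter sum_k coeffs[k] * T_k(L_tilde) x.

    Uses only matrix-vector products with the rescaled Laplacian
    L_tilde = 2 L / lambda_max - I, via the Chebyshev recurrence.
    """
    lam_max = np.linalg.eigvalsh(L)[-1]
    L_t = 2.0 * L / lam_max - np.eye(L.shape[0])
    T_prev, T_curr = x, L_t @ x                  # T_0 x and T_1 x
    out = coeffs[0] * T_prev + coeffs[1] * T_curr
    for k in range(2, len(coeffs)):
        T_prev, T_curr = T_curr, 2.0 * L_t @ T_curr - T_prev  # recurrence
        out = out + coeffs[k] * T_curr
    return out

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
L = np.diag(A.sum(axis=1)) - A
x = np.array([1.0, -1.0, 0.0])
y = cheb_filter(L, x, coeffs=[1.0, 0.0, 0.0])  # T_0-only filter: identity
```

Because T_k(L_tilde) is a degree-k polynomial in L, the filter mixes information only within k hops of each node.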

The Graph Neural Network Model

A new neural network model, called the graph neural network (GNN) model, extends existing neural network methods for processing data represented in graph domains, and implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space.
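A fixed-point state update of this kind can be sketched with a few unrolled neighborhood-aggregation steps (a loose illustration of the idea, not the original model; all names and shapes are assumptions):

```python
import numpy as np

def tau(A, X, W_self, W_nbr, node, iters=3):
    """Map node `node` of graph (A, X) to a vector in R^m.

    Iterates a state update that combines each node's own state with an
    aggregate of its neighbors' states, in the spirit of the GNN model's
    fixed-point computation (here unrolled for a fixed iteration count).
    """
    H = X.copy()
    for _ in range(iters):
        H = np.tanh(H @ W_self + (A @ H) @ W_nbr)  # neighbor aggregation
    return H[node]                                 # state of the queried node

rng = np.random.default_rng(0)
A = np.array([[0, 1], [1, 0]], float)              # two connected nodes
X = rng.standard_normal((2, 4))
W1, W2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
v = tau(A, X, W1, W2, node=0)                      # embedding in R^4
```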

A global geometric framework for nonlinear dimensionality reduction.

An approach to dimensionality reduction that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
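The approach described (Isomap) can be sketched as a k-nearest-neighbor graph, shortest-path geodesic distances, and a classical MDS embedding (a minimal numpy toy, not the reference implementation):

```python
import numpy as np

def isomap(X, k=5, m=2):
    """Toy Isomap: k-NN graph -> shortest-path (geodesic) distances
    -> classical MDS embedding into R^m. Assumes the k-NN graph is connected."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # Euclidean distances
    G = np.full((n, n), np.inf)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]                # k nearest neighbors
    for i in range(n):
        G[i, nn[i]] = D[i, nn[i]]                         # symmetric k-NN graph
        G[nn[i], i] = D[i, nn[i]]
    np.fill_diagonal(G, 0.0)
    for mid in range(n):                                  # Floyd-Warshall
        G = np.minimum(G, G[:, [mid]] + G[[mid], :])
    J = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
    B = -0.5 * J @ (G ** 2) @ J                           # Gram matrix (MDS)
    lam, U = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:m]                       # top-m eigenpairs
    return U[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)  # points along a line
Y = isomap(X, k=3, m=2)                       # 2-D embedding of 20 points
```

Replacing Euclidean with geodesic distances is what lets MDS unfold a curved manifold.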

Topology and Geometry of Half-Rectified Network Optimization

The main theoretical contribution is a proof that the loss level sets of half-rectified single-layer networks are asymptotically connected, and an algorithm is introduced to efficiently estimate the regularity of such level sets in large-scale networks.

Wavelets on Graphs via Deep Learning

A machine-learning framework is introduced for constructing graph wavelets that can sparsely represent a given class of signals, together with a linear wavelet transform that can be applied to any graph signal in time and memory linear in the size of the graph.

Multiscale Wavelets on Trees, Graphs and High Dimensional Data: Theory and Applications to Semi Supervised Learning

It is proved that, in analogy to the Euclidean case, function smoothness with respect to a specific metric induced by the tree is equivalent to an exponential rate of coefficient decay, that is, to approximate sparsity; this readily translates into simple practical algorithms for various learning tasks.
...