• Corpus ID: 906304

Graph-based Isometry Invariant Representation Learning

@article{Khasanova2017GraphbasedII,
  title={Graph-based Isometry Invariant Representation Learning},
  author={Renata Khasanova and Pascal Frossard},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.00356}
}
Learning transformation invariant representations of visual data is an important problem in computer vision. Deep convolutional networks have demonstrated remarkable results for image and video classification tasks. However, they have achieved only limited success in the classification of images that undergo geometric transformations. In this work we present a novel Transformation Invariant Graph-based Network (TIGraNet), which learns graph-based features that are inherently invariant to… 
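
To make the abstract's core idea concrete, here is a minimal numpy sketch (not the paper's actual architecture): an image is treated as a signal on a grid graph and filtered with a polynomial of the graph Laplacian. Because the filter is defined by the graph alone, its response moves with the input under grid isometries instead of changing shape.

```python
# Minimal sketch (not the paper's exact architecture): filtering an image
# as a signal on a 4-connected grid graph with a polynomial of the graph
# Laplacian. The filter depends only on the graph, so its response pattern
# rotates with the input rather than changing.
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian L = D - A of a 4-connected h x w grid."""
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < h:
                A[i, i + w] = A[i + w, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def poly_filter(L, x, coeffs):
    """Apply the spectral filter sum_k coeffs[k] * L^k to the signal x."""
    out, Lx = np.zeros_like(x), x.copy()
    for a in coeffs:
        out += a * Lx
        Lx = L @ Lx
    return out

h = w = 8
L = grid_laplacian(h, w)
img = np.zeros((h, w)); img[2, 3] = 1.0      # a single bright pixel
rot = np.rot90(img)                          # the same pattern, rotated 90 deg
y1 = poly_filter(L, img.flatten(), [0.5, -0.2, 0.05]).reshape(h, w)
y2 = poly_filter(L, rot.flatten(), [0.5, -0.2, 0.05]).reshape(h, w)
print(np.allclose(np.rot90(y1), y2))         # True: response rotates with input
```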

Citations

Isometric Transformation Invariant Graph-based Deep Neural Network

A novel Transformation Invariant Graph-based Network (TIGraNet), which learns graph-based features that are inherently invariant to isometric transformations such as rotation and translation of input images.

GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs

A novel visual descriptor named Group Invariant Feature Transform (GIFT) is introduced, which is both discriminative and robust to geometric transformations and outperforms state-of-the-art methods on several benchmark datasets and practically improves the performance of relative pose estimation.

Equivariance-bridged SO(2)-Invariant Representation Learning using Graph Convolutional Network

The proposed deep equivariance-bridged SO(2) invariant network achieves the state-of-the-art image classification performance on rotated MNIST and CIFAR-10 images, where the models are trained with a non-augmented dataset only.

Image Classification with Hierarchical Multigraph Networks

This work shows best practices for designing GCNs for image classification, in some cases even outperforming CNNs on the MNIST, CIFAR-10 and PASCAL image datasets.

Non-Parametric Transformation Networks for Learning General Invariances from Data

This paper introduces a new class of deep convolutional architectures called Non-Parametric Transformation Networks (NPTNs), which can learn general invariances and symmetries directly from data; replacing ConvNets with NPTNs within Capsule Networks is shown to enable Capsule Nets to perform even better.

Non-Parametric Transformation Networks

A new class of convolutional architectures called Non-Parametric Transformation Networks (NPTNs), which can learn general invariances and symmetries directly from data using gradient descent, is introduced.

Graph-Based Classification of Omnidirectional Images

  • P. Frossard, R. Khasanova
  • Computer Science
    2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
  • 2017
This paper proposes a principled way of constructing the graph such that convolutional filters respond similarly to the same pattern at different positions of the image regardless of lens distortion, and shows that the proposed method outperforms current techniques on the omnidirectional image classification problem.
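
A hedged sketch of this kind of geometry-aware graph construction: equirectangular pixels are mapped onto the unit sphere and neighbouring pixels are connected with weights that decay with their 3D distance, so that heavily distorted regions near the poles get weights reflecting true spatial proximity. The inverse-distance weighting below is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Illustrative graph construction for an equirectangular omnidirectional
# image: map pixels to the sphere, connect 4-neighbours, and weight each
# edge by the inverse of the 3D distance between its endpoints.
import numpy as np

def sphere_coords(h, w):
    """Map each (row, col) of an h x w equirectangular image to 3D points."""
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi        # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1)             # shape (h, w, 3)

def weighted_grid_edges(h, w):
    """Yield (i, j, weight) for 4-neighbour edges, weighted by 1/distance."""
    pts = sphere_coords(h, w)
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, (c + dc) % w          # wrap around in azimuth
                if r2 < h:
                    d = np.linalg.norm(pts[r, c] - pts[r2, c2])
                    yield r * w + c, r2 * w + c2, 1.0 / max(d, 1e-9)

edges = list(weighted_grid_edges(16, 32))
print(len(edges), "edges; sample weight:", round(edges[0][2], 3))
```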

Improving Spectral Graph Convolution for Learning Graph-level Representation

This work provides a spatial understanding that quantitatively measures the effect of the spectrum on input signals, complementing the well-known spectral interpretation as high/low-pass filters, and sheds light on developing powerful graph representation models.

Learning Non-Parametric Invariances from Data with Permanent Random Connectomes

A new architectural layer for convolutional networks that learns general invariances from the data itself and, notably, incorporates permanent random connectomes, hence the name Permanent Random Connectome Non-Parametric Transformation Networks (PRC-NPTN).

Graph Pooling with Node Proximity for Hierarchical Representation Learning

A novel graph pooling strategy that leverages node proximity to improve hierarchical representation learning of graph data with multi-hop topology, combining an affine transformation with the kernel trick based on the Gaussian RBF function.
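
A heavily hedged illustration of one ingredient mentioned in the summary: scoring pairwise node proximity from multi-hop topology with a Gaussian RBF kernel. The proximity measure below (distances between k-hop reachability profiles) and its parameters are assumptions for illustration; the paper's precise formulation and its affine transformation step are not reproduced.

```python
# Illustrative only: Gaussian RBF similarity between nodes, computed from
# their multi-hop neighbourhood profiles in the adjacency matrix A.
import numpy as np

def rbf_proximity(A, hops=2, gamma=1.0):
    """RBF kernel over distances between k-hop reachability profiles."""
    M = np.zeros_like(A, dtype=float)
    Ak = np.eye(A.shape[0])
    for _ in range(hops):
        Ak = Ak @ A
        M += Ak                                        # accumulate hop counts
    d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-gamma * d2)                         # Gaussian RBF kernel

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)              # a 4-node path graph
print(np.round(rbf_proximity(A), 3))
```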

References

Showing 1-10 of 38 references

Spectral Networks and Locally Connected Networks on Graphs

This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.
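
The spectral construction can be summarized in a few lines: a graph convolution is defined in the eigenbasis of the graph Laplacian as y = U g(Λ) Uᵀ x, where g is a filter acting on the eigenvalues. Below is a toy numpy illustration on a ring graph; the low-pass filter choice is arbitrary.

```python
# Toy spectral graph convolution: diagonalize the Laplacian, filter in the
# eigenbasis, transform back. y = U g(Lambda) U^T x.
import numpy as np

def ring_laplacian(n):
    """Laplacian of an n-node cycle graph."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 8
L = ring_laplacian(n)
lam, U = np.linalg.eigh(L)                    # Laplacian eigendecomposition
g = np.exp(-0.5 * lam)                        # an example low-pass filter
x = np.random.default_rng(0).normal(size=n)   # a graph signal
y = U @ (g * (U.T @ x))                       # spectral filtering
print(np.round(y, 3))
```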

Harmonic Networks: Deep Translation and Rotation Equivariance

H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360° rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.

Exploiting Cyclic Symmetry in Convolutional Neural Networks

This work introduces four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations and enable parameter sharing across different orientations.
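
A hedged sketch of the underlying idea: the same filter is applied to all four 90° rotations of the input, and the responses, rotated back into alignment, are pooled over orientations, making the resulting feature invariant to those rotations. (The paper defines four composable layer operations; the toy function below collapses the idea into a single step.)

```python
# Toy cyclic parameter sharing: one kernel, four input orientations,
# max-pool over the re-aligned responses.
import numpy as np
from scipy.signal import correlate2d

def cyclic_pool_response(img, kernel):
    """Max over the four 90-degree rotations of the input, same kernel."""
    responses = [correlate2d(np.rot90(img, k), kernel, mode="valid")
                 for k in range(4)]
    # rotate responses back so they align spatially before pooling
    aligned = [np.rot90(r, -k) for k, r in enumerate(responses)]
    return np.maximum.reduce(aligned)

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
kern = rng.normal(size=(3, 3))
out1 = cyclic_pool_response(img, kern)
out2 = cyclic_pool_response(np.rot90(img), kern)
print(np.allclose(np.rot90(out1), out2))   # True: invariant up to rotation
```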

TI-POOLING: Transformation-Invariant Pooling for Feature Learning in Convolutional Neural Networks

A deep neural network topology is presented that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING), which is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes.
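
The mechanism is compact enough to sketch: the same feature extractor is applied to a set of transformed copies of the input (here, four rotations), and each feature is max-pooled across the copies, yielding transformation-invariant features. The toy extractor below stands in for any shared-weight network.

```python
# Toy TI-POOLING: max-pool each feature over transformed input copies.
import numpy as np

def features(img, W):
    """Toy shared-weight feature extractor: flatten, project, ReLU."""
    return np.maximum(W @ img.flatten(), 0.0)

def ti_pool(img, W):
    """Max over features of the four 90-degree rotations of the input."""
    return np.maximum.reduce([features(np.rot90(img, k), W) for k in range(4)])

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
W = rng.normal(size=(16, 64))
f1 = ti_pool(img, W)
f2 = ti_pool(np.rot90(img), W)
print(np.allclose(f1, f2))   # True: identical features for rotated inputs
```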

Spectral Representations for Convolutional Neural Networks

This work proposes spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain, and demonstrates the effectiveness of complex-coefficient spectral parameterization of convolutional filters.
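
Spectral pooling itself is easy to sketch: transform to the frequency domain, keep only the central low-frequency block, and invert at the reduced size, which preserves more information than spatial max pooling at the same output resolution. A minimal numpy version (the rescaling convention is one reasonable choice):

```python
# Minimal spectral pooling: truncate the centred 2D Fourier representation.
import numpy as np

def spectral_pool(img, out_h, out_w):
    """Downsample by keeping only the low-frequency block of the spectrum."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    r0, c0 = (h - out_h) // 2, (w - out_w) // 2
    crop = F[r0:r0 + out_h, c0:c0 + out_w]
    # rescale so pixel intensities stay comparable after size reduction
    scale = (out_h * out_w) / (h * w)
    return np.real(np.fft.ifft2(np.fft.ifftshift(crop))) * scale

img = np.arange(64.0).reshape(8, 8)
small = spectral_pool(img, 4, 4)
print(small.shape)   # (4, 4)
```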

Geodesic Convolutional Neural Networks on Riemannian Manifolds

Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional neural network (CNN) paradigm to non-Euclidean manifolds, are introduced, achieving state-of-the-art performance in problems such as shape description, retrieval, and correspondence.

Spatial Transformer Networks

This work introduces a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network, and can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps.
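
A minimal PyTorch sketch of the spatial transformer's sampling step: a 2x3 affine matrix theta defines a sampling grid over the input feature map, which is then resampled with `F.affine_grid` and `F.grid_sample`. In the full module theta is predicted by a small localization network; here it is fixed to a rotation for brevity.

```python
# Spatial transformer sampling step: affine grid generation + resampling.
import math
import torch
import torch.nn.functional as F

x = torch.arange(16.0).view(1, 1, 4, 4)           # (N, C, H, W) feature map
angle = math.pi / 4                                # fixed rotation for demo;
theta = torch.tensor([[[math.cos(angle), -math.sin(angle), 0.0],
                       [math.sin(angle),  math.cos(angle), 0.0]]])  # (1, 2, 3)
grid = F.affine_grid(theta, size=x.shape, align_corners=False)
warped = F.grid_sample(x, grid, align_corners=False)
print(warped.shape)                                # torch.Size([1, 1, 4, 4])
```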

Learning rotation invariant convolutional filters for texture classification

Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing the number of parameters to be learned by an order of magnitude.

Deep roto-translation scattering for object classification

A deep scattering convolution network with complex wavelet filters over spatial and angular variables is introduced, showing that refining image representations with geometric priors is a promising direction for improving image classification and its understanding.

Rotation-Invariant Neoperceptron

By extending the weight-sharing properties of convolutional neural networks to orientations, this paper obtains a neural network that is inherently robust to object rotations, while still being able to learn optimally discriminant features from training data.