# An Unsupervised Algorithm For Learning Lie Group Transformations

```bibtex
@article{SohlDickstein2010AnUA,
  title   = {An Unsupervised Algorithm For Learning Lie Group Transformations},
  author  = {Jascha Sohl-Dickstein and Jimmy C. Wang and Bruno A. Olshausen},
  journal = {ArXiv},
  year    = {2010},
  volume  = {abs/1001.1027}
}
```

We present several theoretical contributions which allow Lie groups to be fit to high dimensional datasets. Transformation operators are represented in their eigen-basis, reducing the computational complexity of parameter estimation to that of training a linear transformation model. A transformation specific "blurring" operator is introduced that allows inference to escape local minima via a smoothing of the transformation space. A penalty on traversed manifold distance is added which…
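The eigen-basis representation mentioned in the abstract can be illustrated with a small sketch (a hypothetical illustration, not the paper's actual code): once the generator $A$ is diagonalized as $A = U \Lambda U^{-1}$, the one-parameter group action $\exp(tA)\,x$ reduces to an elementwise exponential of the eigenvalues, avoiding a fresh matrix exponential for every value of $t$.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical sketch of the eigen-basis trick: for a generator A with
# eigendecomposition A = U diag(w) U^-1, the group element is
# expm(t*A) = U diag(exp(t*w)) U^-1, so applying it to a vector only
# needs elementwise exponentials, not a dense matrix exponential.

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = A - A.T                      # skew-symmetric generator -> rotation-like group

w, U = np.linalg.eig(A)          # eigendecomposition of the generator
U_inv = np.linalg.inv(U)

def transform(x, t):
    """Apply expm(t*A) to x using the precomputed eigen-basis."""
    return (U @ (np.exp(t * w) * (U_inv @ x))).real

x = rng.standard_normal(n)
t = 0.37
direct = expm(t * A) @ x         # reference: dense matrix exponential
fast = transform(x, t)           # eigen-basis version
assert np.allclose(direct, fast)
```

Because the generator here is skew-symmetric, the resulting group elements are orthogonal, so the transformation also preserves vector norms.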

## 31 Citations

Disentangling images with Lie group transformations and sparse coding

- Computer Science · ArXiv
- 2020

A Bayesian generative model that learns to disentangle spatial patterns and their continuous transformations in a completely unsupervised manner, recovering both the transformations and the underlying digit patterns.

Learning Identity-Preserving Transformations on Data Manifolds

- Computer Science · ArXiv
- 2021

This work introduces a learning strategy that does not require transformation labels and develops a method that learns the local regions where each operator is likely to be used while preserving the identity of inputs.

Learning Transformation Groups and their Invariants

- Mathematics
- 2013

A fundamental problem in vision is that of invariance: how objects are perceived to be essentially the same despite having undergone various transformations. When it is known a priori to which…

Unsupervised Transformation Learning via Convex Relaxations

- Computer Science, Mathematics · NIPS
- 2017

This work proposes an unsupervised approach to learning meaningful transformations from raw images by reconstructing an image from a linear combination of transformations of its nearest neighbors, and shows that even with linear transformations the method generates visually high-quality modified images.
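The reconstruction step described above can be sketched concretely (a hypothetical illustration, not the cited paper's code): approximate an image $x$ as a linear combination of transformed neighbors by solving an ordinary least-squares problem $\min_c \lVert x - Fc \rVert$, where column $k$ of $F$ is one candidate transformation applied to one neighbor.

```python
import numpy as np

# Hypothetical sketch of reconstructing an image from a linear combination
# of transformed nearest neighbors. Column k of F holds transformation k
# applied to neighbor k; the coefficients c come from least squares.

rng = np.random.default_rng(2)
d, k = 16, 6                       # image dimension, number of neighbor/transform pairs

neighbors = rng.standard_normal((k, d))
transforms = [np.eye(d) + 0.05 * rng.standard_normal((d, d)) for _ in range(k)]

F = np.stack([T @ n for T, n in zip(transforms, neighbors)], axis=1)  # (d, k)
x = F @ rng.standard_normal(k)     # a point exactly in the span, for the demo

c, *_ = np.linalg.lstsq(F, x, rcond=None)
assert np.allclose(F @ c, x)
```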

Transformational Sparse Coding

- Computer Science · ArXiv
- 2017

This work proposes a new model of unsupervised learning based on sparse coding that can learn object features jointly with their affine transformations directly from images, and results indicate that this approach matches the reconstruction quality of traditional sparse coding but with significantly fewer degrees of freedom while simultaneously learning transformations from data.

Learning Image Transformations without Training Examples

- Computer Science · ISVC
- 2011

This paper presents a simple method for learning affine and elastic transformations when no examples of these transformations are explicitly given, and no prior knowledge of space (such as ordering of pixels) is included either.

Disentangling Patterns and Transformations from One Sequence of Images with Shape-invariant Lie Group Transformer

- Computer Science · ArXiv
- 2022

A model is proposed that disentangles scenes into a minimal set of basic pattern components and Lie transformations from only one sequence of images, by introducing learnable shape-invariant Lie group transformers as the transformation components.

Lie Group Transformation Models for Predictive Video Coding

- Computer Science · 2011 Data Compression Conference
- 2011

A new method for modeling the temporal correlation in videos, based on local transforms realized by Lie group operators, yields better rate-distortion performance at higher bit-rates and competitive performance at lower bit-rates, compared to standard prediction based on block-based motion estimation.

Natural Variation Transfer using Learned Manifold Operators

- Computer Science
- 2019

This work represents the manifold structure using a learned dictionary of generative operators and develops methods for using those operators for few-shot learning and realistic data generation.

Transformation Properties of Learned Visual Representations

- Mathematics · ICLR
- 2015

In a model of rotating NORB objects, it is demonstrated that a latent representation of the non-commutative 3D rotation group SO(3) is equivalent to a combination of its elementary irreducible representations.

## References

Showing 1–10 of 28 references

Learning the Lie Groups of Visual Invariance

- Mathematics · Neural Computation
- 2007

This letter presents an unsupervised expectation-maximization algorithm for learning Lie transformation operators directly from image data containing examples of transformations, and shows that the learned operators can be used to both generate and estimate transformations in images, thereby providing a basis for achieving visual invariance.

Learning Lie Groups for Invariant Visual Perception

- Computer Science · NIPS
- 1998

A Bayesian method for learning invariances based on Lie group theory and experimental results suggest that the proposed method can learn Lie group operators for handling reasonably large 1-D translations and 2-D rotations.

Learning to Represent Spatial Transformations with Factored Higher-Order Boltzmann Machines

- Computer Science · Neural Computation
- 2010

A low-rank approximation to the three-way interaction tensor, expressed as a sum of factors that are each a three-way outer product, allows efficient learning of transformations between larger image patches and demonstrates the learning of optimal filter pairs from various synthetic and real image sequences.

Learning transport operators for image manifolds

- Mathematics · NIPS
- 2009

An unsupervised manifold learning algorithm that represents a surface through a compact description of operators that traverse it is described, applied to recover topological structure from low dimensional synthetic data, and to model local structure in how natural images change over time and scale.
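The transport-operator idea summarized above can be sketched in a toy setting (a hypothetical illustration under simplifying assumptions, not the cited paper's algorithm): if paired points satisfy $x_1 = \exp(A)\,x_0$ for a single generator $A$, one can solve for the transport matrix by least squares and recover the generator with a matrix logarithm.

```python
import numpy as np
from scipy.linalg import expm, logm

# Hypothetical sketch: recover a generator A such that x1 = expm(A) @ x0
# from paired points, by solving for T = expm(A) in least squares and then
# taking the principal matrix logarithm. Assumes a single noiseless operator.

rng = np.random.default_rng(1)
n = 5
A_true = rng.standard_normal((n, n)) * 0.1
A_true = A_true - A_true.T            # small skew-symmetric generator

X0 = rng.standard_normal((n, 50))     # 50 points on the "manifold"
X1 = expm(A_true) @ X0                # transported points

B, *_ = np.linalg.lstsq(X0.T, X1.T, rcond=None)  # solves X0.T @ B = X1.T, so B = T.T
A_hat = logm(B.T).real                # generator estimate

assert np.allclose(A_hat, A_true, atol=1e-6)
```

The principal logarithm recovers $A$ here because the generator is small; for larger transformations the log is multivalued and the cited work instead optimizes the generator directly.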

Minimum Distance between Pattern Transformation Manifolds: Algorithm and Applications

- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2009

This paper develops a transformation-invariant distance measure, the minimum distance between the transformation manifolds spanned by patterns of interest, and proposes representing a pattern as a linear combination of a few geometric functions drawn from a structured, redundant basis.

Lie Group Transformation Models for Predictive Video Coding

- Computer Science · 2011 Data Compression Conference
- 2011

A new method for modeling the temporal correlation in videos, based on local transforms realized by Lie group operators, yields better rate-distortion performance at higher bit-rates and competitive performance at lower bit-rates, compared to standard prediction based on block-based motion estimation.

Bilinear models of natural images

- Computer Science · Electronic Imaging
- 2007

Bilinear image models can be used to learn independent representations of the invariances, and their transformations, in natural image sequences, providing a foundation for learning higher-order feature representations that could serve as models of higher stages of processing in the cortex.

Learning Transformational Invariants from Natural Movies

- Computer Science · NIPS
- 2008

A hierarchical, probabilistic model that learns to extract complex motion from movies of the natural environment; the learned units encode transformational invariants that are selective for the speed and direction of a moving pattern but invariant to its spatial structure.

Multiresolution Tangent Distance for Affine-invariant Classification

- Computer Science, Mathematics · NIPS
- 1997

This work analyzes an invariant metric that has performed well for face and character recognition, and studies its limitations when applied to regular images, showing that the most significant of these limitations (convergence to local minima) can be drastically reduced by computing the distance in a multiresolution setting.

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

- Computer Science · International Journal of Computer Vision
- 2004

A “subspace constancy assumption” is defined that allows techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image.