Corpus ID: 235606181

Learning Identity-Preserving Transformations on Data Manifolds

@article{Connor2021LearningIT,
  title={Learning Identity-Preserving Transformations on Data Manifolds},
  author={Marissa Connor and Kion Fallah and Christopher J. Rozell},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12096}
}
Many machine learning techniques incorporate identity-preserving transformations into their models to generalize their performance to previously unseen data. These transformations are typically selected from a set of functions that are known to maintain the identity of an input when applied (e.g., rotation, translation, flipping, and scaling). However, there are many natural variations that cannot be labeled for supervision or defined through examination of the data. As suggested by the… 
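A minimal NumPy sketch of what such a transformation can look like, assuming the transport-operator parameterization used in this line of work: a transformation is the matrix exponential of a generator matrix scaled by a coefficient. The generator Psi below is a hand-picked rotation generator standing in for a dictionary that would be learned from data, not the authors' trained model.

```python
import numpy as np
from scipy.linalg import expm

# Transport-operator-style transformation: move a point through the
# data space by the matrix exponential of a generator. Psi here is a
# hand-picked 2-D rotation generator; the paper learns such operators
# from data rather than assuming them.
Psi = np.array([[0.0, -1.0],
                [1.0,  0.0]])

def transform(x, c):
    """Apply the one-parameter transformation expm(c * Psi) to x."""
    return expm(c * Psi) @ x

x0 = np.array([1.0, 0.0])

# Sweeping the coefficient traces an identity-preserving path
# (here, an arc of a circle) through the data space.
path = np.stack([transform(x0, c) for c in np.linspace(0.0, np.pi, 5)])
print(path.round(3))
```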
Citations

Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics
TLDR
A new decomposed dynamical system model is proposed that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components.
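A minimal sketch of the decomposition this TLDR describes, assuming a discrete-time form x_{t+1} = (sum_j c_j A_j) x_t. The dictionary of operators and the sparse coefficient schedule below are invented for illustration; dLDS learns both from data.

```python
import numpy as np

# Dictionary of simple linear operators; dLDS would learn these.
A = [
    np.array([[0.99, -0.10],
              [0.10,  0.99]]),   # slow rotation
    np.array([[1.01,  0.00],
              [0.00,  0.95]]),   # expansion along one axis
]

def step(x, c):
    """One update x_{t+1} = (sum_j c_j A_j) x_t."""
    M = sum(cj * Aj for cj, Aj in zip(c, A))
    return M @ x

x = np.array([1.0, 0.0])
trajectory = [x]
for t in range(50):
    # Sparse, time-varying coefficients: one component active at a time,
    # producing non-stationary dynamics from stationary components.
    c = (1.0, 0.0) if t < 25 else (0.0, 1.0)
    x = step(x, c)
    trajectory.append(x)
trajectory = np.array(trajectory)  # (51, 2) trajectory
```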

References

Showing 1-10 of 58 references
Dreaming More Data: Class-dependent Distributions over Diffeomorphisms for Learned Data Augmentation
TLDR
This work aligns image pairs within each class under the assumption that the spatial transformation between images belongs to a large class of diffeomorphisms, and learns class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms.
An Unsupervised Algorithm For Learning Lie Group Transformations
TLDR
Several theoretical contributions are presented which allow Lie groups to be fit to high-dimensional datasets, reducing the computational complexity of parameter estimation to that of training a linear transformation model.
Learning the Lie Groups of Visual Invariance
TLDR
This letter presents an unsupervised expectation-maximization algorithm for learning Lie transformation operators directly from image data containing examples of transformations, and shows that the learned operators can be used to both generate and estimate transformations in images, thereby providing a basis for achieving visual invariance.
Variational Autoencoder with Learned Latent Structure
TLDR
The Variational Autoencoder with Learned Latent Structure (VAELLS) is introduced, which incorporates a learnable manifold model into the latent space of a VAE, ensuring that the prior is well matched to the data.
Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference
TLDR
This work proposes enhancements over existing methods for learning the inverse mapping (i.e., the encoder) that greatly improve the semantic similarity of the reconstructed sample to the input sample, and provides insights into how fake examples influence the semi-supervised learning procedure.
The Manifold Tangent Classifier
TLDR
A representation-learning algorithm is presented that can be stacked to yield a deep architecture, and it is shown how the learned representation builds a topological atlas of charts, each chart characterized by the principal singular vectors of the Jacobian of the representation mapping.
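The charts in this TLDR come from one computation: the principal right-singular vectors of the Jacobian of the representation mapping give a local tangent basis at each point. A minimal PyTorch sketch with a stand-in network h; the paper's actual representation is built by stacking contractive autoencoders.

```python
import torch

# Stand-in representation mapping (illustrative, not the paper's model).
h = torch.nn.Sequential(torch.nn.Linear(6, 4), torch.nn.Tanh())

def local_tangents(x, k=2):
    """Tangent basis at x: the top-k right-singular vectors of the
    Jacobian of the representation mapping."""
    J = torch.autograd.functional.jacobian(h, x)  # shape (4, 6)
    _, _, Vh = torch.linalg.svd(J)
    return Vh[:k]                                 # (k, 6) basis rows

x = torch.randn(6)
print(local_tangents(x))
```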
Metrics for Deep Generative Models
TLDR
The method yields a principled distance measure and an alternative to linear interpolation in latent space, provides a tool for visual inspection of deep generative models, and can be applied to robot movement generalization using previously learned skills.
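The distance measure here is curve length under the metric a decoder g pulls back onto its latent space, G(z) = J_g(z)^T J_g(z). A minimal PyTorch sketch with a stand-in decoder; minimizing the discrete curve energy over the interior points of a latent curve approximates the geodesics that replace linear interpolation.

```python
import torch

# Stand-in decoder; any differentiable latent-to-data mapping works.
g = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 5)
)

def pullback_metric(z):
    """G(z) = J(z)^T J(z), the metric g induces on the latent space."""
    J = torch.autograd.functional.jacobian(g, z)  # shape (5, 2)
    return J.T @ J                                # shape (2, 2)

def curve_energy(zs):
    """Discrete energy of a latent curve z_0, ..., z_T; minimizing it
    over the interior points approximates a geodesic under G."""
    energy = torch.zeros(())
    for z, dz in zip(zs[:-1], zs[1:] - zs[:-1]):
        energy = energy + dz @ pullback_metric(z) @ dz
    return energy

# A straight latent line; a geodesic solver would bend its interior.
zs = torch.linspace(0.0, 1.0, 8).unsqueeze(1) * torch.tensor([[1.0, 1.0]])
print(curve_energy(zs))
```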
Higher Order Contractive Auto-Encoder
TLDR
A novel regularizer for training an autoencoder for unsupervised feature extraction yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
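The regularizer adds to the reconstruction loss the contractive penalty ||J_f(x)||_F^2 on the encoder Jacobian plus a higher-order term that penalizes how quickly that Jacobian changes, estimated by finite differences with Gaussian perturbations. A minimal PyTorch sketch; the encoder size, noise scale, and equal weighting are illustrative choices.

```python
import torch

# Stand-in encoder; size, sigma, and weighting are illustrative.
enc = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.Sigmoid())

def higher_order_contractive_penalty(x, sigma=0.1):
    """||J(x)||_F^2 plus a finite-difference estimate of how fast the
    encoder Jacobian changes around x. For training, pass
    create_graph=True to jacobian() so the penalty is differentiable
    with respect to the encoder weights."""
    J = torch.autograd.functional.jacobian(enc, x)
    J_near = torch.autograd.functional.jacobian(
        enc, x + sigma * torch.randn_like(x))
    return (J ** 2).sum() + ((J - J_near) ** 2).sum()

x = torch.randn(4)
print(higher_order_contractive_penalty(x))
```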
The Riemannian Geometry of Deep Generative Models
TLDR
The Riemannian geometry of these generated manifolds is investigated and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.
Non-Local Manifold Tangent Learning
We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality.
...