Learning low bending and low distortion manifold embeddings

@article{Braunsmann2021LearningLB,
  title={Learning low bending and low distortion manifold embeddings},
  author={Juliane Braunsmann and Marko Rajković and Martin Rumpf and Benedikt Wirth},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2021},
  pages={4411-4419}
}
Autoencoders are a widespread tool in machine learning for transforming high-dimensional data into a lower-dimensional representation that still exhibits the essential characteristics of the input. The encoder provides an embedding from the input data manifold into a latent space, which may then be used for further processing. For instance, learning interpolation on the manifold may be simplified via the new manifold representation in latent space. The efficiency of such further processing heavily…
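
To make the setup the abstract describes concrete, here is a minimal autoencoder sketch in PyTorch; the layer sizes, module names, and reconstruction loss are illustrative assumptions, not the architecture used in the paper. The encoder realizes the embedding from the data manifold into latent space, and the decoder maps back.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal sketch: the encoder embeds inputs into a low-dimensional
    latent space; the decoder reconstructs the input from the code."""
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),        # embedding into latent space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)                    # latent representation
        return self.decoder(z), z

model = Autoencoder()
x = torch.randn(16, 784)                       # dummy high-dimensional batch
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)        # reconstruction objective

Interpolation on the manifold then reduces to, e.g., linearly interpolating between two latent codes and decoding the result, which is exactly the kind of further processing whose quality depends on the regularity of the learned embedding.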


References

Showing 1-10 of 31 references
Intrinsic Isometric Manifold Learning with Application to Localization
This work builds a new metric and proposes a method for its robust estimation by assuming mild statistical priors and using artificial neural networks for metric regularization and parametrization; it shows a successful application to unsupervised indoor localization in ad-hoc sensor networks.
DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling
This paper uses a Siamese configuration to train a neural network to solve the least-squares multidimensional scaling problem, generating maps that approximately preserve geodesic distances, and shows significantly improved local and nonlocal generalization of the isometric mapping.
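
As a rough illustration of the least-squares multidimensional-scaling objective behind such Siamese training, consider the sketch below; it assumes geodesic distances d_ij have been precomputed for sampled pairs, and `net`, `x_i`, `x_j` are placeholders rather than the authors' code.

import torch

def siamese_mds_loss(net, x_i, x_j, d_ij):
    """Penalize deviation of embedded pairwise distances from the
    precomputed geodesic distances d_ij (least-squares MDS stress)."""
    emb_dist = torch.norm(net(x_i) - net(x_j), dim=1)
    return torch.mean((emb_dist - d_ij) ** 2)
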
The Riemannian Geometry of Deep Generative Models
The Riemannian geometry of these generated manifolds is investigated, and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.
Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
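
The penalty in question is the squared Frobenius norm of the encoder's Jacobian with respect to the input. A generic autograd sketch follows; the original paper exploits a closed form for sigmoid units instead, so this is an interpretation, not the paper's implementation.

import torch

def contractive_penalty(encoder, x):
    """Squared Frobenius norm of the encoder Jacobian, averaged over the
    batch; one backward pass per latent unit (generic but slow)."""
    x = x.clone().requires_grad_(True)
    h = encoder(x)
    penalty = 0.0
    for k in range(h.shape[1]):
        g = torch.autograd.grad(h[:, k].sum(), x,
                                create_graph=True, retain_graph=True)[0]
        penalty = penalty + (g ** 2).sum()
    return penalty / x.shape[0]
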
Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
This paper proposes a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data.
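
A schematic of that regularizer is sketched below, with placeholder module names (`encoder`, `decoder`, `critic`) that are not the paper's API; the mixing coefficient is drawn in [0, 0.5] and the critic is trained on detached interpolants, in line with the published setup.

import torch

def interpolation_losses(encoder, decoder, critic, x1, x2):
    """Returns (critic_loss, regularizer): the critic learns to recover
    the mixing coefficient alpha from decoded interpolants; the
    autoencoder is penalized when it succeeds and tries to drive the
    critic's output to 0."""
    alpha = 0.5 * torch.rand(x1.shape[0], 1)          # alpha in [0, 0.5]
    z = alpha * encoder(x1) + (1 - alpha) * encoder(x2)
    x_interp = decoder(z)
    critic_loss = ((critic(x_interp.detach()) - alpha) ** 2).mean()
    regularizer = (critic(x_interp) ** 2).mean()
    return critic_loss, regularizer
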
Local conformal autoencoder for standardized data coordinates
A method for extracting standardized, nonlinear, intrinsic coordinates from measured data, leading to a generalized isometric embedding of the observations. The embedding is obtained with LOCA, an algorithm that learns to rectify deformations by a local z-scoring procedure while preserving relevant geometric information.
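
As a loose sketch of the local z-scoring idea, the loss below pushes the empirical covariance of each embedded local cloud ("burst") toward the identity; the burst layout of shape (clouds, samples, dim) and this whitening-style formulation are my assumptions, not the authors' code.

import torch

def local_whitening_loss(encoder, bursts):
    """Local z-scoring sketch: make each embedded local cloud white,
    i.e., have (approximately) identity covariance."""
    loss = 0.0
    for cloud in bursts:                    # cloud: (samples, input_dim)
        z = encoder(cloud)
        zc = z - z.mean(dim=0, keepdim=True)
        cov = zc.T @ zc / (z.shape[0] - 1)
        loss = loss + ((cov - torch.eye(cov.shape[0])) ** 2).sum()
    return loss / bursts.shape[0]
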
Manifold Learning Theory and Applications
Comprehensive in its coverage, this pioneering work explores manifold learning from algorithm creation to successful implementation, offering examples of applications in medicine, biometrics, multimedia, and computer vision.
Image Manifolds which are Isometric to Euclidean Space
D. Donoho and C. Grimes. Journal of Mathematical Imaging and Vision, 2005.
This paper considers a special kind of image data: families of images generated by articulation of one or several objects in a scene. Their lack of differentiability when the images have edges is studied, and it is shown that there exists a natural renormalization of geodesic distance which yields a well-defined metric.
Extracting and composing robust features with denoising autoencoders
This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
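
A sketch of that training principle: corrupt the input, then train the network to reconstruct the clean original. Gaussian corruption is shown for simplicity (the paper also considers other noise types, such as masking), and `model` is assumed to return a (reconstruction, code) pair as in the autoencoder sketch above.

import torch
import torch.nn.functional as F

def denoising_step(model, x, noise_std=0.3):
    """One denoising-autoencoder training objective."""
    x_noisy = x + noise_std * torch.randn_like(x)   # partial corruption
    x_hat, _ = model(x_noisy)
    return F.mse_loss(x_hat, x)                     # target is the clean x
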
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
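
For reference, one Adam parameter update following the published algorithm, written in plain NumPy with variable names of my choosing:

import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: adaptive estimates of the first and second moments
    of the gradient, with bias correction for the zero initialization."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # t is the 1-indexed step count
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v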