Corpus ID: 173990640

Topological Autoencoders

@inproceedings{Moor2020TopologicalA,
  title={Topological Autoencoders},
  author={Michael Moor and Max Horn and Bastian Alexander Rieck and Karsten M. Borgwardt},
  booktitle={ICML},
  year={2020}
}
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is… 
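As a rough illustration of the idea in the abstract (not the authors' reference implementation), the following sketch restricts to 0-dimensional persistent homology, where the persistence pairing of a Vietoris-Rips filtration coincides with the minimum-spanning-tree edges of the pairwise-distance matrix. All function and variable names are illustrative assumptions.

# A minimal sketch of the topological loss idea, assuming 0-dimensional
# persistent homology only, so the persistence pairing reduces to the
# minimum spanning tree of the pairwise-distance matrix.
import torch
from scipy.sparse.csgraph import minimum_spanning_tree


def persistence_pairs(dist: torch.Tensor):
    """Return MST edge indices (i, j) of a dense distance matrix (0-dim pairing)."""
    # Assumes distinct points, so all off-diagonal distances are positive.
    mst = minimum_spanning_tree(dist.detach().cpu().numpy())
    rows, cols = mst.nonzero()
    return (torch.as_tensor(rows, dtype=torch.long),
            torch.as_tensor(cols, dtype=torch.long))


def topological_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Match the distances selected by each space's pairing in the other space."""
    dx = torch.cdist(x, x)  # input-space distances (fixed data)
    dz = torch.cdist(z, z)  # latent-space distances (depend on the encoder)
    ix, jx = persistence_pairs(dx)
    iz, jz = persistence_pairs(dz)
    return (((dx[ix, jx] - dz[ix, jx]) ** 2).sum()
            + ((dz[iz, jz] - dx[iz, jz]) ** 2).sum())

In training, such a term would be added to the usual reconstruction loss with a weighting factor, pushing the encoder to reproduce the connectivity structure of each mini-batch in the latent space.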

Citations of this paper

Challenging Euclidean Topological Autoencoders
TLDR
In experiments on real-world image datasets, this work finds that the Euclidean formulation of TopoAE is surprisingly competitive with more elaborate, perceptually inspired image distances.
Extendable and invertible manifold learning with geometry regularized autoencoders
TLDR
This work presents a new method for integrating manifold learning with autoencoders by incorporating a geometric regularization term in the autoencoder's bottleneck, based on the diffusion potential distances from the recently proposed PHATE visualization method.
ToFU: Topology functional units for deep learning
TLDR
This work proposes ToFU, a new trainable neural network unit whose activation is a persistence-diagram dissimilarity function, enabling it to measure and learn the topology of data and leverage it in machine learning tasks.
Topologically Regularized Data Embeddings
TLDR
This work introduces a new set of topological losses and proposes their use for topologically regularizing data embeddings so that they naturally represent a prespecified topological model.
Local distance preserving auto-encoders using Continuous k-Nearest Neighbours graphs
TLDR
This paper introduces several autoencoder models that preserve local distances when mapping from data space to latent space, using a local distance-preserving loss based on the continuous k-nearest-neighbours graph, which is known to capture topological features at all scales simultaneously (a simplified sketch appears after this list).
Markov-Lipschitz Deep Learning
We propose a novel framework, called Markov-Lipschitz deep learning (MLDL), to tackle geometric deterioration caused by collapse, twisting, or crossing in vector-based neural network transformations.
Invertible Manifold Learning for Dimension Reduction
TLDR
The proposed invertible manifold learning (inv-ML) not only achieves better invertible NLDR than typical existing methods but also reveals the characteristics of the learned manifolds through linear interpolation in latent space.
Neighborhood Reconstructing Autoencoders
TLDR
This work proposes a new graph-based autoencoder, NRAE, that mitigates overfitting and improves local connectivity in the learned manifold, in some cases by significant margins.
PLLay: Efficient Topological Layer based on Persistent Landscapes
TLDR
A task-optimal structure of PLLay is learned during training via backpropagation, without requiring any input featurization or data preprocessing, and a stability analysis shows the proposed layer to be robust against noise and outliers.
Parametric UMAP Embeddings for Representation and Semisupervised Learning
TLDR
This work demonstrates that parametric UMAP performs comparably to its nonparametric counterpart while conferring the benefit of a learned parametric mapping. It further explores UMAP as a regularizer, constraining the latent distribution of autoencoders, parametrically varying global structure preservation, and improving classifier accuracy for semisupervised learning by capturing structure in unlabeled data.
...
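A recurring idea in the list above is a loss that preserves each point's distances to its nearest input-space neighbours. The simplified sketch below substitutes a plain k-nearest-neighbour graph for the continuous kNN construction cited above; the function name and choice of k are illustrative assumptions.

# A simplified sketch of a local distance-preserving loss, using a plain
# kNN graph in place of the continuous kNN construction described above.
import torch


def local_distance_loss(x: torch.Tensor, z: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Penalize input/latent distance mismatch over each point's k nearest input-space neighbours."""
    dx = torch.cdist(x, x)
    dz = torch.cdist(z, z)
    # The k+1 smallest distances include each point itself (distance 0); drop it.
    knn_idx = dx.topk(k + 1, largest=False).indices[:, 1:]
    return ((dx.gather(1, knn_idx) - dz.gather(1, knn_idx)) ** 2).mean()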

References

Showing 1-10 of 69 references
Connectivity-Optimized Representation Learning via Persistent Homology
TLDR
This work controls the connectivity of an autoencoder's latent space via a novel, differentiable loss operating on information from persistent homology, and presents a theoretical analysis of the properties induced by the loss.
A Topological Regularizer for Classifiers via Persistent Homology
TLDR
This paper proposes to enforce structural simplicity of the classification boundary by regularizing its topological complexity, measuring the importance of topological features in a meaningful manner and providing direct control over spurious topological structures.
Deep Learning with Topological Signatures
TLDR
This work proposes a technique, realized as a novel input layer with favorable theoretical properties, for feeding topological signatures into deep neural networks and learning a task-optimal representation during training.
Graph Filtration Learning
TLDR
An approach to learning with graph-structured data in the problem domain of graph classification is proposed, and a novel type of readout operation to aggregate node features into a graph-level representation is presented.
Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology
TLDR
This work proposes neural persistence, a complexity measure for neural network architectures based on topological data analysis of weighted stratified graphs, and derives a neural-persistence-based stopping criterion that shortens training while achieving accuracies comparable to those of early stopping based on validation loss.
Homology-Preserving Dimensionality Reduction via Manifold Landmarking and Tearing
TLDR
Inspired by recent work in topological data analysis, this work pursues a dimensionality reduction technique that achieves homology preservation, a generalized version of topology preservation.
A stable multi-scale kernel for topological machine learning
TLDR
This work designs a multi-scale, positive-definite kernel for persistence diagrams, a stable summary representation of topological features in data, and proves its stability with respect to the 1-Wasserstein distance.
Persistence Images: A Stable Vector Representation of Persistent Homology
TLDR
This work converts a persistence diagram (PD) to a finite-dimensional vector representation, which it calls a persistence image, and proves the stability of this transformation with respect to small perturbations in the inputs (a minimal sketch appears after this list).
On the Local Behavior of Spaces of Natural Images
TLDR
A theoretical model for the high-density 2-dimensional submanifold of ℳ is presented, showing that it has the topology of the Klein bottle, and a polynomial representation is used to coordinatize various subspaces of ℳ.
PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures
TLDR
This work shows how graphs can be encoded by (extended) persistence diagrams in a provably stable way and proposes a general and versatile framework for learning vectorizations of persistence diagrams, which encompasses most of the vectorization techniques used in the literature.
...
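Several of the references above vectorize persistence diagrams; the persistence-image construction is concrete enough to sketch. The code below follows the summarized recipe (birth-persistence coordinates, persistence-weighted Gaussian bumps on a grid); the grid bounds, resolution, and bandwidth are illustrative choices, not the paper's defaults.

# A minimal sketch of a persistence image: map (birth, death) pairs to
# (birth, persistence) coordinates and sum persistence-weighted Gaussian
# bumps on a fixed grid. Parameters here are illustrative assumptions.
import numpy as np


def persistence_image(pairs: np.ndarray, res: int = 20, sigma: float = 0.1,
                      x_max: float = 1.0, y_max: float = 1.0) -> np.ndarray:
    """pairs: (n, 2) array of (birth, death); returns a res-by-res image."""
    births = pairs[:, 0]
    pers = pairs[:, 1] - pairs[:, 0]                     # persistence = death - birth
    gx, gy = np.meshgrid(np.linspace(0.0, x_max, res),   # birth axis
                         np.linspace(0.0, y_max, res))   # persistence axis
    img = np.zeros((res, res))
    for b, p in zip(births, pers):
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img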