Local conformal autoencoder for standardized data coordinates

@article{peterfreund2020local,
  title={Local conformal autoencoder for standardized data coordinates},
  author={Erez Peterfreund and Ofir Lindenbaum and Felix Dietrich and Tom S. Bertalan and Matan Gavish and Ioannis G. Kevrekidis and Ronald R. Coifman},
  journal={Proceedings of the National Academy of Sciences of the United States of America},
  pages={30918--30927}
}
Significance: A fundamental issue in empirical science is the ability to calibrate between different types of measurements or observations of the same phenomenon. This naturally suggests the selection of canonical variables, in the spirit of principal components, to enable matching and calibration among different observation modalities and instruments. We develop a method for extracting standardized, nonlinear, intrinsic coordinates from measured data, leading to a generalized isometric embedding of the…
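
The core idea of such standardized coordinates can be illustrated with a minimal sketch, which is not the paper's actual neural architecture: a linear stand-in "encoder" W is scored by how far the covariance of each embedded measurement burst (a small local cloud of repeated observations) deviates from a fixed isotropic target. All names here (bursts, whitening_loss, sigma) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "bursts": small local clouds of repeated measurements around points on a circle.
anchors = rng.uniform(0.0, 2.0 * np.pi, size=32)
bursts = np.stack([
    np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.standard_normal((16, 2))
    for t in anchors
])  # shape: (32 bursts, 16 samples, 2 ambient dims)

# A linear stand-in for the neural encoder (one latent coordinate).
W = rng.standard_normal((2, 1))

def whitening_loss(W, bursts, sigma=0.01):
    """Average squared deviation of each embedded burst covariance from sigma^2 * I."""
    total = 0.0
    for cloud in bursts:
        z = cloud @ W                                # embed the burst
        c = np.atleast_2d(np.cov(z, rowvar=False))   # covariance of the embedded cloud
        total += ((c / sigma**2 - np.eye(c.shape[0])) ** 2).sum()
    return total / len(bursts)

print(whitening_loss(W, bursts))
```

Driving this loss toward zero over all bursts pushes the encoder to behave, locally, like a conformal map that standardizes the measurement noise.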

Learning low bending and low distortion manifold embeddings

The embedding into latent space is regularized via a loss function that promotes an embedding that is as isometric and as flat as possible, and this loss is shown to be consistent with a geometric loss functional defined directly on the embedding map.

Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows

It is argued that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data, and a series of conformal building blocks are presented and applied in experiments to demonstrate that flows can model manifolds with tractable densities without sacrificing tractable likelihoods.

Differentiable Unsupervised Feature Selection

It is proved that the solution to the PRAE problem is equivalent to the solution of RAE, and it is demonstrated that using RAE for anomaly detection leads to state-of-the-art results on various benchmark datasets.

Computing committors in collective variables via Mahalanobis diffusion maps

This work adapts the diffusion map with Mahalanobis kernel proposed by Singer and Coifman (2008) for the SDE describing molecular dynamics in collective variables in which the diffusion matrix is position-dependent and, unlike the case considered, is not associated with a diffeomorphism.

Convergence rates of vector-valued local polynomial regression

It is proved that the optimal rates of convergence for a k-times smooth function f : R^d → R^n are also achievable by local polynomial regression in the case of a high-dimensional target, given some assumptions on the noise distribution.

Semi-Supervised Source Localization in Reverberant Environments With Deep Generative Modeling

This paper presents the first approach to modeling the physics of acoustic propagation using deep generative modeling, and finds that VAE-SSL can outperform the conventional approaches and the CNN in label-limited scenarios.

Weakly Supervised Indoor Localization via Manifold Matching

A weakly supervised method that only requires the location of a small number of devices and yields an accuracy of a few meters, which is on par with fully supervised approaches and ideal for implementation in indoor localization systems.

Research on Recommendation of Big Data for Higher Education Based on Deep Learning

The results show that the autoencoder-based intelligent recommendation method is the most efficient of those compared and can score different recommended articles, realizing the recommendation of different educational resources.

Probabilistic Robust Autoencoders for Outlier Detection

Two probabilistic relaxations of RAE are proposed, which are differentiable and alleviate the need for a combinatorial search and show that PRAE can accurately remove outliers in a wide range of contamination levels.

Intrinsic Isometric Manifold Learning with Application to Localization

This work builds a new metric and proposes a method for its robust estimation by assuming mild statistical priors and by using artificial neural networks as a mechanism for metric regularization and parametrization, and shows successful application to unsupervised indoor localization in ad-hoc sensor networks.

Empirical intrinsic geometry for nonlinear modeling and time series filtering

EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series and enables us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework.

An Emergent Space for Distributed Data With Hidden Internal Order Through Manifold Learning

This work validates this “emergent space” reconstruction for time series sampled without space labels in known PDEs, and discusses how data-driven “spatial” coordinates can be extracted in ways invariant to the nature of the measuring instrument.

Non-linear dimensionality reduction: Riemannian metric estimation and the problem of geometric discovery

This work proposes a new paradigm that offers a guarantee, under reasonable assumptions, that any manifold learning algorithm will preserve the geometry of a data set, based on augmenting the output of embedding algorithms with geometric information embodied in the Riemannian metric of the manifold.

Heterogeneous Datasets Representation and Learning using Diffusion Maps and Laplacian Pyramids

A method is proposed for representing and learning heterogeneous datasets by using diffusion maps to unify and embed the heterogeneous data, and by replacing the geometric harmonics with the Laplacian pyramid extension.
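
As background, the plain diffusion-map embedding that this and several entries above build on can be sketched in a few lines of numpy. This is a generic sketch of the standard construction (Gaussian kernel, Markov normalization, top nontrivial eigenvectors), not the heterogeneous-data method of the paper; the point cloud and bandwidth heuristic are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))  # toy point cloud

# Gaussian kernel on pairwise squared distances (bandwidth via the median heuristic).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Row-normalize to a Markov transition matrix and take the leading eigenpairs.
P = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)

# Skip the trivial constant eigenvector (eigenvalue 1); keep two diffusion coordinates,
# each scaled by its eigenvalue.
embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]
print(embedding.shape)  # (100, 2)
```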

Nearly Isometric Embedding by Relaxation

An embedding algorithm that directly computes, for any data embedding Y, a distortion loss Loss(Y), and iteratively updates Y in order to decrease it; the superiority of this algorithm in obtaining low-distortion embeddings is confirmed.

Autoencoders, Unsupervised Learning, and Deep Architectures

  • P. Baldi
  • Computer Science
    ICML Unsupervised and Transfer Learning
  • 2012
The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.

A global geometric framework for nonlinear dimensionality reduction.

An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set and efficiently computes a globally optimal solution, and is guaranteed to converge asymptotically to the true structure.
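
The recipe this entry summarizes (local metric information → geodesic distances via graph shortest paths → classical MDS) can be sketched as follows. This is a generic illustration of that pipeline on a toy curve; the neighborhood size k and the data are arbitrary choices, not parameters from the paper.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Points along a circular arc embedded in 2-D.
t = np.linspace(0.0, 3.0, 50)
X = np.column_stack([np.cos(t), np.sin(t)])

# k-nearest-neighbor graph with Euclidean edge weights (inf = no edge).
d = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
k = 5
G = np.full_like(d, np.inf)
for i in range(len(X)):
    nbrs = np.argsort(d[i])[1:k + 1]  # k nearest, excluding the point itself
    G[i, nbrs] = d[i, nbrs]

# Geodesic distances via graph shortest paths, then classical MDS on them.
D = shortest_path(G, directed=False)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * H @ (D ** 2) @ H                # double-centered squared distances
vals, vecs = np.linalg.eigh(B)
Y = vecs[:, -1:] * np.sqrt(vals[-1:])      # 1-D coordinate from the top eigenpair
print(Y.shape)  # (50, 1)
```

The recovered 1-D coordinate orders the points along the arc, approximating arc length up to sign and shift.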