# Transformations between deep neural networks

```bibtex
@article{Bertalan2020TransformationsBD,
  title   = {Transformations between deep neural networks},
  author  = {Tom S. Bertalan and Felix Dietrich and I. Kevrekidis},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2007.05646}
}
```

We propose to test, and when possible establish, an equivalence between two different artificial neural networks by attempting to construct a data-driven transformation between them, using manifold-learning techniques. In particular, we employ diffusion maps with a Mahalanobis-like metric. If the construction succeeds, the two networks can be thought of as belonging to the same equivalence class.
We first discuss transformation functions between only the outputs of the two networks; we then…
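The abstract above describes testing network equivalence by constructing a data-driven transformation via diffusion maps. As a rough illustration of the underlying manifold-learning step, here is a minimal diffusion-map embedding in NumPy. Note this sketch uses a plain Gaussian kernel on Euclidean distances rather than the Mahalanobis-like metric the paper employs; the function name `diffusion_map` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_coords=2):
    """Minimal diffusion-map sketch: Gaussian kernel, alpha=1 density
    normalization, then eigenvectors of the resulting Markov matrix."""
    # Pairwise squared Euclidean distances (the paper uses a
    # Mahalanobis-like metric here instead).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / epsilon)                    # Gaussian kernel
    # Density normalization (alpha = 1) to reduce sampling-density effects.
    q = K.sum(axis=1)
    K_norm = K / np.outer(q, q)
    d = K_norm.sum(axis=1)
    # Symmetric conjugate of the row-stochastic Markov matrix, for a
    # stable eigendecomposition.
    S = K_norm / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Recover right eigenvectors of the Markov matrix.
    psi = vecs / np.sqrt(d)[:, None]
    # Drop the trivial constant eigenvector; scale by eigenvalues.
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1]

# Toy data: a noisy circle, whose intrinsic one-dimensional structure
# the leading diffusion coordinates should recover.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((200, 2))
coords = diffusion_map(X, epsilon=0.5)
print(coords.shape)  # (200, 2)
```

In the paper's setting, such diffusion coordinates would be computed on observations of two networks' activations, and the transformation between the networks sought in that shared coordinate system.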


#### References

*Showing 1–10 of 24 references.*

Generative Adversarial Networks

- Mathematics, Computer Science
- ArXiv
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Intrinsic Isometric Manifold Learning with Application to Localization

- Computer Science, Mathematics
- SIAM J. Imaging Sci.
- 2019

This work builds a new metric and proposes a method for its robust estimation, assuming mild statistical priors and using artificial neural networks for metric regularization and parametrization; it demonstrates a successful application to unsupervised indoor localization in ad-hoc sensor networks.

Parsimonious Representation of Nonlinear Dynamical Systems Through Manifold Learning: A Chemotaxis Case Study

- Mathematics, Physics
- 2015

Nonlinear manifold learning algorithms, such as diffusion maps, have been fruitfully applied in recent years to the analysis of large and complex data sets. However, such algorithms still encounter…

A Geometric Approach to the Transport of Discontinuous Densities

- Mathematics, Physics
- SIAM/ASA J. Uncertain. Quantification
- 2020

Using ideas from attractor reconstruction in dynamical systems, it is demonstrated how additional information in the form of short histories of an observation process can help to recover the underlying manifold.

Auto-Encoding Variational Bayes

- Mathematics, Computer Science
- ICLR
- 2014

This paper introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.

Wasserstein Generative Adversarial Networks

- Computer Science
- ICML
- 2017

This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.

Nearly Isometric Embedding by Relaxation

- Computer Science, Mathematics
- NIPS
- 2016

This paper presents an embedding algorithm that directly computes, for any data embedding Y, a distortion loss, and iteratively updates Y in order to decrease it; the algorithm's superiority in obtaining low-distortion embeddings is confirmed.

Manifold learning for organizing unstructured sets of process observations.

- Physics, Computer Science
- Chaos
- 2020

This paper uses manifold learning to organize unstructured ensembles of observations ("trials") of a system's response surface, and demonstrates how this observation-based reconstruction naturally leads to informative transport maps between the input parameter space and output/state variable spaces.

Local Kernels and the Geometric Structure of Data

- Mathematics
- 2014

We introduce a theory of local kernels, which generalize the kernels used in the standard diffusion maps construction of nonparametric modeling. We prove that evaluating a local kernel on a data set…

Diffusion maps

- Mathematics
- Applied and Computational Harmonic Analysis
- 2006

In this paper, we provide a framework based upon diffusion processes for finding meaningful geometric descriptions of data sets. We show that eigenfunctions of Markov matrices can be used to…