Inference in Deep Networks in High Dimensions

@article{Fletcher2018InferenceID,
  title={Inference in Deep Networks in High Dimensions},
  author={Alyson K. Fletcher and Sundeep Rangan},
  journal={2018 IEEE International Symposium on Information Theory (ISIT)},
  year={2018},
  pages={1884-1888}
}
  • A. Fletcher, S. Rangan
  • Published 20 June 2017
  • Computer Science
  • 2018 IEEE International Symposium on Information Theory (ISIT)
Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs. Inference is also required for sampling during stochastic training of these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases and activation functions) are… 
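
Where the abstract cuts off, the setting it describes is that of inverting a known multi-layer generative map: given the observed output of the network, estimate the input that produced it. The sketch below is not the message-passing method studied in the paper; it is a minimal illustration of the inference problem itself, using plain gradient descent on a regularized least-squares (MAP-style) objective for a toy two-layer tanh network with random weights. All dimensions, the step size, the iteration count, and the Gaussian prior on the input are illustrative assumptions.

import numpy as np

# Toy two-layer generative map y = W2 @ tanh(W1 @ z); we recover z from a noisy y
# by gradient descent on 0.5*||W2 tanh(W1 z) - y||^2 + 0.5*lam*||z||^2.
rng = np.random.default_rng(0)
d_in, d_hid, d_out = 20, 100, 200
W1 = rng.standard_normal((d_hid, d_in)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_out, d_hid)) / np.sqrt(d_hid)

z_true = rng.standard_normal(d_in)
y = W2 @ np.tanh(W1 @ z_true) + 0.01 * rng.standard_normal(d_out)  # observed output

lam, step = 1e-3, 0.01
z = np.zeros(d_in)                       # start the search at the prior mean
for _ in range(10000):
    h = np.tanh(W1 @ z)
    r = W2 @ h - y                       # residual in output space
    grad = W1.T @ ((1 - h**2) * (W2.T @ r)) + lam * z   # chain rule + prior term
    z -= step * grad

# Relative reconstruction error; typically small for this well-posed toy setup.
print(np.linalg.norm(z - z_true) / np.linalg.norm(z_true))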

Citations

Inference With Deep Generative Priors in High Dimensions

This paper shows that the performance of ML-VAMP can be exactly predicted in a certain high-dimensional random limit, and provides a computationally efficient method for multi-layer inference with an exact performance characterization and testable conditions for optimality in the large-system limit.

Asymptotics of MAP Inference in Deep Networks

This work considers a recently-developed method, multilayer vector approximate message passing (ML-VAMP), to study MAP inference in deep networks and shows that the mean squared error of the ML-VAMP estimate can be exactly and rigorously characterized in a certain high-dimensional random limit.

Inference in Multi-Layer Networks with Matrix-Valued Unknowns

A unified approximation algorithm for both MAP and MMSE inference is proposed by extending a recently-developed Multi-Layer Vector Approximate Message Passing (ML-VAMP) algorithm to handle matrix-valued unknowns.

Matrix inference and estimation in multi-layer models

It is shown that the performance of the proposed multi-layer matrix vector approximate message passing algorithm can be exactly predicted in a certain random large-system limit, where the dimensions N × d of the unknown quantities grow as N → ∞ with d fixed.

Entropy and mutual information in models of deep neural networks

An experimental framework with generative models of synthetic datasets is proposed, on which deep neural networks are trained under a weight constraint designed so that the paper's key assumption is verified during learning; it is concluded that, in this setting, the relationship between compression and generalization remains elusive.

Generalization Error of Generalized Linear Models in High Dimensions

This work provides a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.

Mean-field inference methods for neural networks

  • Marylou Gabrié
  • Computer Science
    Journal of Physics A: Mathematical and Theoretical
  • 2020
A selection of classical mean-field methods and recent progress relevant for inference in neural networks is reviewed; the principles behind the derivations of high-temperature expansions, the replica method and message passing algorithms are recalled, highlighting their equivalences and complementarities.

Inverting Deep Generative models, One layer at a time

This paper shows that, in the realizable case, single-layer inversion can be performed exactly in polynomial time by solving a linear program, and provides provable error bounds in different norms for reconstructing noisy observations.
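
The summary above says that, in the realizable case, single-layer inversion reduces to a linear program. One plausible way to read that reduction for a noiseless ReLU layer y = relu(W @ x): active outputs pin down equalities W_i x = y_i, inactive outputs only impose W_i x <= 0, and any feasible point of the resulting linear program is a valid preimage. The sketch below illustrates this feasibility formulation with scipy.optimize.linprog; it is an assumption-laden reading, not the paper's exact construction, and it says nothing about the error bounds for noisy observations.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 40, 10                          # output and input dimensions (n > k: expansive layer)
W = rng.standard_normal((n, k))
x_true = rng.standard_normal(k)
y = np.maximum(W @ x_true, 0.0)        # one ReLU layer, no noise (realizable case)

on = y > 0                             # active units: equality constraints W_i x = y_i
off = ~on                              # inactive units: inequality constraints W_i x <= 0
res = linprog(c=np.zeros(k),           # pure feasibility problem, so a zero objective
              A_eq=W[on], b_eq=y[on],
              A_ub=W[off], b_ub=np.zeros(int(off.sum())),
              bounds=[(None, None)] * k)

print(np.max(np.abs(res.x - x_true)))  # recovery error; essentially zero here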

The Spiked Matrix Model With Generative Priors

A rigorous expression for the performance of the Bayes-optimal estimator in the high-dimensional regime is established, the statistical threshold for weak recovery of the spike is identified, and it is shown that linearising the message passing algorithm yields a simple spectral method that also achieves the optimal threshold for reconstruction.
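
For context on the "simple spectral method" mentioned above: in a spiked matrix model, the baseline spectral estimator is just the leading eigenvector of the observed matrix. The sketch below is that vanilla PCA baseline on a spiked Wigner matrix with a generic (not generative-prior) spike; the paper's linearized message-passing method refines this idea to exploit the prior and is not reproduced here. Dimensions and spike strength are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, snr = 1000, 2.5                       # dimension and spike strength (illustrative)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                   # hidden unit-norm spike
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)           # symmetric Wigner noise matrix
Y = snr * np.outer(v, v) + W             # spiked observation

eigvals, eigvecs = np.linalg.eigh(Y)     # spectral estimate: top eigenvector of Y
v_hat = eigvecs[:, -1]
print(abs(v_hat @ v))                    # overlap with the spike; large above the spectral threshold
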
...

References

Showing 1-10 of 52 references

Learning deep generative models

The aim of the thesis is to demonstrate that deep generative models that contain many layers of latent variables and millions of parameters can be learned efficiently, and that the learned high-level feature representations can be successfully applied in a wide spectrum of application domains, including visual object recognition, information retrieval, and classification and regression tasks.

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.

Deep Generative Stochastic Networks Trainable by Backprop

Theorems are provided that generalize recent work on the probabilistic interpretation of denoising autoencoders, yielding along the way an interesting justification for dependency networks and generalized pseudolikelihood.

Additivity of information in multilayer networks via additive Gaussian noise transforms

  • G. Reeves
  • Computer Science
    2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2017
This paper provides a new method for analyzing the fundamental limits of statistical inference in settings where the model is known and has close connections to free probability theory for random matrices.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.

Expectation propagation for neural networks with sparsity-promoting priors

A novel approach for nonlinear regression using a two-layer neural network (NN) model structure with sparsity-favoring hierarchical priors on the network weights is proposed and a factorized posterior approximation is derived.

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

This work develops an approach to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process, then learns a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data.
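
As a concrete picture of the forward half of that construction (the part that destroys structure), the sketch below iteratively mixes a toy data vector toward an isotropic Gaussian under an illustrative variance schedule. The learned reverse process, which is the substance of the paper, is omitted, and the specific Gaussian transition form and schedule are assumptions borrowed from this line of work rather than the paper's exact choices.

import numpy as np

def forward_diffusion(x0, betas, rng):
    """Gradually destroy structure: at each step, shrink the sample and add
    Gaussian noise according to the variance schedule `betas`."""
    x = x0
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
x0 = 3.0 + 0.1 * rng.standard_normal(5)   # toy "data": a tight cluster far from the origin
betas = np.linspace(1e-4, 0.2, 50)        # illustrative noise schedule (an assumption)
xT = forward_diffusion(x0, betas, rng)
print(x0.round(2), xT.round(2))           # xT is close to a standard Gaussian draw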

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time, and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.

A Probabilistic Framework for Deep Learning

It is demonstrated that max-sum inference in the DRMM yields an algorithm that exactly reproduces the operations in deep convolutional neural networks (DCNs), providing a first principles derivation.

Exact solutions to the nonlinear dynamics of learning in deep linear neural networks

It is shown that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions.
...