Corpus ID: 854852

Recurrent Inference Machines for Solving Inverse Problems

@article{Putzky2017RecurrentIM,
  title={Recurrent Inference Machines for Solving Inverse Problems},
  author={Patrick Putzky and Max Welling},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.04008}
}
Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN) which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its…
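To make the core idea concrete, here is a minimal numpy sketch of one possible unrolled update of this kind. The linear forward model, the single-layer recurrent cell standing in for the learned update, and the random untrained weights are all illustrative assumptions, not the architecture of the paper:

    import numpy as np

    # Toy linear inverse problem y = A x + noise, with the forward operator A
    # assumed known. The learned inference step below maps the likelihood
    # gradient and the current estimate to an additive refinement.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 50))
    x_true = rng.normal(size=50)
    y = A @ x_true + 0.01 * rng.normal(size=20)

    # Hypothetical parameters of the learned update network (random and
    # untrained here; joint training via BPTT is what the paper describes).
    W_g = 0.01 * rng.normal(size=(50, 50))
    W_x = 0.01 * rng.normal(size=(50, 50))
    W_h = 0.01 * rng.normal(size=(50, 50))

    def grad_log_likelihood(x):
        # Gradient of the Gaussian log-likelihood log p(y | x).
        return A.T @ (y - A @ x)

    x = np.zeros(50)
    h = np.zeros(50)
    for t in range(10):
        g = grad_log_likelihood(x)
        h = np.tanh(W_g @ g + W_x @ x + W_h @ h)  # recurrent state update
        x = x + h                                 # additive refinement of the estimate

The point of the construction is that nothing in the update rule is hand-derived from a specific optimization algorithm; with trained weights, the recurrence itself is the inference algorithm.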
Recurrent machines for likelihood-free inference
This work designs a recurrent inference machine that learns a sequence of parameter updates leading to good parameter estimates, without ever specifying some explicit notion of divergence between the simulated data and the real data distributions.
Combining Generative and Discriminative Models for Hybrid Inference
This work proposes a hybrid model that combines graphical inference with a learned inverse model structured as a graph neural network, while the iterative algorithm as a whole is formulated as a recurrent neural network.
Iterative Amortized Inference
This work proposes iterative inference models, which learn to perform inference optimization through repeatedly encoding gradients, and demonstrates the inference optimization capabilities of these models and shows that they outperform standard inference models on several benchmark data sets of images and text.
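A rough sketch of that gradient-encoding loop on a toy Gaussian model with a fixed-variance posterior; the update weights W are an untrained placeholder, not the paper's inference network:

    import numpy as np

    # Iterative inference model sketch: rather than a one-shot encoder, the
    # variational mean mu is refined by repeatedly encoding the ELBO gradient.
    # Toy model: x = z + noise, p(z) = N(0, I), q(z) = N(mu, sigma^2 I),
    # sigma fixed at 0.1.
    rng = np.random.default_rng(1)
    z_true = rng.normal(size=8)
    x_obs = z_true + 0.1 * rng.normal(size=8)

    W = 0.05 * rng.normal(size=(8, 8))  # hypothetical learned update weights

    def elbo_grad(mu):
        # d(ELBO)/d(mu) for this toy model: the likelihood term pulls mu
        # toward x_obs, the prior/KL term pulls it toward zero.
        return (x_obs - mu) / 0.1**2 - mu

    mu = np.zeros(8)
    for _ in range(5):
        mu = mu + np.tanh(W @ elbo_grad(mu))  # learned encoding of the gradient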
CosmicRIM : Reconstructing Early Universe by Combining Differentiable Simulations with Recurrent Inference Machines
Reconstructing the Gaussian initial conditions at the beginning of the Universe from the survey data in a forward modeling framework is a major challenge in cosmology. This requires solving a high…
Recurrent Localization Networks applied to the Lippmann-Schwinger Equation
A novel machine learning approach is presented for solving equations of the generalized Lippmann-Schwinger (L-S) type, which leverages the generalizability and computational efficiency of machine learning approaches while also permitting a physics-based interpretation.
Recurrent Inference Machines as inverse problem solvers for MR relaxometry
Recurrent Inference Machines are used to perform T1 and T2 mapping and it is shown that they can also be used to optimize non-linear problems and estimate relaxometry maps with high precision and accuracy.
R3L: Connecting Deep Reinforcement Learning to Recurrent Neural Networks for Image Denoising via Residual Recovery
It is demonstrated that the proposed R3L has better generalizability and robustness in image denoising when the estimated noise level varies, compared to its counterparts using deterministic training, as well as various state-of-the-art image denoising algorithms.
Solving ill-posed inverse problems using iterative deep neural networks
The method builds on ideas from classical regularization theory and recent advances in deep learning, performing learning while making use of prior information about the inverse problem encoded in the forward operator, noise model, and a regularizing functional; the result is a gradient-like iterative scheme.
Data-driven Reconstruction of Gravitationally Lensed Galaxies Using Recurrent Inference Machines
We present a machine learning method for the reconstruction of the undistorted images of background sources in strongly lensed systems. This method treats the source as a pixelated image and utilizes…
Equivariant neural networks for inverse problems
This work demonstrates that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach, and designs learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.

References

Showing 1-10 of 44 references
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Auto-Encoding Variational Bayes
This paper introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Conditional Random Fields as Recurrent Neural Networks
A new form of convolutional neural network is introduced that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
Exploiting Inference for Approximate Parameter Learning in Discriminative Fields: An Empirical Study
An approach is presented for approximate maximum likelihood parameter learning in discriminative field models, based on approximating true expectations with simple piecewise constant functions constructed using inference techniques.
Learning to learn by gradient descent by gradient descent
This paper shows how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way.
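A minimal sketch of the learned-optimizer idea on a toy quadratic; the linear recurrent cell below is an untrained stand-in for the trained LSTM optimizer used in that paper:

    import numpy as np

    # The parameter update is produced by a recurrent function of the gradient
    # rather than by a hand-designed rule such as SGD or Adam. With trained
    # weights, this cell would learn to exploit structure in the loss family;
    # the random weights here are placeholders.
    rng = np.random.default_rng(4)
    W_g = 0.1 * rng.normal(size=(4, 4))
    W_h = 0.1 * rng.normal(size=(4, 4))

    def loss_grad(theta):
        return 2.0 * (theta - 1.0)        # gradient of ||theta - 1||^2

    theta = np.zeros(4)
    h = np.zeros(4)
    for _ in range(20):
        g = loss_grad(theta)
        h = np.tanh(W_g @ g + W_h @ h)    # optimizer's recurrent state
        theta = theta + h                 # update proposed by the optimizer cell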
Convex variational Bayesian inference for large scale generalized linear models
We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic…
Proximal Deep Structured Models
A powerful deep structured model is proposed that is able to learn complex non-linear functions encoding the dependencies between continuous output variables, and it is shown that inference in this model using proximal methods can be efficiently performed as a feed-forward pass of a special type of deep recurrent neural network.
Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting
M. Wainwright, J. Mach. Learn. Res., 2006
The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
Cascades of Regression Tree Fields for Image Restoration
A cascade model for image restoration is proposed that consists of a Gaussian CRF at each stage; each stage is semi-parametric, i.e., it depends on the instance-specific parameters of the restoration problem, such as the blur kernel.