# Recurrent Inference Machines for Solving Inverse Problems

```bibtex
@article{Putzky2017RecurrentIM,
  title={Recurrent Inference Machines for Solving Inverse Problems},
  author={Patrick Putzky and Max Welling},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.04008}
}
```

Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN) which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its…
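The unrolled-update structure the abstract describes can be sketched in a few lines of numpy. This is a minimal illustration of the iteration scheme only: where a trained RIM would apply a learned RNN cell h_φ to the likelihood gradient and a hidden state, the sketch below substitutes a fixed momentum step, so nothing here is learned.

```python
import numpy as np

# Toy linear inverse problem: y = A @ x_true + noise.
rng = np.random.default_rng(0)
n, m = 8, 5                       # measurements, unknowns
A = rng.normal(size=(n, m))
x_true = rng.normal(size=m)
y = A @ x_true + 0.01 * rng.normal(size=n)

def likelihood_grad(x):
    # Gradient of the Gaussian log-likelihood -0.5 * ||y - A x||^2.
    return A.T @ (y - A @ x)

# Stand-in for the learned recurrent update h_phi: in the paper this is a
# trained RNN cell consuming the gradient and a hidden state; here it is a
# fixed momentum step, purely to show the unrolled iteration structure.
def rim_step(x, s, step=0.05, momentum=0.9):
    g = likelihood_grad(x)
    s = momentum * s + g          # hidden state accumulates gradient info
    return x + step * s, s

x = np.zeros(m)
s = np.zeros(m)
for _ in range(200):              # unrolled inference iterations
    x, s = rim_step(x, s)

print(np.linalg.norm(A @ x - y))  # residual shrinks toward the noise level
```

In a real RIM, `rim_step` would be a neural network whose parameters are trained end-to-end by back-propagation through the whole unrolled loop.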

#### 79 Citations

Recurrent machines for likelihood-free inference

- Mathematics, Computer Science
- ArXiv
- 2018

This work designs a recurrent inference machine that learns a sequence of parameter updates leading to good parameter estimates, without ever specifying some explicit notion of divergence between the simulated data and the real data distributions.

Combining Generative and Discriminative Models for Hybrid Inference

- Computer Science, Mathematics
- NeurIPS
- 2019

This work proposes a hybrid model that combines graphical inference with a learned inverse model, which is structured as in a graph neural network, while the iterative algorithm as a whole is formulated as a recurrent neural network.

Iterative Amortized Inference

- Computer Science, Mathematics
- ICML
- 2018

This work proposes iterative inference models, which learn to perform inference optimization through repeatedly encoding gradients, and demonstrates the inference optimization capabilities of these models and shows that they outperform standard inference models on several benchmark data sets of images and text.

CosmicRIM : Reconstructing Early Universe by Combining Differentiable Simulations with Recurrent Inference Machines

- Physics
- 2021

Reconstructing the Gaussian initial conditions at the beginning of the Universe from the survey data in a forward modeling framework is a major challenge in cosmology. This requires solving a high…

Recurrent Localization Networks applied to the Lippmann-Schwinger Equation

- Physics, Computer Science
- Computational Materials Science
- 2021

A novel machine learning approach is presented for solving equations of the generalized Lippmann-Schwinger (L-S) type, which not only leverages the generalizability and computational efficiency of machine learning approaches but also permits a physics-based interpretation.

Recurrent Inference Machines as inverse problem solvers for MR relaxometry

- Medicine, Engineering
- Medical image analysis
- 2021

Recurrent Inference Machines are used to perform T1 and T2 mapping and it is shown that they can also be used to optimize non-linear problems and estimate relaxometry maps with high precision and accuracy.

R3L: Connecting Deep Reinforcement Learning to Recurrent Neural Networks for Image Denoising via Residual Recovery

- Computer Science, Engineering
- ArXiv
- 2021

It is demonstrated that the proposed R3L has better generalizability and robustness in image denoising when the estimated noise level varies, compared to its counterparts using deterministic training, as well as various state-of-the-art image denoising algorithms.

Solving ill-posed inverse problems using iterative deep neural networks

- Computer Science, Mathematics
- ArXiv
- 2017

The method builds on ideas from classical regularization theory and recent advances in deep learning, performing learning while making use of prior information about the inverse problem encoded in the forward operator, the noise model, and a regularizing functional, resulting in a gradient-like iterative scheme.

Data-driven Reconstruction of Gravitationally Lensed Galaxies Using Recurrent Inference Machines

- Physics
- 2019

We present a machine learning method for the reconstruction of the undistorted images of background sources in strongly lensed systems. This method treats the source as a pixelated image and utilizes…

Equivariant neural networks for inverse problems

- Medicine, Physics
- Inverse problems
- 2021

This work demonstrates that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach, and designs learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.

#### References

Showing 1–10 of 44 references

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

- Computer Science, Mathematics
- ICML
- 2014

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…

Auto-Encoding Variational Bayes

- Mathematics, Computer Science
- ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.

Conditional Random Fields as Recurrent Neural Networks

- Computer Science
- 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015

A new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced, and top results are obtained on the challenging Pascal VOC 2012 segmentation benchmark.

Exploiting Inference for Approximate Parameter Learning in Discriminative Fields: An Empirical Study

- Computer Science
- EMMCVPR
- 2005

An approach is presented for approximate maximum likelihood parameter learning in discriminative field models, based on approximating true expectations with simple piecewise-constant functions constructed using inference techniques.

Learning to learn by gradient descent by gradient descent

- Computer Science
- NIPS
- 2016

This paper shows how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way.
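The idea of casting optimizer design as a learning problem can be sketched without the paper's LSTM optimizer: below, a single step-size parameter is "meta-trained" by descending the loss obtained after unrolling a few inner gradient steps on a toy quadratic. A finite-difference meta-gradient stands in for back-propagation through the unrolled computation; the toy objective and all constants are illustrative choices, not from the paper.

```python
import numpy as np

def inner_loss(theta):
    # Toy optimizee: f(theta) = ||theta||^2 / 2, so grad f(theta) = theta.
    return 0.5 * np.sum(theta ** 2)

def unrolled_loss(step, T=5):
    # Run T inner gradient steps with the given step size and return the
    # final loss; this is the meta-objective as a function of `step`.
    theta = np.ones(3)
    for _ in range(T):
        theta = theta - step * theta
    return inner_loss(theta)

# "Meta-train" the step size by gradient descent on the unrolled loss,
# using a central finite difference in place of backprop through the loop.
step, eps, meta_lr = 0.1, 1e-5, 0.05
for _ in range(100):
    g = (unrolled_loss(step + eps) - unrolled_loss(step - eps)) / (2 * eps)
    step -= meta_lr * g

print(step)  # learned step size; the unrolled loss is far below its start
```

The paper replaces this scalar with an LSTM that emits a full per-coordinate update from the gradient history, trained by back-propagation through the unrolled optimization.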

Convex variational Bayesian inference for large scale generalized linear models

- Mathematics, Computer Science
- ICML '09
- 2009

We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic…

Proximal Deep Structured Models

- Computer Science, Mathematics
- NIPS
- 2016

A powerful deep structured model is presented that is able to learn complex non-linear functions which encode the dependencies between continuous output variables, and it is shown that inference in this model using proximal methods can be efficiently solved as a feed-forward pass of a special type of deep recurrent neural network.

Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2006

The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique.

Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2010

This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.

Cascades of Regression Tree Fields for Image Restoration

- Mathematics, Computer Science
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2016

A cascade model for image restoration is presented that consists of a Gaussian CRF at each stage; each stage is semi-parametric, i.e., it depends on instance-specific parameters of the restoration problem, such as the blur kernel.