Corpus ID: 235254507

Inversion of Integral Models: a Neural Network Approach

Émilie Chouzenoux, Cecile Della Valle, Jean-Christophe Pesquet
We introduce a neural network architecture to solve inverse problems linked to a one-dimensional integral operator. This architecture is built by unfolding a forward-backward algorithm derived from the minimization of an objective function consisting of the sum of a data-fidelity term and a Tikhonov-type regularization term. The robustness of this inversion method with respect to a perturbation of the input is theoretically analyzed. Ensuring robustness is consistent with inverse…
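The unfolded forward-backward scheme described in the abstract can be sketched as follows. This is a minimal, learning-free illustration assuming a shared step size `gamma` and regularization weight `lam` (in the paper, each layer of the unrolled network would carry its own learned parameters):

```python
import numpy as np

def unrolled_forward_backward(H, y, n_layers=500, lam=0.1, gamma=None):
    """Fixed-weight sketch of unrolled forward-backward iterations for
    min_x 0.5 * ||H x - y||^2 + (lam / 2) * ||x||^2  (Tikhonov-type penalty).
    """
    if gamma is None:
        # any step size in (0, 2 / ||H^T H||) yields a convergent scheme
        gamma = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(n_layers):
        grad = H.T @ (H @ x - y)                      # gradient of data-fidelity term
        x = (x - gamma * grad) / (1.0 + gamma * lam)  # prox of the Tikhonov term
    return x
```

With enough layers, the iterates approach the regularized solution of the normal equations, (HᵀH + λI)⁻¹Hᵀy; each layer of the actual network performs one such gradient-plus-proximal step with learned parameters.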

Unrolled Variational Bayesian Algorithm for Image Blind Deconvolution

A variational Bayesian algorithm (VBA) for image blind deconvolution is introduced, incorporating smoothness priors on the unknown blur/image and possible affine constraints on the blur kernel.

Solving Inverse Problems With Deep Neural Networks - Robustness Included?

An extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging.

MoDL: Model-Based Deep Learning Architecture for Inverse Problems

This work introduces a model-based image reconstruction framework with a convolutional neural network (CNN)-based regularization prior, and proposes to enforce data consistency using numerical optimization blocks, such as a conjugate gradient algorithm, within the network.
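The data-consistency block described in this summary solves a small least-squares subproblem with conjugate gradients. A minimal sketch, assuming the MoDL-style subproblem (HᵀH + λI)x = Hᵀy + λz, where z is the CNN denoiser's output (the function names here are illustrative, not the paper's API):

```python
import numpy as np

def conjugate_gradient(apply_A, b, n_iter=50, tol=1e-12):
    """Plain conjugate gradients for A x = b, with A symmetric positive
    definite, given only the matrix-vector product `apply_A`."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def data_consistency(H, y, z, lam):
    """Solve (H^T H + lam I) x = H^T y + lam z, the data-consistency
    subproblem alternated with the CNN prior in a MoDL-style network."""
    b = H.T @ y + lam * z
    return conjugate_gradient(lambda v: H.T @ (H @ v) + lam * v, b)
```

Running CG matrix-free (via `apply_A`) is the design point: the forward operator H never needs to be materialized, which matters for large imaging operators.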

Solving ill-posed inverse problems using iterative deep neural networks

The method builds on ideas from classical regularization theory and recent advances in deep learning, making use of prior information about the inverse problem encoded in the forward operator, the noise model, and a regularizing functional; this results in a gradient-like iterative scheme.

Deep unfolding of a proximal interior point method for image restoration

iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm, is developed; it compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.

Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems

This paper studies the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network, and obtains state-of-the-art reconstruction results.
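The plug-and-play idea in this summary — swapping the proximal operator for a denoiser inside a convex optimization loop — can be sketched in a few lines. This is a generic illustration, not the paper's implementation; `denoise` stands in for the trained denoising network:

```python
import numpy as np

def pnp_forward_backward(H, y, denoise, n_iter=200, gamma=None):
    """Plug-and-play forward-backward iterations: the proximal operator of
    the regularizer is replaced by an arbitrary denoiser `denoise`."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(H, 2) ** 2  # step size for gradient step
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fidelity term, then "denoise"
        x = denoise(x - gamma * (H.T @ (H @ x - y)))
    return x
```

With the identity denoiser this reduces to the Landweber iteration; with soft-thresholding it is exactly ISTA, which is what makes the denoiser slot a natural place to insert a learned network.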

Neumann Networks for Linear Inverse Problems in Imaging

An end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, which is called a Neumann network and outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets.
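The Neumann-series motivation behind this architecture can be illustrated without any learning: truncating the series x = Σₖ (I − ηHᵀH)ᵏ (ηHᵀy) approximates the least-squares solution when HᵀH is invertible. A Neumann network replaces part of each term with a learned mapping; the fixed version below is only a sketch of the underlying expansion:

```python
import numpy as np

def neumann_series_sketch(H, y, n_terms=100, eta=None):
    """Truncated Neumann series approximation of the least-squares
    solution (H^T H)^{-1} H^T y, assuming H^T H is invertible."""
    if eta is None:
        eta = 1.0 / np.linalg.norm(H, 2) ** 2  # keeps the series convergent
    term = eta * (H.T @ y)  # k = 0 term of the series
    x = term.copy()
    for _ in range(n_terms - 1):
        term = term - eta * (H.T @ (H @ term))  # apply (I - eta H^T H)
        x = x + term
    return x
```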

NETT: solving inverse problems with deep neural networks

A complete convergence analysis is established for the proposed NETT (network Tikhonov) approach to inverse problems, which considers nearly data-consistent solutions having small value of a regularizer defined by a trained neural network.

Learning Maximally Monotone Operators for Image Recovery

An operator regularization is performed in which a maximally monotone operator (MMO) is learned in a supervised manner, and a universal approximation theorem is provided, proving that nonexpansive NNs are suitable models for the resolvents of a wide class of MMOs.

Achieving robustness in classification using optimal transport with hinge regularization

A new framework for binary classification, based on optimal transport, is proposed; it integrates a Lipschitz constraint as a theoretical requirement and provides the expected guarantees in terms of robustness without any significant accuracy drop.

Deep Convolutional Neural Network for Inverse Problems in Imaging

The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512×512 image on the GPU.