# Inversion of Integral Models: a Neural Network Approach

@inproceedings{Chouzenoux2021InversionOI, title={Inversion of Integral Models: a Neural Network Approach}, author={{\'E}milie Chouzenoux and Cecile Della Valle and Jean-Christophe Pesquet}, year={2021} }

We introduce a neural network architecture to solve inverse problems linked to a one-dimensional integral operator. This architecture is built by unfolding a forward-backward algorithm derived from the minimization of an objective function consisting of the sum of a data-fidelity function and a Tikhonov-type regularization function. The robustness of this inversion method with respect to a perturbation of the input is theoretically analyzed. Ensuring robustness is consistent with inverse…
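The unfolded scheme described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's architecture: it unrolls a fixed number of forward-backward iterations for min_x ½‖Hx − y‖² + (λ/2)‖Dx‖², with per-layer step sizes and regularization weights (learnable parameters in the paper; plain numbers here). The function name and the choice of an exact proximal step via a linear solve are assumptions for illustration.

```python
import numpy as np

def unfolded_forward_backward(H, D, y, gammas, lambdas):
    """Unrolled forward-backward sketch for
    min_x 0.5*||H x - y||^2 + (lam/2)*||D x||^2.
    Layer k uses its own step size gammas[k] and weight lambdas[k]."""
    n = H.shape[1]
    x = np.zeros(n)
    I = np.eye(n)
    for gamma, lam in zip(gammas, lambdas):
        # gradient step on the data-fidelity term
        v = x - gamma * H.T @ (H @ x - y)
        # proximal step: exact prox of the quadratic Tikhonov term,
        # prox_{gamma*lam/2 ||D.||^2}(v) = (I + gamma*lam*D^T D)^{-1} v
        x = np.linalg.solve(I + gamma * lam * D.T @ D, v)
    return x
```

With constant parameters and enough layers, the iterates approach the classical Tikhonov solution (HᵀH + λDᵀD)⁻¹Hᵀy; the interest of unrolling is precisely that the per-layer parameters can instead be trained end to end.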

## One Citation

### Unrolled Variational Bayesian Algorithm for Image Blind Deconvolution

- Computer Science, ArXiv
- 2021

A variational Bayesian algorithm (VBA) for image blind deconvolution that incorporates smoothness priors on the unknown blur/image and possible affine constraints on the blur kernel is introduced.

## References

Showing 1–10 of 66 references

### Solving Inverse Problems With Deep Neural Networks - Robustness Included?

- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022

An extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging.

### MoDL: Model-Based Deep Learning Architecture for Inverse Problems

- Computer Science, IEEE Transactions on Medical Imaging
- 2019

This work introduces a model-based image reconstruction framework with a convolutional neural network (CNN)-based regularization prior, and proposes to enforce data consistency by using numerical optimization blocks, such as a conjugate gradient algorithm, within the network.
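The data-consistency block described in this blurb can be illustrated with a small sketch. This is not MoDL's implementation, only the generic idea under simplifying assumptions (identity-weighted coupling, a placeholder `z` in place of the CNN-denoised estimate): solve (HᵀH + λI)x = Hᵀy + λz by conjugate gradient.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Plain CG for a symmetric positive-definite operator A (a callable)."""
    x = np.zeros_like(b)
    r = b - A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def data_consistency_step(H, y, z, lam):
    """MoDL-style data-consistency sketch: solves
    (H^T H + lam I) x = H^T y + lam z by CG, where z stands in
    for the output of the learned CNN denoiser."""
    A = lambda v: H.T @ (H @ v) + lam * v
    b = H.T @ y + lam * z
    return conjugate_gradient(A, b)
```

Because the CG loop contains only matrix-vector products, it can be backpropagated through, which is what lets such blocks sit inside a trained network.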

### Solving ill-posed inverse problems using iterative deep neural networks

- Mathematics, ArXiv
- 2017

The method builds on ideas from classical regularization theory and recent advances in deep learning, performing learning while making use of prior information about the inverse problem encoded in the forward operator, the noise model, and a regularizing functional, resulting in a gradient-like iterative scheme.

### Deep unfolding of a proximal interior point method for image restoration

- Computer Science, Inverse Problems
- 2020

iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm, is developed; it compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.

### Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems

- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

This paper studies the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network, and obtains state-of-the-art reconstruction results.
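The idea summarized in this blurb, replacing the proximal operator by a denoiser, has a very compact generic form. The sketch below is a plug-and-play forward-backward loop under my own simplifying assumptions, not the paper's method: `denoise` is any callable (a trained denoising network in the paper).

```python
import numpy as np

def plug_and_play_fb(H, y, denoise, gamma, n_iter):
    """Plug-and-play forward-backward sketch: a gradient step on the
    data-fidelity term 0.5*||H x - y||^2, followed by a denoiser
    standing in for the proximal operator of the regularizer."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # gradient step, then denoising step in place of the prox
        x = denoise(x - gamma * H.T @ (H @ x - y))
    return x
```

With the identity as the "denoiser", the loop reduces to plain gradient descent on the least-squares objective, which makes the role of the denoiser as an implicit regularizer easy to see.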

### Neumann Networks for Linear Inverse Problems in Imaging

- Mathematics, IEEE Transactions on Computational Imaging
- 2020

An end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, which is called a Neumann network and outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets.
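The Neumann-series backbone behind this approach can be sketched in a few lines. This is only the linear skeleton, under my assumptions and naming, of what a Neumann network augments with a learned correction term: the truncated series x ≈ Σⱼ (I − ηHᵀH)ʲ (ηHᵀy) approximating the least-squares solution.

```python
import numpy as np

def truncated_neumann(H, y, eta, n_terms):
    """Truncated Neumann series approximation of the least-squares
    solution (H^T H)^{-1} H^T y, valid when the spectral radius of
    I - eta*H^T H is below 1."""
    b = eta * (H.T @ y)          # first term of the series
    term = b.copy()
    x = b.copy()
    for _ in range(n_terms - 1):
        # multiply the previous term by (I - eta*H^T H)
        term = term - eta * (H.T @ (H @ term))
        x = x + term
    return x
```

In a Neumann network, each series term additionally passes through a trained network, so the recursion above becomes a sequence of learnable blocks rather than a fixed linear iteration.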

### NETT: solving inverse problems with deep neural networks

- Mathematics, Computer Science, Inverse Problems
- 2020

A complete convergence analysis is established for the proposed NETT (network Tikhonov) approach to inverse problems, which considers nearly data-consistent solutions with a small value of a regularizer defined by a trained neural network.

### Learning Maximally Monotone Operators for Image Recovery

- Mathematics, Computer Science, SIAM J. Imaging Sci.
- 2021

An operator regularization is performed in which a maximally monotone operator (MMO) is learned in a supervised manner, and a universal approximation theorem is provided, proving that nonexpansive NNs are suitable models for the resolvents of a wide class of MMOs.

### Achieving robustness in classification using optimal transport with hinge regularization

- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021

A new framework for binary classification, based on optimal transport, is proposed; it integrates a Lipschitz constraint as a theoretical requirement and provides the expected guarantees in terms of robustness without any significant accuracy drop.

### Deep Convolutional Neural Network for Inverse Problems in Imaging

- Mathematics, IEEE Transactions on Image Processing
- 2017

The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512×512 image on the GPU.