Corpus ID: 239768507

Learning convex regularizers satisfying the variational source condition for inverse problems

@article{Mukherjee2021LearningCR,
  title={Learning convex regularizers satisfying the variational source condition for inverse problems},
  author={Subhadip Mukherjee and Carola-Bibiane Sch{\"o}nlieb and Martin Burger},
  journal={arXiv preprint arXiv:2110.12520},
  year={2021}
}
Variational regularization has remained one of the most successful approaches for reconstruction in imaging inverse problems for several decades. With the emergence and astonishing success of deep learning in recent years, a considerable amount of research has gone into data-driven modeling of the regularizer in the variational setting. Our work extends a recently proposed method, referred to as adversarial convex regularization (ACR), that seeks to learn data-driven convex regularizers via… 
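As a minimal sketch of the idea described in the abstract (not the authors' implementation; all sizes, names, and parameters are illustrative): reconstruction is gradient descent on a variational objective ½‖Ax − y‖² + λR(x), where the learned regularizer R is kept convex by construction, here a non-negative combination of softplus functions of affine maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operator A and noisy measurements y.
n, m = 8, 6
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# A tiny convex regularizer R(x) = c^T softplus(W x + b) with c >= 0.
# softplus of an affine map is convex in x, and a non-negative
# combination of convex functions is convex, so R is convex.
h = 4
W = rng.standard_normal((h, n))
b = rng.standard_normal(h)
c = np.abs(rng.standard_normal(h))  # non-negativity keeps R convex

def softplus(z):
    return np.logaddexp(0.0, z)

def R(x):
    return c @ softplus(W @ x + b)

def grad_R(x):
    # d/dx c^T softplus(Wx + b) = W^T (c * sigmoid(Wx + b))
    s = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return W.T @ (c * s)

lam, step = 0.1, 0.005

def objective(x):
    r = A @ x - y
    return 0.5 * r @ r + lam * R(x)

x = np.zeros(n)
losses = [objective(x)]
for _ in range(500):
    x -= step * (A.T @ (A @ x - y) + lam * grad_R(x))
    losses.append(objective(x))
```

Because the data term and R are both convex, the whole objective is convex and plain gradient descent with a small enough step converges to a global minimizer.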


References

Showing 1–10 of 13 references
Adversarial Regularizers in Inverse Problems
This work proposes a new framework for applying data-driven approaches to inverse problems, using a neural network as a regularization functional, that can be applied even if only unsupervised training data is available.
Total Deep Variation for Linear Inverse Problems
This paper proposes a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning and casts the learning problem as a discrete sampled optimal control problem, for which the adjoint state equations and an optimality condition are derived.
Improved Training of Wasserstein GANs
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than the standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
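A sketch of the gradient-penalty idea (illustrative, not the paper's code): the penalty λ(‖∇f(x̂)‖ − 1)² is evaluated at points x̂ interpolated between real and generated samples. For a linear critic f(x) = w·x the input gradient is simply w everywhere, so the penalty can be computed in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear critic f(x) = w @ x, so grad_x f(x) = w for every x.
w = np.array([3.0, 4.0])  # ||w|| = 5, far from the 1-Lipschitz target

real = rng.standard_normal((16, 2))
fake = rng.standard_normal((16, 2))

# Sample points on lines between paired real and fake samples, as in WGAN-GP.
eps = rng.uniform(size=(16, 1))
x_hat = eps * real + (1 - eps) * fake

# For the linear critic, the input gradient at every x_hat equals w.
grad_norms = np.full(len(x_hat), np.linalg.norm(w))

lam = 10.0
gradient_penalty = lam * np.mean((grad_norms - 1.0) ** 2)
print(gradient_penalty)  # lam * (5 - 1)^2 = 160.0
```

In a real WGAN-GP training loop the critic is a deep network and the input gradients at the interpolated points are obtained by automatic differentiation rather than in closed form.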
Convergence rates of convex variational regularization
The aim of this paper is to provide quantitative estimates for the minimizers of non-quadratic regularization problems in terms of the regularization parameter and the noise level, respectively. As usual…
Deep Convolutional Neural Network for Inverse Problems in Imaging
The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512×512 image on the GPU.
NETT: Solving Inverse Problems with Deep Neural Networks
A complete convergence analysis is established for the proposed NETT (Network Tikhonov) approach to inverse problems, which considers data-consistent solutions having a small value of a regularizer defined by a trained neural network; a possible strategy for training the regularizer is also proposed.
Modern regularization methods for inverse problems
The aim of this paper is to provide a reasonably comprehensive overview of the shift towards modern nonlinear regularization methods, including their analysis, applications, and issues for future research.
Input Convex Neural Networks
This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of the input.
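A minimal two-layer sketch of the constraint pattern (hypothetical sizes and names, not the paper's architecture): hidden-to-output weights acting on the nonlinear activations are kept non-negative, the activation is convex and non-decreasing, and the skip connection from the input stays unconstrained, so the scalar output is convex in the input. The midpoint inequality is checked numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

n, h = 3, 5
W0 = rng.standard_normal((h, n))          # input weights: unconstrained
Wz = np.abs(rng.standard_normal((1, h)))  # weights on activations: non-negative
Wx = rng.standard_normal((1, n))          # skip connection: unconstrained
b0, b1 = rng.standard_normal(h), rng.standard_normal(1)

def icnn(x):
    # relu is convex and non-decreasing; with Wz >= 0 the composition,
    # plus an affine term in x, stays convex in x.
    z = np.maximum(W0 @ x + b0, 0.0)
    return float(Wz @ z + Wx @ x + b1)

# Numerical check of midpoint convexity: f((u+v)/2) <= (f(u)+f(v))/2.
ok = True
for _ in range(1000):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    if icnn((u + v) / 2) > 0.5 * (icnn(u) + icnn(v)) + 1e-9:
        ok = False
print(ok)  # True
```

Deeper input convex networks stack the same pattern: each layer combines a non-negatively weighted convex input with a fresh affine map of x.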
Solving inverse problems using data-driven models
This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.
Regularization Methods in Banach Spaces
This work investigates regularization methods aimed at finding stable approximate solutions for linear and nonlinear operator equations in Banach spaces using general Lp-norms or the BV-norm.