Convolutional Proximal Neural Networks and Plug-and-Play Algorithms

@article{Hertrich2021ConvolutionalPN,
  title={Convolutional Proximal Neural Networks and Plug-and-Play Algorithms},
  author={Johannes Hertrich and Sebastian Neumayer and Gabriele Steidl},
  journal={ArXiv},
  year={2021},
  volume={abs/2011.02281}
}

Citations

Gradient Step Denoiser for convergent Plug-and-Play
TLDR
This work proposes a new type of PnP method, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network.
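
A minimal PyTorch sketch of the gradient-step idea, D(x) = x - grad g_theta(x), where g_theta is a learned scalar potential; the toy architecture and all names below are illustrative assumptions, not the authors' model.

  import torch
  import torch.nn as nn

  class Potential(nn.Module):
      """Scalar functional g_theta(x); a toy CNN energy, not the paper's network."""
      def __init__(self, channels=1):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(channels, 16, 3, padding=1), nn.Softplus(),
              nn.Conv2d(16, 16, 3, padding=1), nn.Softplus(),
          )

      def forward(self, x):
          return 0.5 * self.net(x).pow(2).sum(dim=(1, 2, 3))  # one scalar per image

  def gradient_step_denoiser(g, x):
      """D(x) = x - grad_x g(x), computed with autograd."""
      x = x.detach().requires_grad_(True)
      grad, = torch.autograd.grad(g(x).sum(), x, create_graph=g.training)
      return x - grad

  # usage on a noisy batch
  g = Potential()
  denoised = gradient_step_denoiser(g, torch.rand(4, 1, 32, 32))

In the half-quadratic splitting scheme mentioned above, such a denoiser step alternates with a proximal step on the data-fidelity term.
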
Learning Lipschitz-Controlled Activation Functions in Neural Networks for Plug-and-Play Image Reconstruction Methods
TLDR
It is shown that averaged denoising operators built from 1-Lipschitz deep spline networks consistently outperform those built from ReLU networks, and "slope normalization" is introduced to control the Lipschitz constant of these activation functions.
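
One way to read the "slope normalization" mentioned above: parameterize each activation as a learnable linear spline and rescale its segment slopes so that their largest absolute value, and hence the component-wise Lipschitz constant, never exceeds 1. The PyTorch sketch below (uniform knots, linear extrapolation at the boundary) is an illustrative assumption, not the authors' exact construction.

  import torch
  import torch.nn as nn

  class NormalizedSpline(nn.Module):
      """Learnable linear-spline activation on a uniform grid in [-T, T];
      slope normalization keeps it 1-Lipschitz (sketch, not the paper's exact scheme)."""
      def __init__(self, num_knots=11, T=3.0):
          super().__init__()
          self.T, self.h = T, 2 * T / (num_knots - 1)
          self.register_buffer("knots", torch.linspace(-T, T, num_knots))
          self.slopes = nn.Parameter(torch.ones(num_knots - 1))  # one slope per segment
          self.bias = nn.Parameter(torch.zeros(1))                # value at the left knot

      def forward(self, x):
          # slope normalization: cap the largest absolute slope at 1
          s = self.slopes / torch.clamp(self.slopes.abs().max(), min=1.0)
          values = self.bias + torch.cat([torch.zeros(1, device=s.device),
                                          torch.cumsum(s * self.h, 0)])
          xc = x.clamp(-self.T, self.T)
          idx = ((xc + self.T) / self.h).floor().long().clamp(max=s.numel() - 1)
          inside = values[idx] + s[idx] * (xc - self.knots[idx])
          # linear extrapolation outside [-T, T] with the boundary slopes
          return inside + s[-1] * (x - xc).clamp(min=0) + s[0] * (x - xc).clamp(max=0)
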
Approximation of Lipschitz Functions using Deep Spline Neural Networks
TLDR
It is proved that this choice of learnable spline activation functions with at least 3 linear regions is optimal among all component-wise 1-Lipschitz activation functions, in the sense that no other weight-constrained architecture can approximate a larger class of functions.
Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks
TLDR
This work introduces a general class of neural networks suitable for sparse reconstruction from few linear measurements, and derives generalization bounds by analyzing the Rademacher complexity of hypothesis classes consisting of such deep networks, that also take into account the thresholding parameters.
WPPNets and WPPFlows: The Power of Wasserstein Patch Priors for Superresolution
TLDR
This paper proposes to learn two kinds of neural networks in an unsupervised way based on WPP loss functions, and shows how convolutional neural networks (CNNs) can be incorporated.
PatchNR: Learning from Small Data by Patch Normalizing Flow Regularization
TLDR
By investigating the distribution of patches versus that of the whole image class, it is proved that the variational model is indeed a MAP approach, and the model can be generalized to conditional PatchNRs if additional supervised information is available.
Learning Maximally Monotone Operators for Image Recovery
TLDR
An operator regularization is performed, where a maximally monotone operator (MMO) is learned in a supervised manner, and a universal approximation theorem proving that nonexpansive NNs provide suitable models for the resolvent of a wide class of MMOs is provided.
WPPNets: Unsupervised CNN Training with Wasserstein Patch Priors for Image Superresolution
TLDR
WPPNets are introduced, which are CNNs trained by a new unsupervised loss function for image superresolution of materials microstructures; this enables their use in real-world applications where neither a large database of registered data nor the exact forward operator is given.
Inertial stochastic PALM and applications in machine learning
  • J. Hertrich, G. Steidl
  • Computer Science, Mathematics
  • Sampling Theory, Signal Processing, and Data Analysis
  • 2022
TLDR
An inertial variant of a stochastic PALM algorithm with a variance-reduced gradient estimator, called iSPALM, is derived, and linear convergence of the algorithm is proved under certain assumptions.
Generalized Normalizing Flows via Markov Chains
TLDR
This chapter considers stochastic normalizing flows as a pair of Markov chains fulfilling some properties and shows how many state-of-the-art models for data generation fit into this framework.
...

References

Showing 1-10 of 66 references
Energy Dissipation with Plug-and-Play Priors
TLDR
A combination of both approaches is considered by projecting the outputs of a plug-and-play denoising network onto the cone of descent directions to a given energy, showing improvements compared to classical energy minimization methods while still having provable convergence guarantees.
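
A minimal NumPy sketch of that projection idea: if the denoiser's update direction is not a descent direction for the energy E, it is projected onto the halfspace of descent directions at the current iterate. The callables denoiser and grad_E are placeholders, and the paper's exact cone and projection may differ.

  import numpy as np

  def descent_projected_step(x, denoiser, grad_E, eps=1e-12):
      """One PnP step whose update direction is forced to be a descent
      direction of the energy E (sketch; placeholder callables)."""
      d = denoiser(x) - x               # proposed update direction
      g = grad_E(x)                     # energy gradient at the current iterate
      inner = np.vdot(d, g)
      if inner > 0:                     # ascent direction: project onto {v : <v, g> <= 0}
          d = d - (inner / (np.vdot(g, g) + eps)) * g
      return x + d
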
Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems
TLDR
This paper studies the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network, and obtains state-of-the-art reconstruction results.
Scene-Adapted Plug-and-Play Algorithm with Convergence Guarantees
TLDR
This paper plugs a state-of-the-art denoiser, based on a Gaussian mixture model, into the iterations of the alternating direction method of multipliers and proves that the algorithm is guaranteed to converge.
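
For orientation, a generic plug-and-play ADMM loop in NumPy, where the proximal map of the prior is replaced by a denoiser (a GMM-based one in the paper); prox_data and denoiser are placeholder callables, not the paper's implementation.

  import numpy as np

  def pnp_admm(prox_data, denoiser, x0, num_iter=50):
      """Plug-and-play ADMM: the prior's proximal operator is replaced by a denoiser."""
      x, v, u = x0.copy(), x0.copy(), np.zeros_like(x0)
      for _ in range(num_iter):
          x = prox_data(v - u)          # proximal step on the data-fidelity term
          v = denoiser(x + u)           # denoiser in place of the prior's prox
          u = u + x - v                 # scaled dual variable update
      return v
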
Regularisation of neural networks by enforcing Lipschitz continuity
TLDR
The technique is used to formulate training a neural network with a bounded Lipschitz constant as a constrained optimisation problem that can be solved using projected stochastic gradient methods and shows that the performance of the resulting models exceeds that of models trained with other common regularisers.
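
A common way to realize such a constraint with projected stochastic gradient methods is to rescale a weight matrix whenever its spectral norm, estimated by power iteration, exceeds the prescribed bound. The NumPy sketch below shows this projection; it is an assumption about the mechanics, not the paper's exact scheme.

  import numpy as np

  def project_spectral_norm(W, bound=1.0, iters=20, rng=np.random.default_rng(0)):
      """Project W onto {W : ||W||_2 <= bound} by rescaling; the largest
      singular value is estimated with power iteration."""
      u = rng.standard_normal(W.shape[0])
      for _ in range(iters):
          v = W.T @ u
          v /= np.linalg.norm(v) + 1e-12
          u = W @ v
          u /= np.linalg.norm(u) + 1e-12
      sigma = u @ W @ v                 # estimated spectral norm
      return W if sigma <= bound else W * (bound / sigma)

In training, such a projection would be applied layer-wise after each stochastic gradient step.
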
One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models
TLDR
This work proposes a general framework to train a single deep neural network that solves arbitrary linear inverse problems and demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.
Deep Neural Network Structures Solving Variational Inequalities
TLDR
It is shown that the limit of the resulting process solves a variational inequality which, in general, does not derive from a minimization problem.
An Online Plug-and-Play Algorithm for Regularized Image Reconstruction
TLDR
A new online PnP algorithm based on the proximal gradient method (PGM) is introduced; it uses only a subset of the measurements at every iteration, which makes it scalable to very large datasets.
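
A minimal sketch of such an online PnP iteration: each step takes a gradient step on the data-fidelity term of a random minibatch of measurements and then applies the denoiser. grad_batch and denoiser are placeholder callables, and the paper's step-size and convergence analysis are not reproduced here.

  import numpy as np

  def online_pnp_pgm(grad_batch, denoiser, x0, num_meas, batch_size=8,
                     step=0.1, num_iter=100, seed=0):
      """Online PnP-PGM sketch: minibatch data-fidelity gradient step + denoiser."""
      rng = np.random.default_rng(seed)
      x = x0.copy()
      for _ in range(num_iter):
          batch = rng.choice(num_meas, size=batch_size, replace=False)
          x = denoiser(x - step * grad_batch(x, batch))   # gradient step, then denoiser
      return x
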
Building Firmly Nonexpansive Convolutional Neural Networks
TLDR
This paper develops an optimization algorithm that can be incorporated in the training of a network to ensure the nonexpansiveness of its convolutional layers and applies it to train a CNN for an image denoising task and shows its effectiveness through simulations.
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
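
For reference, one Adam update written out in NumPy with the paper's default hyperparameters; variable names are generic.

  import numpy as np

  def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
      """One Adam update: exponential moving averages of the gradient and its
      square, with bias correction."""
      m = beta1 * m + (1 - beta1) * grad            # first moment (mean of gradients)
      v = beta2 * v + (1 - beta2) * grad**2         # second moment (uncentered variance)
      m_hat = m / (1 - beta1**t)                    # bias correction, t = 1, 2, ...
      v_hat = v / (1 - beta2**t)
      theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
      return theta, m, v
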
Variational Networks: An Optimal Control Approach to Early Stopping Variational Methods for Image Restoration
TLDR
A nonlinear spectral analysis of the gradient of the learned regularizer gives enlightening insights into the different regularization properties, and first- and second-order conditions to verify the optimal stopping time are developed.
...