# Regularization Theory of the Analytic Deep Prior Approach

```bibtex
@article{Arndt2022RegularizationTO,
  title   = {Regularization Theory of the Analytic Deep Prior Approach},
  author  = {Clemens Arndt},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2205.06493}
}
```

The analytic deep prior (ADP) approach was recently introduced for the theoretical analysis of deep image prior (DIP) methods with special network architectures. In this paper, we prove that ADP is in fact equivalent to classical variational Ivanov methods for solving ill-posed inverse problems. In addition, we propose a new variant which incorporates the strategy of early stopping into the ADP model. For both variants, we show how classical regularization properties (existence, stability…
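Ivanov regularization constrains the solution to lie in a ball defined by the regularization functional rather than adding a penalty term. The following is a minimal numerical sketch of that idea on a toy diagonal operator (an illustrative example, not the ADP construction from the paper), using projected gradient descent for min ||Ax − y||² subject to ||x||² ≤ τ:

```python
import numpy as np

# Toy ill-posed problem with an ill-conditioned diagonal operator (hypothetical example).
A = np.diag([1.0, 0.5, 0.1])
x_true = np.array([1.0, 1.0, 1.0])
y = A @ x_true + np.array([0.0, 0.0, 0.05])  # noise hits the smallest singular value

def ivanov(A, y, tau, eta=1.0, iters=5000):
    """Projected gradient descent for: min ||Ax - y||^2  s.t.  ||x||^2 <= tau."""
    x = np.zeros(A.shape[1])
    r = np.sqrt(tau)
    for _ in range(iters):
        x = x - eta * A.T @ (A @ x - y)  # gradient step on the data-fit term
        n = np.linalg.norm(x)
        if n > r:                        # project back onto the constraint ball
            x = x * (r / n)
    return x

x_naive = np.linalg.solve(A, y)          # unregularized inverse: noise amplified by 1/0.1
x_reg = ivanov(A, y, tau=3.0)            # tau chosen to roughly match ||x_true||^2
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The constrained solution stays inside the ball and recovers the signal far better than the naive inverse, which blows up the noise in the small-singular-value direction.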

## References

Showing 1–10 of 35 references.

A Bayesian Perspective on the Deep Image Prior

- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019

It is shown that the deep image prior is asymptotically equivalent to a stationary Gaussian process prior in the limit as the number of channels in each layer of the network goes to infinity, and the corresponding kernel is derived, informing a Bayesian approach to inference.

The Spectral Bias of the Deep Image Prior

- Computer Science, Int. J. Comput. Vis.
- 2022

A frequency-band correspondence measure is introduced to characterize the spectral bias of the deep image prior, where low-frequency image signals are learned faster and better than high-frequency counterparts.

Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks

- Computer Science, IEEE Signal Processing Magazine
- 2018

Sparse representation theory (which the authors refer to as Sparseland) puts forward an emerging, highly effective, and universal model that describes data as a linear combination of a few atoms taken from a dictionary of such fundamental elements.

Equivariant neural networks for inverse problems

- Mathematics, Inverse Problems
- 2021

This work demonstrates that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach, and designs learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.

A Generative Variational Model for Inverse Problems in Imaging

- Mathematics, SIAM J. Math. Data Sci.
- 2022

This paper is concerned with the development, analysis and numerical realization of a novel variational model for the regularization of inverse problems in imaging. The proposed model is inspired by…

Solving inverse problems using data-driven models

- Mathematics, Acta Numerica
- 2019

This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.

A double regularization approach for inverse problems with noisy data and inexact operator

- Mathematics
- 2013

In standard inverse problems, the task is to solve an operator equation from given noisy data. However, sometimes the operator is also not known exactly. We therefore propose a method that allows…

Regularization and complexity control in feed-forward networks

- Mathematics
- 1995

Four approaches to complexity control (architecture selection, regularization, early stopping, and training with noise) are discussed, and it is argued that, for most practical applications, the technique of regularization should be the method of choice.

Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators

- Computer Science, ICLR
- 2020

A step towards demystifying the regularizing behaviour of untrained convolutional generators is taken by attributing it to particular architectural choices of convolutional networks, namely convolutions with fixed interpolating filters, and it is proved that early-stopped gradient descent denoises and regularizes.
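The regularizing effect of early stopping can be illustrated on a toy linear problem (a simplified stand-in for the convolutional setting analyzed in the paper): under plain gradient descent, the error to the clean signal dips at an intermediate iteration and rises again as the iterates begin to fit the amplified noise.

```python
import numpy as np

# Toy ill-posed linear problem (hypothetical, not the paper's convolutional setup).
s = np.array([1.0, 0.3, 0.05])               # decaying singular values
A = np.diag(s)
x_true = np.ones(3)
y = A @ x_true + np.array([0.0, 0.0, 0.02])  # noise in the small singular direction

x = np.zeros(3)
eta = 1.0
errors = []
for k in range(3000):
    x = x - eta * A.T @ (A @ x - y)          # plain (Landweber) gradient descent
    errors.append(np.linalg.norm(x - x_true))

# Signal components (large singular values) are fitted first; the noisy,
# noise-amplified component (small singular value) is fitted last.
best_k = int(np.argmin(errors))
print(best_k, errors[best_k], errors[-1])
```

Stopping at `best_k` yields a far smaller reconstruction error than running to convergence, which is exactly the regularization-by-early-stopping effect the reference formalizes.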

A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging

- Mathematics, Computer Science, Journal of Mathematical Imaging and Vision
- 2010

A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure is presented; it achieves O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and it shows linear convergence, i.e. O(ω^N) for some ω ∈ (0, 1), on smooth problems.
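As an illustration of the primal-dual scheme, the sketch below applies it to 1D total-variation denoising, min_x ½||x − b||² + λ||Dx||₁ with D a forward-difference operator. This is a toy instance with illustrative step sizes satisfying the condition τσ||D||² ≤ 1 (here ||D||² ≤ 4), not tuned values from the paper.

```python
import numpy as np

def tv_denoise_pdhg(b, lam=0.5, iters=500, tau=0.25, sigma=0.25):
    """Primal-dual (Chambolle-Pock style) iteration for 1D TV denoising."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n forward-difference operator
    x = b.copy()
    x_bar = x.copy()
    yv = np.zeros(n - 1)                  # dual variable
    for _ in range(iters):
        # dual ascent step + projection onto the l_inf ball of radius lam
        yv = np.clip(yv + sigma * D @ x_bar, -lam, lam)
        # primal descent step + prox of the quadratic data term 0.5*||x - b||^2
        x_new = (x - tau * D.T @ yv + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x           # extrapolation (over-relaxation) step
        x = x_new
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])  # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(40)
denoised = tv_denoise_pdhg(noisy, lam=0.5)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

The dual update handles the non-smooth TV term through its projection, while the primal update uses the closed-form prox of the quadratic data term; on the piecewise-constant test signal the iteration suppresses the noise while preserving the jump.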