Regularization Theory of the Analytic Deep Prior Approach

Clemens Arndt
The analytic deep prior (ADP) approach was recently introduced for the theoretical analysis of deep image prior (DIP) methods with special network architectures. In this paper, we prove that ADP is in fact equivalent to classical variational Ivanov methods for solving ill-posed inverse problems. In addition, we propose a new variant which incorporates the strategy of early stopping into the ADP model. For both variants, we show how classical regularization properties (existence, stability…

A Bayesian Perspective on the Deep Image Prior
It is shown that the deep image prior is asymptotically equivalent to a stationary Gaussian process prior in the limit as the number of channels in each layer of the network goes to infinity; the corresponding kernel is derived, which informs a Bayesian approach to inference.
The Spectral Bias of the Deep Image Prior
A frequency-band correspondence measure is introduced to characterize the spectral bias of the deep image prior, where low-frequency image signals are learned faster and better than their high-frequency counterparts.
Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks
Sparse representation theory (referred to by the authors as Sparseland) puts forward an emerging, highly effective, and universal model that describes data as a linear combination of a few atoms taken from a dictionary of such fundamental elements.
Equivariant neural networks for inverse problems
This work demonstrates that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems motivated by the variational regularisation approach, and designs learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks.
A Generative Variational Model for Inverse Problems in Imaging
This paper is concerned with the development, analysis and numerical realization of a novel variational model for the regularization of inverse problems in imaging. The proposed model is inspired by…
Solving inverse problems using data-driven models
This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.
A double regularization approach for inverse problems with noisy data and inexact operator
In standard inverse problems, the task is to solve an operator equation from given noisy data. However, sometimes the operator is also not known exactly. A method is therefore proposed that allows…
Regularization and complexity control in feed-forward networks
Four techniques for controlling complexity in feed-forward networks are compared, based respectively on architecture selection, regularization, early stopping, and training with noise; it is argued that, for most practical applications, regularization should be the method of choice.
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
A step towards demystifying the denoising behavior of untrained convolutional generators is taken by attributing this effect to particular architectural choices, namely convolutions with fixed interpolating filters, and it is proved that early-stopped gradient descent denoises/regularizes.
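The early-stopping effect described above can be illustrated with a minimal numerical sketch: run gradient descent on an overparameterized linear generator whose filter damps high frequencies (a crude stand-in for the fixed interpolating filters discussed in the paper) and compare an early iterate with a fully converged one. The basis construction, filter weights, and iteration counts below are illustrative choices, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
i = np.arange(n)

# Orthonormal cosine basis, columns ordered from low to high frequency.
raw = np.cos(np.pi * np.outer(i + 0.5, np.arange(n)) / n)
B, _ = np.linalg.qr(raw)

# Linear "generator" whose filter damps high frequencies: column k is
# scaled by 1/(1+k), mimicking a fixed low-pass/interpolating filter.
F = B * (1.0 / (1.0 + np.arange(n)))

clean = 2.0 * np.cos(np.pi * (i + 0.5) / n)   # smooth ground truth
y = clean + 0.5 * rng.standard_normal(n)      # noisy observation

def reconstruct(num_iters, eta=1.0):
    """Run gradient descent on 0.5*||F c - y||^2 starting from c = 0."""
    c = np.zeros(n)
    for _ in range(num_iters):
        c -= eta * (F.T @ (F @ c - y))
    return F @ c

# Low-frequency components are fitted quickly; the noise, spread across
# all frequencies, is only picked up by the heavily damped modes much later.
err_early = np.linalg.norm(reconstruct(30) - clean)
err_late = np.linalg.norm(reconstruct(5000) - clean)
print(err_early, err_late)
```

Stopping after a few dozen iterations leaves most of the noise unfitted, so the early reconstruction is closer to the clean signal than the fully converged one, which is the regularizing behavior the entry describes.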
A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging
A. Chambolle, T. Pock. Journal of Mathematical Imaging and Vision, 2010.
A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure achieves O(1/N²) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems.
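As a sketch of the primal-dual iteration described above, the following applies it to a toy quadratic problem min_x 0.5‖Kx − b‖² + (λ/2)‖x‖², written in saddle-point form min_x max_y ⟨Kx, y⟩ − F*(y) + G(x), where both proximal maps have closed forms and the result can be checked against the closed-form ridge solution. Problem sizes, step sizes, and iteration counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
K = rng.standard_normal((m, n))   # forward operator
b = rng.standard_normal(m)        # data
lam = 1.0                         # strength of the quadratic penalty G

# Saddle-point form: min_x max_y <Kx, y> - F*(y) + G(x),
# with F(z) = 0.5*||z - b||^2 and G(x) = 0.5*lam*||x||^2.
L = np.linalg.norm(K, 2)          # operator norm of K
tau = sigma = 0.9 / L             # step sizes satisfying tau*sigma*L^2 < 1

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)

for _ in range(10000):
    # Dual step: proximal map of sigma*F*, F*(y) = 0.5*||y||^2 + <b, y>.
    y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
    # Primal step: proximal map of tau*G.
    x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)
    # Over-relaxation with theta = 1.
    x_bar = 2.0 * x_new - x
    x = x_new

# The toy problem has a closed-form (ridge) solution to compare against.
x_star = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)
print(np.max(np.abs(x - x_star)))
```

Since both the primal and dual objectives here are strongly convex, the iterates converge linearly, consistent with the O(ω^N) regime quoted in the summary.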