• Corpus ID: 245144739

# Supervised learning of analysis-sparsity priors with automatic differentiation

@inproceedings{Ghanem2021SupervisedLO,
  title  = {Supervised learning of analysis-sparsity priors with automatic differentiation},
  author = {Hashem Ghanem and Joseph Salmon and Nicolas Keriven and Samuel Vaiter},
  year   = {2021}
}
• Published 15 December 2021
• Computer Science
Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error. This defines a hierarchical optimization problem, which can be cast as a bi-level optimization. Yet, this problem is unsolvable, as reconstructions and…
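The bilevel formulation from the abstract can be sketched numerically: the lower level denoises a signal with a smoothed analysis-sparsity prior by unrolling a fixed number of gradient steps, and the upper level adjusts the dictionary to reduce the reconstruction error on a (ground-truth signal, measurement) pair. Everything below — the signal sizes, the smoothing, and the finite-difference gradients standing in for automatic differentiation through the unrolled solver — is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, eps = 8, 12, 0.1, 1e-3  # illustrative sizes and hyperparameters

def lower_level(G, y, n_iter=50, step=0.1):
    """Unrolled gradient descent on 0.5*||x - y||^2 + lam * sum sqrt((Gx)^2 + eps).

    Every iteration is differentiable in G, so automatic differentiation could
    backpropagate through this loop; finite differences stand in for it below.
    """
    x = y.copy()
    for _ in range(n_iter):
        z = G @ x
        grad = (x - y) + lam * (G.T @ (z / np.sqrt(z**2 + eps)))
        x = x - step * grad
    return x

def upper_loss(G, y, x_true):
    """Upper-level objective: reconstruction error of the lower-level solution."""
    return float(np.sum((lower_level(G, y) - x_true) ** 2))

# Toy piecewise-constant signal and a noisy observation of it.
x_true = np.concatenate([np.ones(n // 2), -np.ones(n - n // 2)])
y = x_true + 0.1 * rng.standard_normal(n)
G = 0.1 * rng.standard_normal((p, n))  # dictionary to be learned

def fd_grad(G, h=1e-5):
    """Central finite differences w.r.t. G (a stand-in for autodiff)."""
    g = np.zeros_like(G)
    for i in range(p):
        for j in range(n):
            E = np.zeros_like(G)
            E[i, j] = h
            g[i, j] = (upper_loss(G + E, y, x_true)
                       - upper_loss(G - E, y, x_true)) / (2 * h)
    return g

loss_before = upper_loss(G, y, x_true)
for _ in range(10):          # gradient descent on the dictionary itself
    G -= 1.0 * fd_grad(G)
loss_after = upper_loss(G, y, x_true)
```

In this sketch the smoothing `eps` sidesteps the nonsmoothness of the ℓ1 penalty; the paper's point is precisely that handling the nonsmooth case requires care when differentiating the lower-level solution.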
## 1 Citation

• Mathematics
ArXiv
• 2022
We leverage path differentiability and a recent result on nonsmooth implicit differentiation calculus to give sufficient conditions ensuring that the solution to a monotone inclusion problem will be…

## References

Showing 1–10 of 18 references

• Computer Science
ArXiv
• 2020
The proposed method denoises images using a variational formulation with a parametric, sparsity-promoting regularizer, whose parameters are learned to minimize the mean squared error of reconstructions on a training set of (ground-truth image, measurement) pairs.
• Computer Science
• 2011
This paper introduces a novel approach to learning a dictionary in a sparsity-promoting analysis-type prior, casting the task as a bilevel programming problem and deriving a gradient descent algorithm that reaches a stationary point, which may be a local minimizer.
• Computer Science
IEEE Transactions on Signal Processing
• 2015
This paper presents results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising, and establishes that the alternating algorithms are globally convergent to the set of local minimizers of the nonconvex transform learning problems.
• Computer Science
NIPS
• 2013
A unified approach is proposed that contains, as particular cases, models promoting sparse synthesis-type and analysis-type priors and mixtures thereof, together with a way of constructing feed-forward neural networks capable of approximating the learned models at a fraction of the computational cost of exact solvers.
• Computer Science
2011 19th European Signal Processing Conference
• 2011
This work derives a practical learning algorithm, based on projected subgradients, together with a local optimality condition, and demonstrates the algorithm's ability to robustly recover a ground-truth analysis operator provided the training set is of sufficient size.
• Mathematics
2006 14th European Signal Processing Conference
• 2006
This paper describes two prior classes, analysis-based and synthesis-based, and shows that although the two become equivalent in the complete and undercomplete formulations, they depart in the more interesting overcomplete formulation.
• Computer Science
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
• 2012
A new learning framework is proposed that can use training data corrupted by noise and/or only approximately cosparse, and an alternating optimization algorithm is introduced to learn a suitable analysis operator.
• Computer Science
2006 International Conference on Image Processing
• 2006
This work presents a simple and robust method for finding sparse representations in overcomplete transforms, based on minimization of the ℓ0-norm, and strongly questions the equivalence of minimizing the ℓ0- and ℓ1-norms in real conditions.
• Computer Science
IEEE Journal of Selected Topics in Signal Processing
• 2017
A first-order primal-dual algorithm is proposed to solve the problem of recovering a smooth graph signal from noisy samples taken on a subset of graph nodes that minimizes the total variation of the graph signal while controlling its global or node-wise empirical error.
• Computer Science
SIAM J. Imaging Sci.
• 2021
A general framework of discrete approximations of the total variation for image reconstruction problems is presented that unifies and extends several existing discretization schemes, and algorithms are proposed for learning discretizations of the total variation in order to achieve the best possible reconstruction quality for particular image reconstruction tasks.