Supervised learning of analysis-sparsity priors with automatic differentiation
@inproceedings{Ghanem2021SupervisedLO,
  title  = {Supervised learning of analysis-sparsity priors with automatic differentiation},
  author = {Hashem Ghanem and Joseph Salmon and Nicolas Keriven and Samuel Vaiter},
  year   = {2021}
}
Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known and must be recovered from pairs of ground-truth signals and measurements by minimizing the reconstruction error. This defines a hierarchical optimization problem, which can be cast as a bi-level optimization problem. Yet, this problem is unsolvable, as reconstructions and…
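To make the hierarchy concrete, here is a standard way to write such a bi-level problem; the notation ($\Gamma$ for the analysis dictionary, $A$ for the measurement operator, $\lambda$ for the regularization weight) is an assumption of mine, not taken from the paper:

```latex
% Upper level: fit the analysis dictionary \Gamma to training pairs (x_i, y_i).
% Lower level: each reconstruction \hat{x}_i(\Gamma) solves a regularized
% inverse problem with the analysis-sparsity prior \|\Gamma x\|_1.
\begin{aligned}
\min_{\Gamma}\quad & \sum_{i=1}^{n} \left\| \hat{x}_i(\Gamma) - x_i \right\|_2^2 \\
\text{s.t.}\quad & \hat{x}_i(\Gamma) \in \operatorname*{arg\,min}_{x}\
\tfrac{1}{2}\left\| y_i - A x \right\|_2^2 + \lambda \left\| \Gamma x \right\|_1 .
\end{aligned}
```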
One Citation
Differentiating Nonsmooth Solutions to Parametric Monotone Inclusion Problems
- Mathematics · ArXiv
- 2022
We leverage path differentiability and a recent result on nonsmooth implicit differentiation calculus to give sufficient conditions ensuring that the solution to a monotone inclusion problem will be…
References
Supervised Learning of Sparsity-Promoting Regularizers for Denoising
- Computer Science · ArXiv
- 2020
Proposes a method to denoise images using a variational formulation with a parametric, sparsity-promoting regularizer, where the parameters of the regularizer are learned to minimize the mean squared error of reconstructions on a training set of (ground-truth image, measurement) pairs.
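The general recipe this describes can be sketched with automatic differentiation through an unrolled solver. The following is a minimal sketch, not the method of either paper: it assumes pure denoising (identity measurement operator), replaces the nonsmooth l1 penalty with a smooth surrogate sqrt(z^2 + eps) so plain gradient steps apply, and all names and hyperparameters are illustrative.

```python
# Minimal sketch: unroll T gradient steps on a smoothed analysis-sparsity
# denoising objective, then differentiate the training MSE with respect to
# the dictionary Gamma using automatic differentiation.
import jax
import jax.numpy as jnp

def denoise(Gamma, y, lam=0.1, step=0.05, T=100, eps=1e-3):
    """Approximately solve min_x 0.5*||x - y||^2 + lam * sum smooth_abs(Gamma x)."""
    def grad_obj(x):
        z = Gamma @ x
        # gradient of the smoothed l1 penalty sum(sqrt(z^2 + eps))
        return (x - y) + lam * Gamma.T @ (z / jnp.sqrt(z**2 + eps))
    def body(x, _):
        return x - step * grad_obj(x), None
    x, _ = jax.lax.scan(body, y, None, length=T)
    return x

def training_loss(Gamma, X_true, Y_noisy):
    """Mean squared error of reconstructions over (ground truth, measurement) pairs."""
    X_hat = jax.vmap(lambda y: denoise(Gamma, y))(Y_noisy)
    return jnp.mean((X_hat - X_true) ** 2)

# One gradient step on the dictionary, via autodiff through the unrolled solver.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
X_true = jax.random.normal(k1, (8, 16))            # toy ground-truth signals
Y_noisy = X_true + 0.1 * jax.random.normal(k2, (8, 16))
Gamma = jax.random.normal(k3, (32, 16)) / 4.0      # overcomplete analysis operator
g = jax.grad(training_loss)(Gamma, X_true, Y_noisy)
Gamma = Gamma - 0.1 * g
```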
Learning Analysis Sparsity Priors
- Computer Science
- 2011
This paper introduces a novel approach to learning a dictionary for a sparsity-promoting analysis-type prior by casting dictionary learning as a bilevel programming problem, for which a gradient descent algorithm is derived that reaches a stationary point, possibly a local minimizer.
$\ell_{0}$ Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees
- Computer Science · IEEE Transactions on Signal Processing
- 2015
This paper presents results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising, and establishes that the alternating algorithms are globally convergent to the set of local minimizers of the nonconvex transform learning problems.
Supervised Sparse Analysis and Synthesis Operators
- Computer Science · NIPS
- 2013
Proposes a unified approach that contains as particular cases models promoting sparse synthesis- and analysis-type priors, and mixtures thereof, together with a way of constructing feed-forward neural networks capable of approximating the learned models at a fraction of the computational cost of exact solvers.
Analysis operator learning for overcomplete cosparse representations
- Computer Science · 2011 19th European Signal Processing Conference
- 2011
This work derives a practical learning algorithm based on projected subgradients and demonstrates its ability to robustly recover a ground-truth analysis operator, provided the training set is of sufficient size; a local optimality condition is also derived.
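A minimal sketch of the projected-subgradient idea, under assumptions of mine: the constraint set here (unit-norm rows, which rules out the trivial zero operator) is a simplification chosen for the sketch and need not match the paper's; names and step sizes are illustrative.

```python
# Minimal sketch: projected subgradient descent for
#   min_Omega  sum_i ||Omega x_i||_1   s.t. each row of Omega has unit norm,
# fitting an analysis operator to cosparse training signals.
import jax
import jax.numpy as jnp

def learn_analysis_operator(X, p, steps=200, eta=0.01, key=None):
    """X: (d, n) matrix of training signals; p: number of rows of Omega."""
    key = key if key is not None else jax.random.PRNGKey(0)
    Omega = jax.random.normal(key, (p, X.shape[0]))
    Omega = Omega / jnp.linalg.norm(Omega, axis=1, keepdims=True)
    for _ in range(steps):
        # subgradient of sum_i ||Omega x_i||_1 with respect to Omega
        G = jnp.sign(Omega @ X) @ X.T
        Omega = Omega - eta * G
        # project back onto the set of unit-norm rows
        Omega = Omega / jnp.linalg.norm(Omega, axis=1, keepdims=True)
    return Omega
```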
Analysis versus synthesis in signal priors
- Mathematics · 2006 14th European Signal Processing Conference
- 2006
This paper describes two prior classes, analysis-based and synthesis-based, and shows that although the two become equivalent in the complete and undercomplete formulations, they depart in the more interesting overcomplete formulation.
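The contrast is easy to state with the two estimators below (standard notation, mine rather than the paper's); for a square invertible $\Omega$ with $D = \Omega^{-1}$ the two coincide, while for overcomplete $\Omega$ and $D$ they generally differ:

```latex
% Analysis prior: penalize the signal through its correlations with \Omega.
\hat{x}_{\text{analysis}} = \operatorname*{arg\,min}_{x}\
\tfrac{1}{2}\|y - A x\|_2^2 + \lambda \|\Omega x\|_1
% Synthesis prior: build the signal from a few atoms of D.
\hat{x}_{\text{synthesis}} = D\,\operatorname*{arg\,min}_{\alpha}\
\tfrac{1}{2}\|y - A D \alpha\|_2^2 + \lambda \|\alpha\|_1
```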
Noise aware analysis operator learning for approximately cosparse signals
- Computer Science · 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2012
Introduces a new learning framework that can use training data corrupted by noise and/or only approximately cosparse, together with an alternating optimization algorithm to learn a suitable analysis operator.
L0-Norm-Based Sparse Representation Through Alternate Projections
- Computer Science · 2006 International Conference on Image Processing
- 2006
This work presents a simple and robust method for finding sparse representations in overcomplete transforms, based on minimization of the L0-norm, and strongly questions the equivalence of minimizing the L0 and L1 norms under realistic conditions.
Graph Signal Recovery via Primal-Dual Algorithms for Total Variation Minimization
- Computer Science · IEEE Journal of Selected Topics in Signal Processing
- 2017
A first-order primal-dual algorithm is proposed for recovering a smooth graph signal from noisy samples taken on a subset of graph nodes; the formulation minimizes the total variation of the graph signal while controlling its global or node-wise empirical error.
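A minimal sketch of such a primal-dual scheme, under assumptions of mine (a Chambolle-Pock-style iteration, a node mask m for the sampled subset, and an incidence matrix D for the graph); this illustrates the technique, not the paper's exact algorithm:

```python
# Minimal sketch: primal-dual iterations for graph total variation denoising,
#   min_x 0.5*||m*(x - y)||^2 + lam*||D x||_1,
# where D is the graph incidence matrix and m masks the sampled nodes.
import jax.numpy as jnp

def graph_tv_recover(D, y, m, lam=1.0, iters=500):
    L = jnp.linalg.norm(D, ord=2)            # operator norm of D
    tau = sigma = 0.99 / L                   # step sizes with tau*sigma*L^2 < 1
    x = jnp.zeros_like(y)
    x_bar = x
    u = jnp.zeros(D.shape[0])                # dual variable, one per edge
    for _ in range(iters):
        # dual ascent + projection onto the l-infinity ball of radius lam
        u = jnp.clip(u + sigma * (D @ x_bar), -lam, lam)
        # primal descent + prox of the masked quadratic data term
        v = x - tau * (D.T @ u)
        x_new = (v + tau * m * y) / (1.0 + tau * m)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Toy usage: a path graph on 4 nodes, observed on nodes 0, 1, and 3.
D = jnp.array([[-1., 1., 0., 0.],
               [0., -1., 1., 0.],
               [0., 0., -1., 1.]])
y = jnp.array([1.0, 1.1, 0.0, 3.0])
m = jnp.array([1.0, 1.0, 0.0, 1.0])
x_hat = graph_tv_recover(D, y, m, lam=0.5)
```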
Learning Consistent Discretizations of the Total Variation
- Computer Science · SIAM J. Imaging Sci.
- 2021
Presents a general framework of discrete approximations of the total variation for image reconstruction problems that unifies and extends several existing discretization schemes, and proposes algorithms for learning discretizations of the total variation that achieve the best possible reconstruction quality for particular image reconstruction tasks.