# Generalization error bounds for DECONET: a deep unfolding network for analysis Compressive Sensing

@article{Kouni2022GeneralizationEB,
  title={Generalization error bounds for DECONET: a deep unfolding network for analysis Compressive Sensing},
  author={Vasiliki (Vicky) Kouni},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07050}
}
In this paper, we propose a new deep unfolding network, based on a state-of-the-art optimization algorithm, for analysis Compressed Sensing. The proposed network, called Decoding Network (DECONET), implements a decoder that reconstructs vectors from their incomplete, noisy measurements. Moreover, DECONET jointly learns a redundant analysis operator for sparsification, which is shared across the layers of DECONET. We study the generalization ability of DECONET. Towards that end, we first estimate…
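To make the unfolding idea concrete, here is a minimal, hypothetical NumPy sketch of an ISTA-style unfolded decoder in which a single analysis operator `W` is shared across all layers, as in the abstract. This is an illustration of the general technique only, not DECONET's actual update rule: the paper unfolds a specific optimization algorithm, and the step size, thresholds, and `W` would be learned rather than fixed.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrink each entry toward zero by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_decoder(y, A, W, thetas, step=0.1):
    """ISTA-style unfolded decoder (generic sketch).

    y      : noisy measurements (m,)
    A      : measurement matrix (m, n)
    W      : redundant analysis operator (p, n), p > n, shared across layers
    thetas : per-layer thresholds; one loop iteration = one network layer
    """
    x = A.T @ y  # simple linear initialization from the measurements
    for theta in thetas:
        grad = A.T @ (A @ x - y)          # gradient of the data-fit term
        x = x - step * grad               # gradient descent step
        z = soft_threshold(W @ x, theta)  # sparsify in the analysis domain
        x = x - step * W.T @ (W @ x - z)  # pull W @ x toward the sparse z
    return x
```

In a trained network, `step`, each `theta`, and `W` itself would be the learnable parameters; sharing `W` across layers is what ties the hypothesis class together in the generalization analysis.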

## References

Showing 1-10 of 71 references.
- *ICASSP*, 2022. A new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing that jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
- **Deep Unfolding With Weighted ℓ₂ Minimization for Compressive Sensing.** *IEEE Internet of Things Journal*, 2021. A new recovery guarantee of the unified CS reconstruction model, weighted ℓ₁ minimization (WL1M), is derived, which indicates that universal priors could hardly lead to the optimal selection of the weights.
- **Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks.** *ArXiv*, 2021. This work introduces a general class of neural networks suitable for sparse reconstruction from few linear measurements, and derives generalization bounds by analyzing the Rademacher complexity of hypothesis classes consisting of such deep networks that also take into account the thresholding parameters.
- **A Robust Deep Unfolded Network for Sparse Signal Recovery from Noisy Binary Measurements.** 2020. The proposed DeepFPC-ℓ₂ network, designed by unfolding the iterations of the fixed-point continuation (FPC) algorithm with a one-sided ℓ₂-norm, shows higher signal reconstruction accuracy and convergence speed than the traditional FPC-ℓ₂ algorithm.
- **Theoretical Linear Convergence of Deep Unfolding Network for Block-Sparse Signal Recovery.** 2021. The linear convergence rate of the proposed block-sparse reconstruction network, Ada-BlockLISTA, is proved, which theoretically guarantees exact recovery for a potentially higher sparsity level based on the underlying block structure.
- **ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing.** *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2018. This paper proposes a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ₁-norm CS reconstruction model, and develops an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms.
- **AMP-Net: Denoising-Based Deep Unfolding for Compressive Image Sensing.** *IEEE Transactions on Image Processing*, 2021. The proposed AMP-Net is established by unfolding the iterative denoising process of the well-known approximate message passing algorithm, and achieves better reconstruction accuracy than other state-of-the-art methods, with high reconstruction speed and a small number of network parameters.
- **Compressive Sensing and Neural Networks from a Statistical Learning Perspective.** 2020. This chapter presents a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements, based on bounding the Rademacher complexity of hypothesis classes consisting of such deep networks via Dudley's integral.
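For reference, the Dudley entropy integral mentioned here bounds the Rademacher complexity $\mathcal{R}_n(\mathcal{H})$ of a hypothesis class $\mathcal{H}$ by its covering numbers $\mathcal{N}(\mathcal{H}, \varepsilon, \|\cdot\|)$; one common form (the constants vary between references) is

$$
\mathcal{R}_n(\mathcal{H}) \;\le\; \inf_{\alpha \ge 0}\left( 4\alpha \;+\; \frac{12}{\sqrt{n}} \int_{\alpha}^{\infty} \sqrt{\log \mathcal{N}(\mathcal{H}, \varepsilon, \|\cdot\|)}\;\mathrm{d}\varepsilon \right),
$$

so a covering-number estimate for the class of unfolded networks translates directly into a generalization bound.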
- **AMP-Inspired Deep Networks for Sparse Linear Inverse Problems.** *IEEE Transactions on Signal Processing*, 2017. This paper proposes two novel neural-network architectures that decouple prediction errors across layers in the same way that approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction.
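The Onsager correction mentioned above can be illustrated with a bare-bones AMP iteration for the soft-thresholding denoiser. This is a textbook-style sketch (fixed threshold, no parameter tuning), not the learned architectures proposed in the paper: the correction term reuses the previous residual, scaled by the average derivative of the denoiser, which for soft-thresholding is the fraction of nonzero entries in the current estimate.

```python
import numpy as np

def eta(v, theta):
    # Soft-thresholding denoiser used in standard AMP.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(y, A, theta=0.5, iters=5):
    """Basic AMP iteration with soft-thresholding (illustrative sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        # Onsager correction: previous residual scaled by the empirical
        # derivative of the denoiser, i.e. the sparsity fraction nnz(x)/m.
        onsager = (np.count_nonzero(x) / m) * z
        z = y - A @ x + onsager          # corrected residual
        x = eta(x + A.T @ z, theta)      # denoise the pseudo-data
    return x
```

Dropping the `onsager` term recovers plain iterative soft-thresholding; the correction is precisely what decouples the errors across iterations, and the paper's networks mimic this structure layer by layer.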