Generalization error bounds for DECONET: a deep unfolding network for analysis Compressive Sensing

@article{Kouni2022GeneralizationEB,
  title={Generalization error bounds for DECONET: a deep unfolding network for analysis Compressive Sensing},
  author={Vasiliki (Vicky) Kouni},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07050}
}
In this paper, we propose a new deep unfolding network – based on a state-of-the-art optimization algorithm – for analysis Compressed Sensing. The proposed network, called Decoding Network (DECONET), implements a decoder that reconstructs vectors from their incomplete, noisy measurements. Moreover, DECONET jointly learns a redundant analysis operator for sparsification, which is shared across its layers. We study the generalization ability of DECONET. Towards that end, we first estimate…
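The abstract describes an unrolled decoder with a shared, jointly learned analysis operator. The paper's actual architecture is not shown here; the following numpy sketch only illustrates the general shape of such a decoder, assuming a proximal-gradient-style update — the function names, step sizes, and update rule are illustrative assumptions, not DECONET's learned layers:

```python
import numpy as np

def soft_threshold(z, tau):
    # Elementwise soft-thresholding, the proximal map of tau * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unfolded_decoder(y, A, Phi, n_layers=10, step=0.1, tau=0.05):
    """Illustrative unfolded decoder for analysis Compressed Sensing.

    y   : noisy measurements, y = A x + e
    A   : measurement matrix (m x n, m < n)
    Phi : redundant analysis operator (p x n, p > n), shared across all
          layers; in DECONET this operator is *learned* jointly with the
          decoder, whereas here it is simply a fixed input.
    """
    x = A.T @ y  # simple linear initialization
    for _ in range(n_layers):
        # Gradient step on the data-fidelity term 0.5 * ||A x - y||^2 ...
        x = x - step * (A.T @ (A @ x - y))
        # ... followed by a crude analysis-sparsity correction that pushes
        # x toward vectors whose analysis coefficients Phi x are sparse.
        x = x - step * (Phi.T @ (Phi @ x - soft_threshold(Phi @ x, tau)))
    return x
```

In a trained unfolding network, `step`, `tau`, and `Phi` would be parameters optimized end-to-end rather than hand-set constants.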


References

Showing 1-10 of 71 references
ADMM-DAD net: a deep unfolding network for analysis compressed sensing
TLDR
A new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing that jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
Deep Unfolding With Weighted ℓ₂ Minimization for Compressive Sensing
TLDR
A new recovery guarantee is derived for the unified CS reconstruction model, weighted ℓ₁ minimization (WL1M), which indicates that universal priors could hardly lead to the optimal selection of the weights.
Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks
TLDR
This work introduces a general class of neural networks suitable for sparse reconstruction from few linear measurements, and derives generalization bounds by analyzing the Rademacher complexity of hypothesis classes consisting of such deep networks, that also take into account the thresholding parameters.
A Robust Deep Unfolded Network for Sparse Signal Recovery from Noisy Binary Measurements
TLDR
The proposed DeepFPC-ℓ₂ network, designed by unfolding the iterations of the fixed-point continuation (FPC) algorithm with one-sided ℓ₂-norm, shows higher signal reconstruction accuracy and convergence speed than the traditional FPC-ℓ₂ algorithm.
Theoretical Linear Convergence of Deep Unfolding Network for Block-Sparse Signal Recovery
TLDR
The linear convergence rate of the proposed block-sparse reconstruction network, Ada-BlockLISTA, is proved, which theoretically guarantees exact recovery for a potentially higher sparsity level based on the underlying block structure.
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
TLDR
This paper proposes a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ₁-norm CS reconstruction model, and develops an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms.
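The base iteration that ISTA-Net unfolds is classical ISTA; a minimal numpy sketch is below. ISTA-Net itself replaces the fixed transform and shrinkage with learned nonlinear modules, which this sketch does not attempt to reproduce:

```python
import numpy as np

def ista(y, A, lam=0.05, step=None, n_iters=200):
    """Classical ISTA for  min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    Each unfolded ISTA-Net phase corresponds to one iteration of this loop,
    with the soft-thresholding applied in a learned transform domain instead
    of directly on x.
    """
    if step is None:
        # Step size 1/L with L = ||A||_2^2 (largest eigenvalue of A^T A)
        # guarantees monotone decrease of the objective.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - step * (A.T @ (A @ x - y))                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # shrinkage
    return x
```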
AMP-Net: Denoising-Based Deep Unfolding for Compressive Image Sensing
TLDR
The proposed AMP-Net has better reconstruction accuracy than other state-of-the-art methods with high reconstruction speed and a small number of network parameters and is established by unfolding the iterative denoising process of the well-known approximate message passing algorithm.
Compressive Sensing and Neural Networks from a Statistical Learning Perspective
TLDR
This chapter discusses and presents a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements, which is based on bounding the Rademacher complexity of hypothesis classes consisting of such deep networks via Dudley's integral.
AMP-Inspired Deep Networks for Sparse Linear Inverse Problems
TLDR
This paper proposes two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction.
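The Onsager correction mentioned in this summary is the memory term in the AMP residual update that keeps the per-iteration errors approximately Gaussian. A minimal numpy sketch of plain AMP with a soft-thresholding denoiser follows, assuming i.i.d. Gaussian measurements; the threshold schedule and parameter names are illustrative and this is not the paper's LAMP/LVAMP networks:

```python
import numpy as np

def amp(y, A, n_iters=20, theta=1.5):
    """Illustrative AMP iteration for y = A x + e with i.i.d. Gaussian A.

    The `b * z` term in the residual update is the Onsager correction: it
    decouples errors across iterations so that the pseudo-data r behaves
    like the true signal plus white Gaussian noise.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)    # noise-level estimate
        r = x + A.T @ z                           # pseudo-data
        # Soft-thresholding denoiser with threshold theta * sigma.
        x_new = np.sign(r) * np.maximum(np.abs(r) - theta * sigma, 0.0)
        # Onsager correction: b = (fraction of nonzeros) / (m/n) scaled,
        # here computed as (#nonzeros) / m for the soft-threshold denoiser.
        b = np.count_nonzero(x_new) / m
        z = y - A @ x_new + b * z
        x = x_new
    return x
```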