# PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation

```bibtex
@article{Tolooshams2021PUDLEIA,
  title   = {PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation},
  author  = {Bahareh Tolooshams and Demba E. Ba},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.00058}
}
```

The dictionary learning problem, representing data as a sparse combination of a few atoms, has long stood as a popular method for learning representations in statistics and signal processing. The most popular dictionary learning algorithm alternates between sparse-coding and dictionary-update steps, and a rich literature has studied its theoretical convergence. The growing popularity of neurally plausible unfolded sparse coding networks has led to the empirical finding that backpropagation through such…
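The alternating scheme mentioned in the abstract can be sketched in a few lines. The following is an illustrative NumPy sketch, not the paper's implementation: it uses ISTA as the sparse-coding step and a normalized gradient step as the dictionary update, on synthetic data from a hypothetical ground-truth dictionary.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, Y, lam=0.1, n_iter=50):
    # ISTA for min_X 0.5 * ||Y - D X||_F^2 + lam * ||X||_1.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = soft_threshold(X - (D.T @ (D @ X - Y)) / L, lam / L)
    return X

def dict_update(D, Y, X, lr=0.5):
    # One gradient step on the reconstruction loss, atoms renormalized.
    D = D - lr * (D @ X - Y) @ X.T / Y.shape[1]
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

# Synthetic data: 20-dim signals from 30 unit-norm atoms, ~10% sparse codes.
rng = np.random.default_rng(0)
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = rng.standard_normal((30, 200)) * (rng.random((30, 200)) < 0.1)
Y = D_true @ X_true

# Alternate sparse coding and dictionary updates from a random start.
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)
for _ in range(100):
    X = sparse_code(D, Y)
    D = dict_update(D, Y, X)

rel_err = np.linalg.norm(Y - D @ sparse_code(D, Y)) / np.linalg.norm(Y)
```

Unfolded sparse coding networks replace the inner ISTA loop with a fixed number of network layers and update the dictionary by backpropagating through them, rather than by the explicit gradient step above.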

## One Citation

Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning

- Computer Science, Engineering · ArXiv
- 2021

This work introduces Mixture Model Auto-Encoders (MixMate), a novel architecture that clusters data by performing inference on a generative model, derived from the perspective of sparse dictionary learning and mixture models.

## References

Showing 1–10 of 69 references

NOODL: Provable Online Dictionary Learning and Sparse Coding

- Computer Science, Mathematics · ICLR
- 2019

This work develops NOODL, a simple neurally plausible alternating-optimization-based online dictionary learning algorithm that recovers both the dictionary and the coefficients exactly at a geometric rate when initialized appropriately.

Online dictionary learning for sparse coding

- Computer Science · ICML '09
- 2009

A new online optimization algorithm for dictionary learning is proposed, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.

Scalable Convolutional Dictionary Learning with Constrained Recurrent Sparse Auto-Encoders

- Computer Science, Mathematics · 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
- 2018

This work demonstrates the ability of CRsAE to recover the underlying dictionary and characterizes its sensitivity as a function of SNR.

Maximal Sparsity with Deep Networks?

- Computer Science, Mathematics · NIPS
- 2016

This work demonstrates the potential for a trained deep network to recover minimal $\ell_0$-norm representations in regimes where existing methods fail and deploys it on a practical photometric stereo estimation problem.

ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA

- Computer Science · ICLR
- 2019

This work proposes Analytic LISTA (ALISTA), a feed-forward framework in which the weight matrices are obtained from a data-free optimization problem rather than learned end to end, and shows it can be extended to gain robustness to small perturbations in the encoding model.

Understanding Trainable Sparse Coding with Matrix Factorization

- Computer Science, Mathematics · ICLR
- 2017

The analysis reveals that a specific matrix factorization of the Gram kernel of the dictionary attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the $\ell_1$ ball, and proves that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version.

Learning step sizes for unfolded sparse coding

- Computer Science, Mathematics · NeurIPS
- 2019

This paper proposes a network architecture where only the step sizes of ISTA are learned, and demonstrates that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
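The architecture described above can be illustrated with a short, hedged NumPy sketch (names and settings are illustrative, not from the paper): an unfolded network in which each layer carries its own step size, the only parameters that would be learned. With all step sizes fixed to 1/L, the forward pass coincides exactly with plain ISTA, which is the correspondence the paper's result concerns.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def step_lista(D, y, steps, lam=0.1):
    # Unfolded ISTA: layer t applies a gradient step with its own
    # step size steps[t], followed by soft-thresholding.
    x = np.zeros(D.shape[1])
    for a in steps:
        x = soft_threshold(x - a * D.T @ (D @ x - y), a * lam)
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((10, 15))
D /= np.linalg.norm(D, axis=0)
y = rng.standard_normal(10)
L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient

# With constant step sizes 1/L, every layer is exactly one ISTA iteration.
x_net = step_lista(D, y, [1.0 / L] * 20)
x_ista = np.zeros(15)
for _ in range(20):
    x_ista = soft_threshold(x_ista - (D.T @ (D @ x_ista - y)) / L, 0.1 / L)
```

In a training setting, `steps` would be treated as learnable parameters and fitted by backpropagation through the unrolled layers.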

K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation

- 2005

In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by…

Learning Efficient Structured Sparse Models

- Computer Science, Mathematics · ICML
- 2012

A novel block-coordinate proximal splitting method is developed for the iterative solution of hierarchical sparse coding problems, and an efficient feed-forward architecture derived from its iteration faithfully approximates the exact structured sparse codes at a fraction of the complexity of standard optimization methods.

Learning Sparsely Used Overcomplete Dictionaries

- Computer Science · COLT
- 2014

We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact…