• Corpus ID: 238419491

Efficient GPU implementation of randomized SVD and its applications

@article{Struski2021EfficientGI,
  title={Efficient GPU implementation of randomized SVD and its applications},
  author={Łukasz Struski and Paweł M. Morkisz and Przemysław Spurek and S. Rodriguez Bernabeu and Tomasz Trzciński},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.03423}
}
Matrix decompositions are ubiquitous in machine learning, including applications in dimensionality reduction, data compression and deep learning algorithms. Typical solutions for matrix decompositions have polynomial complexity, which significantly increases their computational cost and time. In this work, we leverage efficient processing operations that can be run in parallel on modern Graphics Processing Units (GPUs), the predominant computing architecture used e.g. in deep learning, to reduce… 
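The paper's GPU implementation is not reproduced here, but the randomized SVD it builds on (in the style of Halko, Martinsson, and Tropp) can be sketched in a few lines of NumPy; on a GPU the same steps map to batched BLAS/LAPACK calls or a drop-in array library such as CuPy. The function name and the oversampling/power-iteration parameters below are illustrative choices, not the authors' API.

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, seed=0):
    """Sketch of a rank-k randomized SVD with oversampling p and q power iterations."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Stage A: find an approximate basis for the range of A.
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega
    for _ in range(q):
        # Power iterations sharpen the spectrum; for ill-conditioned A,
        # re-orthonormalize between steps for numerical stability.
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Stage B: SVD of the small projected matrix, then lift back.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]
```

The expensive steps (the matrix products and the thin QR) are exactly the dense, highly parallel operations that GPUs excel at, which is what motivates the paper's implementation.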

Citations

Working memory inspired hierarchical video decomposition with transformative representations

TLDR
This study is the first to introduce a visual working memory model into video decomposition, providing an interpretable, high-performance hierarchical deep learning architecture that integrates the transformative representations between sensory and control layers from the perspective of visual and cognitive neuroscience.

References

SHOWING 1-10 OF 31 REFERENCES

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

TLDR
This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.

Improved Approximation Algorithms for Large Matrices via Random Projections

  • Tamás Sarlós
  • Computer Science
    2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)
  • 2006
TLDR
The key idea is that low-dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear-time, pass-efficient matrix computations.

Faster Matrix Completion Using Randomized SVD

  • Xu Feng, Wenjian Yu, Yaohang Li
  • Computer Science
    2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)
  • 2018
TLDR
This work proposes two fast randomized algorithms for sparse matrix handling and accelerates the singular value thresholding (SVT) method to realize faster matrix completion using a faster randomized singular value decomposition (rSVD).
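The SVT iteration that this work accelerates is easy to state. Below is a minimal dense NumPy sketch using a full SVD where Feng, Yu, and Li substitute rSVD; the threshold `tau`, step size `delta`, and iteration count are illustrative defaults, not the paper's settings.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, delta=1.0, iters=500):
    """Singular value thresholding for matrix completion (sketch).

    M_obs holds the observed entries (zeros elsewhere); mask is 1 where
    an entry is observed. Each step soft-thresholds the singular values
    of the running iterate, then corrects the fit on the observed set.
    The per-iteration SVD is the bottleneck that rSVD replaces.
    """
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold the spectrum
        Y = Y + delta * mask * (M_obs - X)        # correct only observed entries
    return X
```

At a fixed point the reconstruction matches the observations exactly, while the shrinkage by `tau` keeps the iterate low rank, which is what makes a truncated randomized SVD a natural drop-in.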

Randomized LU Decomposition

Fast monte-carlo algorithms for finding low-rank approximations

TLDR
This paper develops an algorithm that is qualitatively faster, provided the entries of the matrix are sampled according to a natural probability distribution; the algorithm takes time polynomial in k, 1/ε, and log(1/δ) only, independent of m and n.
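The "natural probability distribution" in this line of work is sampling rows with probability proportional to their squared norms. A sketch of that scheme, with rescaling so the sample is unbiased for A^T A, could look like the following (the function name and sizes are illustrative):

```python
import numpy as np

def sampled_low_rank(A, k, s, seed=0):
    """Monte-Carlo rank-k approximation via norm-squared row sampling.

    Sample s rows of A with probability proportional to their squared
    norms, rescale so E[S.T @ S] = A.T @ A, and use the top-k right
    singular vectors of the small sample as an approximate row space.
    """
    rng = np.random.default_rng(seed)
    p = np.linalg.norm(A, axis=1) ** 2
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx])[:, None]   # unbiased rescaling
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt[:k].T
    return A @ V @ V.T   # project A onto the sampled row space
```

The SVD here is of an s-by-n sample rather than the full m-by-n matrix, which is the source of the running time depending on k, 1/ε, and log(1/δ) but not on m or n.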

Randomized QR with Column Pivoting

TLDR
This work proposes a truncated QR factorization with column pivoting that avoids trailing matrix updates which are used in current implementations of level-3 BLAS QR and QRCP and demonstrates strong parallel scalability on shared-memory multiple core systems using an implementation in Fortran with OpenMP.
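The trick that removes trailing-matrix updates is to choose the column pivots on a small random sketch of the matrix, then factor only the selected columns of the original. A minimal NumPy sketch of this idea follows; the sketch size and the Gram–Schmidt pivoting loop are illustrative, not the paper's blocked Fortran/OpenMP implementation.

```python
import numpy as np

def randomized_qrcp(A, k, p=5, seed=0):
    """Truncated QR with column pivoting, pivots chosen on a random sketch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Compress rows: a (k+p) x n sketch approximately preserves column geometry.
    B = rng.standard_normal((k + p, m)) @ A
    # Greedy column pivoting on the small sketch only (no trailing update of A).
    R = B.copy()
    pivots = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # largest residual column
        pivots.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)                        # deflate chosen direction
    # Factor only the k selected columns of the original matrix.
    Q, Rk = np.linalg.qr(A[:, pivots])
    return Q, Rk, np.array(pivots)
```

All heavy arithmetic happens either on the (k+p)-row sketch or on the k selected columns, which is why this approach parallelizes well compared with classical QRCP's full trailing updates.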

Randomized algorithms for the low-rank approximation of matrices

TLDR
Two recently proposed randomized algorithms for the construction of low-rank approximations to matrices are described and shown to be considerably more efficient and reliable than the classical (deterministic) ones; they also parallelize naturally.

A fast randomized algorithm for the approximation of matrices

A randomized algorithm for the decomposition of matrices

Lossy compression approach to subspace clustering