• Corpus ID: 11460300

Provably Correct Algorithms for Matrix Column Subset Selection with Selectively Sampled Data

@article{Wang2017ProvablyCA,
  title={Provably Correct Algorithms for Matrix Column Subset Selection with Selectively Sampled Data},
  author={Yining Wang and Aarti Singh},
  journal={J. Mach. Learn. Res.},
  year={2017},
  volume={18},
  pages={156:1-156:42}
}
We consider the problem of matrix column subset selection, which selects a subset of columns from an input matrix such that the input can be well approximated by the span of the selected columns. Column subset selection has been applied to numerous real-world data applications such as population genetics summarization, electronic circuits testing and recommendation systems. In many applications the complete data matrix is unavailable and one needs to select representative columns by inspecting… 
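The objective can be stated concretely. Below is a minimal sketch, assuming a fully observed matrix and plain numpy (the paper's setting, by contrast, must select columns from selectively sampled data); css_error and the random test matrix are illustrative names, not from the paper.

import numpy as np

def css_error(A, cols):
    """Frobenius-norm error of approximating A by the span of A[:, cols]."""
    C = A[:, cols]
    P = C @ np.linalg.pinv(C)            # orthogonal projector onto span(C)
    return np.linalg.norm(A - P @ A, "fro")

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 100))  # rank-8
print(css_error(A, [0, 1, 2]))        # three columns: residual error remains
print(css_error(A, list(range(8))))   # eight generic columns: error near 0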

Citations

Residual Based Sampling for Online Low Rank Approximation
TLDR
The core of the approach is an adaptive sampling technique that gives a practical and efficient algorithm for both Column Subset Selection and Principal Component Analysis, and it is proved that by sampling columns using their "residual norm" (i.e. their norm orthogonal to directions sampled so far), they end up with a significantly better dependence between the number of columns sampled, and the desired error in the approximation.
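To make the residual-norm idea above concrete, here is a toy offline sketch assuming full access to the matrix; the cited paper's online setting and its guarantees are not reproduced, and all names are illustrative.

import numpy as np

def residual_sample(A, k, seed=0):
    """Draw k columns, each with probability proportional to its squared
    norm orthogonal to the span of the columns drawn so far (assumes
    rank(A) >= k so the residual never vanishes)."""
    rng = np.random.default_rng(seed)
    R = A.astype(float)
    picked = []
    for _ in range(k):
        p = (R ** 2).sum(axis=0)               # squared residual norms
        j = int(rng.choice(A.shape[1], p=p / p.sum()))
        picked.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])  # deflate the chosen direction
        R -= np.outer(q, q @ R)
    return picked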
Select to Better Learn: Fast and Accurate Deep Learning Using Data Selection From Nonlinear Manifolds
TLDR
This work proposes a simple and efficient selection algorithm with a linear complexity order, referred to as spectrum pursuit (SP), that pursuits spectral components of the dataset using available sample points and extends the underlying linear model to more complex models such as nonlinear manifolds and graph-based models.
Optimal Analysis of Subset-Selection Based ℓp Low-Rank Approximation
  • Chen Dan
  • Computer Science, Mathematics
  • 2019
TLDR
This work studies the low-rank approximation problem for any given matrix A over ℝ or ℂ in entry-wise ℓp loss, that is, finding a rank-k matrix X such that ‖A − X‖p is minimized, and analyzes a polynomial-time bi-criteria algorithm that selects O(k log m) columns. The techniques are an application of the Riesz-Thorin interpolation theorem from harmonic analysis, which may be of independent interest for algorithmic design and analysis more broadly, and they yield improved approximation guarantees for several other algorithms with various time complexities.
Near-optimal discrete optimization for experimental design: a regret minimization approach
TLDR
A polynomial-time regret minimization framework is proposed that achieves a (1 + ε)-approximation with only O(p/ε²) design points, for all of the optimality criteria above.
Perturbations of CUR Decompositions
TLDR
Perturbation estimates are given for several variants of the CUR decomposition, illustrating how the choice of columns and rows affects the quality of the approximation; new state-of-the-art bounds are additionally obtained for some variants of CUR approximation.
Robust Training in High Dimensions via Block Coordinate Geometric Median Descent
TLDR
By applying the geometric median (GM) to only a judiciously chosen block of coordinates at a time and using a memory mechanism, one can retain the breakdown point of 1/2 for smooth non-convex problems, with non-asymptotic convergence rates comparable to SGD with GM, while achieving a significant speedup in training.
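As a rough illustration of the aggregation primitive named above, the sketch below computes a geometric median of worker gradients with Weiszfeld iterations and applies it to a single coordinate block; the block choice and all names are illustrative, and the cited method's memory mechanism is omitted.

import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld iterations for the geometric median of the rows of X."""
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), eps)
        w = 1.0 / d
        y = (w[:, None] * X).sum(axis=0) / w.sum()
    return y

grads = np.vstack([np.ones(10), np.ones(10), 100.0 * np.ones(10)])  # 1 outlier
block = slice(0, 4)                      # one coordinate block (illustrative)
agg = grads.mean(axis=0)                 # plain mean is pulled by the outlier
agg[block] = geometric_median(grads[:, block])  # GM is robust on that block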
Cross: Efficient Low-rank Tensor Completion
TLDR
This article proposes a framework for low-rank tensor completion via a novel tensor measurement scheme called Cross and develops a theoretical upper bound and the matching minimax lower bound for recovery error over certain classes of low-rank tensors for the proposed procedure.

References

SHOWING 1-10 OF 44 REFERENCES
Column Subset Selection with Missing Data
TLDR
Simulation results confirm that the problem formulation together with a block variant of the orthogonal matching pursuit algorithm often outperforms rank-revealing QR factorization, the standard choice for column subset selection, run on a zero-filled data matrix.
An improved approximation algorithm for the column subset selection problem
TLDR
A novel two-stage algorithm is presented that runs in O(min{mn², m²n}) time and returns as output an m × k matrix C consisting of exactly k columns of A; the spectral-norm bound is proved to improve upon the best previously existing result and to be roughly O(√(k!)) better than the best previous algorithmic result.
On the Power of Adaptivity in Matrix Completion and Approximation
We consider the related tasks of matrix completion and matrix approximation from missing data and propose adaptive sampling procedures for both problems. We show that adaptive sampling allows one to …
Uniform Sampling for Matrix Approximation
TLDR
It is shown that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original, which leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps.
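The basic primitive the entry above builds on can be written in a few lines; this is a hedged sketch with illustrative names, not the paper's iterative input-sparsity-time algorithm.

import numpy as np

def uniform_row_sketch(A, m, seed=0):
    """Uniformly sample m rows with replacement, rescaled so that
    E[B.T @ B] = A.T @ A."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, A.shape[0], size=m)
    return A[idx] * np.sqrt(A.shape[0] / m)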
CUR Algorithm for Partially Observed Matrices
TLDR
It is shown that only O(nr ln r) observed entries are needed by the proposed algorithm to perfectly recover a rank r matrix of size n × n, which improves the sample complexity of the existing algorithms for matrix completion.
Completing any low-rank matrix, provably
TLDR
It is shown that any low-rank matrix can be exactly recovered from as few as O(nr log² n) randomly chosen elements, provided this random choice is made according to a specific biased distribution: the probability of any element being sampled should be proportional to the sum of the leverage scores of the corresponding row and column.
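A small sketch of the biased distribution just described, assuming exact SVD access; leverage_sampling_probs is an illustrative name, not the paper's code.

import numpy as np

def leverage_sampling_probs(M, r):
    """Entry-sampling probabilities proportional to the sum of row and
    column leverage scores of M, taken at rank r (r <= min(M.shape))."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    row_lev = (U[:, :r] ** 2).sum(axis=1)   # row leverage scores
    col_lev = (Vt[:r, :] ** 2).sum(axis=0)  # column leverage scores
    P = row_lev[:, None] + col_lev[None, :]
    return P / P.sum()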
Fast approximation of matrix coherence and statistical leverage
TLDR
A randomized algorithm is proposed that takes as input an arbitrary n × d matrix A, with n ≫ d, and returns, as output, relative-error approximations to all n of the statistical leverage scores.
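In the same spirit, a simplified randomized approximation: sketch A with a Gaussian map, factor the sketch, and read approximate leverage scores off the row norms of A·R⁻¹. The cited algorithm uses fast structured transforms and a second projection for speed; the plain Gaussian map below is for clarity only, and the names are illustrative.

import numpy as np

def approx_leverage(A, oversample=4, seed=0):
    """Approximate statistical leverage scores of a tall matrix A (n >> d)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = oversample * d
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch
    _, R = np.linalg.qr(S @ A)       # R encodes the column geometry of A
    B = A @ np.linalg.inv(R)         # columns of B approximately orthonormal
    return (B ** 2).sum(axis=1)      # ~ row norms of the left factor of A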
Matrix approximation and projective clustering via volume sampling
TLDR
This paper proves that the additive error drops exponentially by iterating the sampling in an adaptive manner, and gives a pass-efficient algorithm for computing low-rank approximation with reduced additive error.
Relative-Error CUR Matrix Decompositions
TLDR
These two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.
Near-Optimal Entrywise Sampling for Data Matrices
TLDR
This work considers the problem of selecting non-zero entries of a matrix A in order to produce a sparse sketch of it, B, that minimizes ‖A − B‖, and gives sampling distributions that exhibit four important properties.
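As a hedged illustration of entrywise sampling, the sketch below uses one standard unbiased scheme (keep entries with probability proportional to their squared magnitude and importance-weight the survivors); it is not necessarily one of the four-property distributions of the entry above.

import numpy as np

def entrywise_sketch(A, s, seed=0):
    """Keep roughly s entries of A, sampled with probability proportional
    to A_ij^2 and rescaled so that E[B] = A."""
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, s * A ** 2 / (A ** 2).sum())  # inclusion probabilities
    keep = rng.random(A.shape) < p
    B = np.zeros_like(A, dtype=float)
    B[keep] = A[keep] / p[keep]       # importance-weight the kept entries
    return B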