Corpus ID: 11011017

Incomplete Pivoted QR-based Dimensionality Reduction

@article{Bermanis2016IncompletePQ,
  title={Incomplete Pivoted QR-based Dimensionality Reduction},
  author={Amit Bermanis and Aviv Rotbart and Moshe Salhov and Amir Averbuch},
  journal={ArXiv},
  year={2016},
  volume={abs/1607.03456}
}
High-dimensional big data appears in many research fields, such as image recognition, biology and collaborative filtering. Often, the exploration of such data by classic algorithms runs into difficulties due to the 'curse of dimensionality' phenomenon. Therefore, dimensionality reduction methods are applied to the data prior to its analysis. Many of these methods are based on principal components analysis, which is statistically driven; namely, they map the data into a low-dimension…
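
To make the titular idea concrete, here is a minimal sketch of dimensionality reduction via an incomplete (rank-truncated) column-pivoted QR factorization, assuming a NumPy/SciPy environment. The function name, the data layout (one point per column) and the choice of k are illustrative, not the authors' exact procedure.

import numpy as np
from scipy.linalg import qr

def pivoted_qr_embed(A, k):
    # A: d x n data matrix, one data point per column.
    # Column-pivoted QR, A[:, piv] = Q @ R, orders the pivots so that each
    # selected column adds the most "new" energy. Here the full factorization
    # is computed and then truncated; a true incomplete variant would stop
    # the factorization itself after k steps.
    Q, _, piv = qr(A, mode='economic', pivoting=True)
    Qk = Q[:, :k]                 # orthonormal basis of the first k pivot columns
    Y = Qk.T @ A                  # k x n low-dimensional coordinates
    return Y, piv[:k]             # embedding and indices of the selected points

# Toy usage: 1000 points in R^50 that lie near a 5-dimensional subspace.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 1000))
Y, selected = pivoted_qr_embed(A, k=5)
print(Y.shape, selected)          # (5, 1000) and the 5 pivot indices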
2 Citations
Analysis of Human Brain Structure Reveals that the Brain “Types” Typical of Males Are Also Typical of Females, and Vice Versa
The findings demonstrate that sex category, whether one is female or male, is not a major predictor of the variability of human brain structure: the brain types typical of females are also typical of males, and vice versa, and large sex differences are found only in the prevalence of some rare brain types.
Scalability and robustness of spectral embedding: landmark diffusion is all you need
While spectral embedding is a widely applied dimension reduction technique in various fields, so far it is still challenging to make it scalable and robust to handle "big data". Motivated by the need…

References

Cover-based bounds on the numerical rank of Gaussian kernels
A popular approach for analyzing high-dimensional datasets is to perform dimensionality reduction by applying non-parametric affinity kernels. Usually, it is assumed that the represented…
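
As a companion to this reference, a small NumPy/SciPy sketch of what "numerical rank of a Gaussian kernel" means in practice; the tolerance and the kernel width eps are illustrative choices, and the bounds themselves come from the paper, not from this code.

import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel_numerical_rank(X, eps, tol=1e-10):
    # Gaussian affinity kernel on the rows of X.
    K = np.exp(-cdist(X, X, 'sqeuclidean') / eps)
    # Numerical rank: number of singular values above tol, relative to the largest.
    svals = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(svals > tol * svals[0]))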
A global geometric framework for nonlinear dimensionality reduction.
An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
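
A minimal usage sketch of this method (Isomap), assuming the scikit-learn implementation; the dataset and parameter values are illustrative.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # 3-D points on a curled sheet
# n_neighbors defines the local metric graph; n_components is the target dimension.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1000, 2)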
Approximately-isometric diffusion maps
Diffusion Maps (DM), and other kernel methods, are utilized for the analysis of high-dimensional datasets. The DM method uses a Markovian diffusion process to model and analyze data. A…
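
Since the Markovian construction is the heart of DM, here is a compact NumPy sketch of the standard diffusion-maps embedding (Gaussian kernel, row-stochastic normalization, spectral truncation); the kernel width, diffusion time and target dimension are illustrative.

import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, eps, k=2, t=1):
    # Gaussian affinity kernel and its degree vector.
    W = np.exp(-cdist(X, X, 'sqeuclidean') / eps)
    d = W.sum(axis=1)
    # Symmetric conjugate of the row-stochastic Markov matrix P = D^{-1} W;
    # it shares P's eigenvalues and has an orthogonal eigenbasis.
    S = W / np.sqrt(np.outer(d, d))
    evals, evecs = np.linalg.eigh(S)
    idx = np.argsort(-evals)[1:k + 1]          # skip the trivial eigenvalue 1
    psi = evecs[:, idx] / np.sqrt(d)[:, None]  # right eigenvectors of P
    return (evals[idx] ** t) * psi             # diffusion coordinates at time t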
Nonlinear dimensionality reduction by locally linear embedding.
Introduces locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
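
Analogously to the Isomap sketch above, a minimal usage example of LLE via its scikit-learn implementation; the dataset and parameters are again illustrative.

from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=1000, random_state=0)
# Each point is first reconstructed from its n_neighbors nearest neighbors;
# the reconstruction weights then determine a 2-D neighborhood-preserving embedding.
Y = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
print(Y.shape)  # (1000, 2)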
Unsupervised feature selection for principal components analysis
This work considers the topic of unsupervised feature selection for PCA by leveraging algorithms for the so-called Column Subset Selection Problem (CSSP), and presents a novel two-stage algorithm for the CSSP.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
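
A minimal NumPy sketch of the basic scheme surveyed there, a randomized range finder followed by a small deterministic SVD; the oversampling parameter p and the fixed seed are illustrative choices.

import numpy as np

def randomized_svd(A, k, p=10):
    # Stage A: sample the range of A with a Gaussian test matrix and
    # orthonormalize, so that A ~= Q (Q^T A) with high probability.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ G)
    # Stage B: deterministic SVD of the small projected matrix B = Q^T A.
    B = Q.T @ A
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :k], s[:k], Vt[:k]   # rank-k approximate factors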
Subspace Sampling and Relative-Error Matrix Approximation: Column-Based Methods
Presents two polynomial-time randomized algorithms that take as input a matrix A and return as output a matrix C whose columns are sampled from A via "subspace sampling", so named because the sampling probabilities depend on the lengths of the rows of the top singular vectors and ensure that a certain subspace of interest is captured entirely.
CUR matrix decompositions for improved data analysis
An algorithm is presented that preferentially chooses columns and rows that exhibit high "statistical leverage" and exert a disproportionately large "influence" on the best low-rank fit of the data matrix, obtaining improved relative-error and constant-factor approximation guarantees in worst-case analysis, as opposed to the much coarser additive-error guarantees of prior work.
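
A small NumPy sketch of the leverage-score sampling that drives this line of work, simplified to column selection without the rescaling and with-replacement sampling used in the paper's analysis; k and c are illustrative parameters.

import numpy as np

def leverage_score_columns(A, k, c):
    # Leverage scores: squared Euclidean row norms of the top-k right
    # singular vectors, one score per column of A.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k] ** 2, axis=0)
    probs = lev / lev.sum()
    # Sample c columns with probability proportional to their leverage.
    rng = np.random.default_rng(0)
    cols = rng.choice(A.shape[1], size=c, replace=False, p=probs)
    return A[:, cols], cols        # the C factor and the selected indices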
Relative-Error CUR Matrix Decompositions
These two algorithms are the first polynomial-time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.
Fast low-rank modifications of the thin singular value decomposition
This paper develops an identity for additive modifications of a singular value decomposition (SVD) to reflect updates, downdates, shifts, and edits of the data matrix. This sets the stage for fast and…
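
A NumPy sketch of the core rank-one update identity from this reference: given the thin SVD of A, the SVD of A + a b^T is obtained from a small (r+1) x (r+1) problem. This is a simplified version without the truncation and downdating machinery of the full method.

import numpy as np

def svd_rank_one_update(U, s, Vt, a, b):
    # Thin SVD of A = U @ diag(s) @ Vt; returns the thin SVD of A + outer(a, b),
    # letting the rank grow by one.
    V = Vt.T
    m = U.T @ a                       # component of a inside span(U)
    p = a - U @ m                     # residual of a orthogonal to span(U)
    pn = np.linalg.norm(p)
    P = p / pn if pn > 1e-12 else np.zeros_like(p)
    n = V.T @ b                       # component of b inside span(V)
    q = b - V @ n                     # residual of b orthogonal to span(V)
    qn = np.linalg.norm(q)
    Q = q / qn if qn > 1e-12 else np.zeros_like(q)
    r = len(s)
    # Small core matrix K = [[diag(s), 0], [0, 0]] + [m; pn] [n; qn]^T.
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K += np.outer(np.append(m, pn), np.append(n, qn))
    # Rediagonalize the small matrix and rotate the enlarged bases.
    Uk, sk, Vkt = np.linalg.svd(K)
    U_new = np.column_stack([U, P]) @ Uk
    V_new = np.column_stack([V, Q]) @ Vkt.T
    return U_new, sk, V_new.T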