# Incremental Manifold Learning Via Tangent Space Alignment

@inproceedings{Liu2006IncrementalML,
title={Incremental Manifold Learning Via Tangent Space Alignment},
author={Xiaoming Liu and Jianwei Yin and Zhilin Feng and Jinxiang Dong},
booktitle={IAPR International Workshop on Artificial Neural Networks in Pattern Recognition},
year={2006}
}
Published in the IAPR International Workshop on Artificial Neural Networks in Pattern Recognition, 31 August 2006.
Several algorithms have been proposed to analyze the structure of high-dimensional data based on the notion of manifold learning. They have been used to extract the intrinsic characteristics of different types of high-dimensional data by performing nonlinear dimensionality reduction. Most of them operate in a “batch” mode and cannot be efficiently applied when data are collected sequentially. In this paper, we propose an incremental version (ILTSA) of LTSA (Local Tangent Space Alignment), which…
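The core first step of LTSA (and of its incremental variant) is estimating a tangent space at each point from its nearest neighbours via local PCA. A minimal numpy sketch of that step, with illustrative names and parameters not taken from the paper:

```python
import numpy as np

def local_tangent_space(X, i, k=8, d=2):
    """Estimate the d-dimensional tangent space at X[i] from its k
    nearest neighbours via local PCA -- the first stage of LTSA.
    Function and variable names are illustrative, not the authors'."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[:k]            # k nearest neighbours (incl. X[i])
    Xi = X[nbrs]
    centered = Xi - Xi.mean(axis=0)         # centre the neighbourhood
    # the leading right singular vectors span the tangent-space estimate
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return nbrs, Vt[:d]                     # orthonormal basis, shape (d, D)

# toy data lying on a 2-D plane embedded in 3-D
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 3))
_, T = local_tangent_space(X, 0, k=10, d=2)
```

LTSA then aligns these per-neighbourhood coordinates into a single global embedding; the incremental version updates this alignment as new samples arrive rather than recomputing it in batch.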

### A Manifold Learning Algorithm Based on Incremental Tangent Space Alignment

ICCCS, 2016
A new manifold learning algorithm based on incremental tangent space alignment (LITSA for short) is proposed that achieves a more accurate low-dimensional representation of the data than state-of-the-art incremental algorithms.

### Incremental Construction of Low-Dimensional Data Representations

ANNPR, 2016
An incremental version of the Grassmann&Stiefel Eigenmaps manifold learning algorithm, which has asymptotically minimal reconstruction error, is proposed in this paper; it has significantly smaller computational complexity than the initial algorithm.

### A Dictionary-Based Algorithm for Dimensionality Reduction and Data Reconstruction

22nd International Conference on Pattern Recognition (ICPR), 2014
A dictionary-based algorithm is proposed to deal with the out-of-sample extension problem for large-scale dimensionality reduction (DR) tasks; it is shown that, for both dimensionality reduction and data reconstruction, the algorithm is accurate and fast.

### The Estimation Algorithm of Laplacian Eigenvalues for the Tangent Bundle

Seventh International Conference on Computational Intelligence and Security, 2011
An estimation algorithm for Laplacian eigenvalues is presented, and the LE algorithm's advantage of preserving local geometry is extended to the global scale through the global coordinates of the tangent bundle.

### Learning to detect concepts with Approximate Laplacian Eigenmaps in large-scale and online settings

International Journal of Multimedia Information Retrieval, 2015
It is demonstrated that Approximate Laplacian Eigenmaps, which constitute a latent representation of the manifold underlying a set of images, offer a compact yet effective feature representation for the problem of concept detection.

### Information preserving and locally isometric&conformal embedding via Tangent Manifold Learning

IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2015
A new geometrically motivated locally isometric and conformal representation method is proposed, which employs a tangent manifold learning technique consisting of sample-based estimation of tangent spaces to the unknown data manifold.

## References

Showing 10 of 14 references.

### Incremental nonlinear dimensionality reduction by manifold learning

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
An incremental version of ISOMAP, one of the key manifold learning algorithms, is described and it is demonstrated that this modified algorithm can maintain an accurate low-dimensional representation of the data in an efficient manner.

### Selecting Landmark Points for Sparse Manifold Learning

NIPS, 2005
This paper presents an algorithm for selecting landmarks based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm; a continuous manifold parameterization is then found.
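The sparsity property this landmark-selection method relies on can be seen in a small demo: an l1-penalized regression drives most coefficients exactly to zero. This sketch uses scikit-learn's `Lasso` on made-up data, not the paper's landmark-selection procedure itself:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic regression where only covariates 2 and 7 are truly active;
# data and penalty strength are invented for the illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
true_coef = np.zeros(20)
true_coef[[2, 7]] = [3.0, -2.0]
y = X @ true_coef + 0.01 * rng.normal(size=100)

# the l1 penalty zeroes out the irrelevant covariates
model = Lasso(alpha=0.5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
```

In the landmark-selection setting, the same mechanism picks out a sparse subset of data points whose combinations reconstruct the rest.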

### Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment

We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…

### Nonlinear dimensionality reduction by locally linear embedding.

Science, 2000
Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
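A minimal LLE run, using scikit-learn's implementation rather than the authors' code: a 3-D swiss roll is mapped to 2 neighbourhood-preserving coordinates. The dataset and parameter values here are illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 3-D points sampled from a rolled-up 2-D sheet
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# LLE: reconstruct each point from its neighbours, then find
# low-dimensional coordinates preserving those reconstruction weights
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
```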

### A global geometric framework for nonlinear dimensionality reduction.

Science, 2000
An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.

### Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data

Proceedings of the National Academy of Sciences, 2003
The Hessian-based locally linear embedding method for recovering the underlying parametrization of scattered data $(m_i)$ lying on a manifold $M$ embedded in high-dimensional Euclidean space is described, where the isometric coordinates can be recovered up to a linear isometry.

### A Domain Decomposition Method for Fast Manifold Learning

NIPS, 2005
A fast manifold learning algorithm based on the methodology of domain decomposition is proposed that can glue the embeddings on the two subdomains into an embedding on the whole domain.

### Least angle regression

2004
A publicly available algorithm is described that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
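Least angle regression traces the full piecewise-linear coefficient path at roughly the cost of one OLS fit. A sketch via scikit-learn's `Lars` on made-up data (the stopping parameter `n_nonzero_coefs` is an illustrative choice):

```python
import numpy as np
from sklearn.linear_model import Lars

# Synthetic problem where only covariate 3 is active
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))
coef = np.zeros(10)
coef[3] = 2.0
y = X @ coef + 0.01 * rng.normal(size=80)

# LARS enters covariates one at a time, most correlated first
lars = Lars(n_nonzero_coefs=3).fit(X, y)
active = np.flatnonzero(lars.coef_)
```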

### Numerical methods for computing angles between linear subspaces

Milestones in Matrix Computation, 2007
Experimental results are given which indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT; however, HT gives principal vectors that are orthogonal to working accuracy, which is not in general true for MGS.
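The principal angles $\theta_k$ between two subspaces can be computed from the SVD of $Q_A^\top Q_B$, where $Q_A$ and $Q_B$ are orthonormal bases. A numpy sketch of this standard approach (not Björck and Golub's exact routines):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B via the
    SVD of Q_A^T Q_B; a sketch of the standard approach, not the
    paper's MGS or HT variants."""
    Qa, _ = np.linalg.qr(A)                  # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    # singular values are the cosines of the principal angles
    return np.arccos(np.clip(sigma, -1.0, 1.0))

# two planes in R^3 that share the x-axis and are otherwise orthogonal
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
angles = principal_angles(A, B)  # 0 along the shared axis, pi/2 otherwise
```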