# Structural Laplacian Eigenmaps for Modeling Sets of Multivariate Sequences

@article{Lewandowski2014StructuralLE,
  title   = {Structural Laplacian Eigenmaps for Modeling Sets of Multivariate Sequences},
  author  = {Michal Lewandowski and D. Makris and S. Velastin and Jean-Christophe Nebel},
  journal = {IEEE Transactions on Cybernetics},
  year    = {2014},
  volume  = {44},
  pages   = {936-949}
}

A novel embedding-based dimensionality reduction approach, called structural Laplacian Eigenmaps, is proposed to learn models representing any concept that can be defined by a set of multivariate sequences. This approach relies on expressing the intrinsic structure of the multivariate sequences in the form of structural constraints, which are imposed on the dimensionality reduction process to generate a compact, data-driven manifold in a low-dimensional space. This manifold is a…
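For context, the proposed method builds on the classical Laplacian Eigenmaps algorithm of Belkin and Niyogi: build a neighbourhood graph over the data, form the graph Laplacian, and embed via the smallest non-trivial generalized eigenvectors. The sketch below shows that baseline only, not the structural variant described in the abstract; the function name, the k-nearest-neighbour graph construction, and the heat-kernel bandwidth choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, k=5):
    """Classical Laplacian Eigenmaps: kNN graph + heat-kernel weights,
    then the generalized eigenproblem L y = lambda D y."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # heat-kernel bandwidth: median positive squared distance (a common heuristic)
    sigma2 = np.median(d2[d2 > 0])
    # k-nearest-neighbour adjacency with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        W[i, idx] = np.exp(-d2[i, idx] / sigma2)
    W = np.maximum(W, W.T)  # symmetrise the graph
    D = np.diag(W.sum(axis=1))  # degree matrix (positive diagonal)
    L = D - W  # unnormalised graph Laplacian
    # generalized symmetric eigenproblem; eigenvalues come back ascending
    vals, vecs = eigh(L, D)
    # drop the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:n_components + 1]
```

The structural variant of the paper would additionally encode the sequential and inter-sequence structure of the data as constraints on this optimization, which the baseline above does not attempt.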


#### 21 Citations

Learning representations from multiple manifolds

- Mathematics, Computer Science
- Pattern Recognit.
- 2016

A framework to learn a joint embedding space from multiple manifold data sets, which preserves each manifold's local geometric structure and the inter-manifold correspondence structure, and which works as an extension of current state-of-the-art spectral-embedding approaches to the multiple-manifold setting.

Elastic Functional Coding of Riemannian Trajectories

- Mathematics, Computer Science
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2017

The TSRVF representation, and accompanying statistical summaries of Riemannian trajectories, are utilized to extend existing coding methods such as PCA, KSVD, and Label Consistent KSVD to Riemannian trajectories, or more generally to Riemannian functions, showing that such coding efficiently captures trajectories in applications such as action recognition, stroke rehabilitation, visual speech recognition, clustering, and diverse sequence sampling.

Unsupervised feature selection by combining subspace learning with feature self-representation

- Mathematics, Computer Science
- Pattern Recognit. Lett.
- 2018

A novel unsupervised feature selection algorithm that uses properties of the data to construct a self-representation coefficient matrix, utilizes sparse representation to find the sparse structure of that matrix, and embeds a hypergraph Laplacian regularization term to represent multiple relations among features.

Low-rank unsupervised graph feature selection via feature self-representation

- Computer Science
- Multimedia Tools and Applications
- 2016

This paper proposes a new feature-level self-representation loss function plus a sparsity regularizer (an ℓ2,1-norm regularizer) to select representative features, and imposes a low-rank constraint on the coefficient matrix to reduce the impact of noise and outliers.

Structured Manifold Broad Learning System: A Manifold Perspective for Large-Scale Chaotic Time Series Analysis and Prediction

- Computer Science
- IEEE Transactions on Knowledge and Data Engineering
- 2019

A unified framework for nonuniform embedding, dynamical system revealing, and time series prediction, termed the Structured Manifold Broad Learning System (SM-BLS), which provides a homogeneous way to recover the chaotic attractor from multivariate and heterogeneous time series.

Graph self-representation method for unsupervised feature selection

- Computer Science
- Neurocomputing
- 2017

A new unsupervised feature selection method that integrates a subspace learning technique (Locality Preserving Projection) into the feature selection model and adds a graph regularization term, so that feature selection and subspace learning are conducted simultaneously.

Unsupervised Hypergraph Feature Selection with Low-Rank and Self-Representation Constraints

- Computer Science
- ADMA
- 2016

The feature-level self-representation property, a low-rank constraint, a hypergraph regularizer, and a sparsity-inducing regularizer are integrated in a unified framework to conduct unsupervised feature selection and achieve a stable feature selection model.

The Chaotic Attractor Analysis of DJIA Based on Manifold Embedding and Laplacian Eigenmaps

- Mathematics
- 2016

By using the techniques of Manifold Embedding and Laplacian Eigenmaps, a novel strategy has been proposed in this paper to detect the chaos of the Dow Jones Industrial Average. Firstly, the chaotic…

1D representation of Laplacian eigenmaps and dual k-nearest neighbours for unified video coding

- Computer Science
- IET Image Process.
- 2020

Simulation experiments show that the proposed framework of video coding based on Laplacian eigenmaps can attain better BPP and PSNR performance than state-of-the-art methods such as High Efficiency Video Coding.

A clustered locally linear approach on face manifolds for pose estimation

- Mathematics, Computer Science
- Pattern Analysis and Applications
- 2016

This paper proposes an unsupervised pose estimation method for face images based on clustered locally linear manifolds using discriminant analysis, arguing that the local neighbourhood is approximately linear and provides better between-class separation, so that the classification problem becomes simpler.

#### References

Showing 1–10 of 68 references.

Temporal Extension of Laplacian Eigenmaps for Unsupervised Dimensionality Reduction of Time Series

- Mathematics, Computer Science
- 2010 20th International Conference on Pattern Recognition
- 2010

A novel non-linear dimensionality reduction method, called Temporal Laplacian Eigenmaps, is introduced to efficiently process time series data; its lower computational cost and generalisation abilities suggest it is scalable to larger datasets.

Patch Alignment for Dimensionality Reduction

- Computer Science
- IEEE Transactions on Knowledge and Data Engineering
- 2009

A new dimensionality reduction algorithm is developed, termed discriminative locality alignment (DLA), by imposing discriminative information in the part optimization stage; thorough empirical studies demonstrate the effectiveness of DLA compared with representative dimensionality reduction algorithms.

Using Laplacian eigenmaps latent variable model and manifold learning to improve speech recognition accuracy

- Mathematics, Computer Science
- Speech Commun.
- 2010

This paper demonstrates the application of the Laplacian eigenmaps latent variable model (LELVM) to the task of speech recognition, with results implying the superiority of the proposed method over standard PCA methods.

A global geometric framework for nonlinear dimensionality reduction.

- Medicine
- Science
- 2000

An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set and efficiently computes a globally optimal solution, and is guaranteed to converge asymptotically to the true structure.

Topologically-constrained latent variable models

- Mathematics, Computer Science
- ICML '08
- 2008

A range of approaches for embedding data in a non-Euclidean latent space for the Gaussian process latent variable model, which make it possible to learn transitions between motion styles even though such transitions are not present in the data.

Nonlinear dimensionality reduction by locally linear embedding.

- Medicine, Computer Science
- Science
- 2000

Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.

Separating style and content on a nonlinear manifold

- Mathematics
- CVPR 2004
- 2004

Bilinear and multi-linear models have been successful in decomposing static image ensembles into perceptually orthogonal sources of variations, e.g., separation of style and content. If we consider…

Learning a Joint Manifold Representation from Multiple Data Sets

- Mathematics, Computer Science
- 2010 20th International Conference on Pattern Recognition
- 2010

A framework to learn an embedding of all the points on all the manifolds in a way that preserves the local structure on each manifold and collapses all the different manifolds into one manifold in the embedding space, while preserving the implicit correspondences between the points across different data sets.

Local distance preservation in the GP-LVM through back constraints

- Mathematics, Computer Science
- ICML
- 2006

This paper provides an overview of dimensionality reduction techniques, placing the emphasis on the kind of distance relation preserved, and shows how the GP-LVM can be generalized, through back constraints, to additionally preserve local distances.

Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment

- Mathematics
- 2004

We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local…