LEt-SNE: A Hybrid Approach to Data Embedding and Visualization Of Hyperspectral Imagery

@inproceedings{Shukla2020LEtSNEAH,
  title={LEt-SNE: A Hybrid Approach to Data Embedding and Visualization Of Hyperspectral Imagery},
  author={Megh Shukla and Biplab Banerjee and Krishna Mohan Buddhiraju},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={3722--3726}
}
Hyperspectral Imagery (and Remote Sensing in general) captured from UAVs or satellites is highly voluminous due to the large spatial extent and range of wavelengths captured. Since analyzing these images requires substantial computational time and power, various dimensionality reduction techniques have been used for feature reduction. Some popular techniques among these falter when applied to Hyperspectral Imagery due to the curse of dimensionality. In this paper, we…
Bayesian Uncertainty and Expected Gradient Length - Regression: Two Sides Of The Same Coin?
TLDR
This work shows that expected gradient length in regression is equivalent to Bayesian uncertainty, and performs experimental validation on two human pose datasets (MPII and LSP/LSPET), highlighting the interpretability and competitiveness of EGL++ with different active learning algorithms for human pose estimation.

References

Showing 1-10 of 24 references
Spherical Stochastic Neighbor Embedding of Hyperspectral Data
  • D. Lunga, O. Ersoy
  • Mathematics, Computer Science
  • IEEE Transactions on Geoscience and Remote Sensing
  • 2013
TLDR
A novel approach that embeds hyperspectral data, transformed into bilateral probability similarities, onto a nonlinear unit-norm coordinate system based on a stochastic objective function of spherical coordinates, which allows the use of an exit probability distribution to discover the nonlinear characteristics inherent in hyperspectral data.
Visualizing Data using t-SNE
We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding…
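The heavy-tailed map similarity that gives t-SNE its name can be illustrated in a few lines of plain Python. A minimal sketch with made-up toy points, not the paper's implementation: in the low-dimensional map, pairwise similarities follow a Student-t kernel with one degree of freedom, q_ij ∝ (1 + ||y_i − y_j||²)⁻¹.

```python
def tsne_q(points):
    """Joint Student-t similarities over a toy low-dimensional embedding."""
    n = len(points)
    # Unnormalized kernel values for every ordered pair i != j.
    w = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            w[(i, j)] = 1.0 / (1.0 + d2)
    z = sum(w.values())  # normalize over all pairs so q sums to 1
    return {pair: v / z for pair, v in w.items()}

# Toy 2-D map: nearby points receive a larger similarity than distant ones.
q = tsne_q([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)])
assert q[(0, 1)] > q[(0, 2)]
```

The heavy tail of this kernel (compared to the Gaussian used in the high-dimensional space) is what lets moderately dissimilar points sit far apart in the map, alleviating the crowding problem.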
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
TLDR
This work proposes a geometrically motivated algorithm for representing the high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering.
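As a quick illustration of the object at the heart of this method, a sketch of the unnormalized graph Laplacian L = D − W whose bottom eigenvectors give the embedding (assumption: a small hand-written symmetric weight matrix stands in for the neighborhood graph the algorithm would build from data):

```python
def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a symmetric weight matrix."""
    n = len(W)
    d = [sum(row) for row in W]  # degree of each node
    return [[(d[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

# Path graph on three nodes: 0 - 1 - 2 (unit edge weights).
L = graph_laplacian([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]])
```

Every row of L sums to zero, so the constant vector is always in its null space; the embedding coordinates come from the next-smallest eigenvectors, which vary slowly over strongly connected neighborhoods.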
Nonlinear dimensionality reduction by locally linear embedding.
TLDR
Locally linear embedding (LLE) is introduced: an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
Algorithms for manifold learning
TLDR
The motivation, background, and algorithms proposed for manifold learning are discussed, and Isomap, Locally Linear Embedding, Laplacian Eigenmaps, Semidefinite Embedding, and a host of variants of these algorithms are examined.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
TLDR
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher-level representations.
Learning a Parametric Embedding by Preserving Local Structure
TLDR
The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space, and evaluates the performance in experiments on three datasets.
Stochastic Optimization for Kernel PCA
TLDR
This work formulates kernel PCA as a stochastic composite optimization problem, where a nuclear norm regularizer is introduced to promote low-rankness, and develops a simple algorithm based on stochastic proximal gradient descent that converges to the optimal solution at an O(1/T) rate.
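For context, the batch computation that this stochastic method approximates can be sketched as: build an RBF kernel matrix, double-center it, and extract the leading component by power iteration. An illustrative pure-Python sketch, not the paper's algorithm; the toy points and the gamma value are made up:

```python
import math
import random

def rbf_kernel(X, gamma=1.0):
    """K[i][j] = exp(-gamma * ||x_i - x_j||^2) for a list of point tuples."""
    n = len(X)
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
             for j in range(n)] for i in range(n)]

def center(K):
    """Double-center the kernel matrix (mean-center the implicit features)."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def top_eigenpair(K, iters=200):
    """Power iteration for the leading eigenpair of a symmetric PSD matrix."""
    n = len(K)
    random.seed(0)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Two well-separated 1-D clusters; the first kernel principal component
# should separate them (same sign within a cluster, opposite across).
K = center(rbf_kernel([(0.0,), (0.1,), (3.0,), (3.1,)]))
lam, v = top_eigenpair(K)
```

The paper's contribution is avoiding this O(n²) batch eigendecomposition by optimizing a stochastic composite objective instead; the sketch only shows what is being approximated.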
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
TLDR
The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance.
SLIC Superpixels Compared to State-of-the-Art Superpixel Methods
TLDR
A new superpixel algorithm is introduced, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels and is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
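The k-means core that SLIC adapts can be sketched in plain Python. This is illustrative only: real SLIC clusters combined color-and-position feature vectors, seeds centers on a regular grid, and restricts each assignment to a local search window, which is where its speed comes from.

```python
def kmeans(points, centers, iters=10):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    cluster-mean updates. SLIC runs this same loop on (color, position)
    features with a spatially limited search."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[k]
            for k, c in enumerate(clusters)
        ]
    return centers, clusters

# Two toy 2-D clusters with centers seeded near each group.
pts = [(0.0, 0.0), (0.2, 0.1), (4.0, 4.0), (4.1, 3.9)]
centers, clusters = kmeans(pts, centers=[(0.0, 0.0), (4.0, 4.0)])
```

Restricting the distance search to a 2S x 2S window around each center (S being the grid interval) is what turns this O(k·n) loop into SLIC's O(n) per iteration.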