On variants of the Johnson–Lindenstrauss lemma

@article{Matouek2008OnVO,
  title={On variants of the Johnson–Lindenstrauss lemma},
  author={Ji{\v{r}}{\'i} Matou{\v{s}}ek},
  journal={Random Structures \& Algorithms},
  year={2008},
  volume={33}
}
  • J. Matoušek
  • Published 1 September 2008
  • Mathematics, Computer Science
  • Random Structures & Algorithms
The Johnson–Lindenstrauss lemma asserts that an n‐point set in any Euclidean space can be mapped to a Euclidean space of dimension k = O(ε^{-2} log n) so that all distances are preserved up to a multiplicative factor between 1 − ε and 1 + ε. Known proofs obtain such a mapping as a linear map R^n → R^k with a suitable random matrix. We give a simple and self‐contained proof of a version of the Johnson–Lindenstrauss lemma that subsumes a basic version by Indyk and Motwani and a version more suitable…
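
To make the construction in the abstract concrete, here is a minimal sketch (not taken from the paper) of the standard random-matrix embedding: a k × n Gaussian matrix scaled by 1/√k, with k on the order of ε^{-2} log m for m points. The function name jl_embed and the constant 8 are illustrative assumptions, not tight values from the paper.

import numpy as np

def jl_embed(points, eps, rng=None):
    # Map an (m, n) array of m points in R^n down to R^k, k ~ eps^-2 log m,
    # using a k x n Gaussian matrix scaled by 1/sqrt(k).
    rng = np.random.default_rng() if rng is None else rng
    m, n = points.shape
    k = int(np.ceil(8 * np.log(m) / eps ** 2))   # constant 8 is illustrative, not tight
    A = rng.standard_normal((k, n)) / np.sqrt(k)
    return points @ A.T

# Pairwise distances between rows of Y approximate those of X within (1 +/- eps), w.h.p.
X = np.random.default_rng(0).standard_normal((100, 10_000))
Y = jl_embed(X, eps=0.25)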

Almost Optimal Explicit Johnson-Lindenstrauss Transformations

  • R. Meka
  • Computer Science, Mathematics
    ArXiv
  • 2010
This work addresses the question of explicitly constructing linear embeddings that satisfy the Johnson-Lindenstrauss property and provides a construction with an almost optimal use of randomness: O(log n log log n) random bits for embedding n dimensions to O(log(1/δ)/ε²) dimensions with error probability at most δ, and distortion at most ε.

Almost Optimal Explicit Johnson-Lindenstrauss Families

This work gives explicit constructions with an almost optimal use of randomness of linear embeddings satisfying the Johnson-Lindenstrauss property, showing a lower bound of Ω(log(1/δ)/ε²) on the embedding dimension.

The Johnson-Lindenstrauss Transform: An Empirical Study

This paper presents the first comprehensive study of the empirical behavior of algorithms for dimensionality reduction based on the JL Lemma, and answers a number of important questions about the quality of the embeddings and the performance of algorithms used to compute them.

On randomness reduction in the Johnson-Lindenstrauss lemma

A refinement of the so-called fast Johnson-Lindenstrauss transform, due to Ailon and Chazelle (2006) and Matoušek (2008), is proposed. While it preserves the time efficiency and simplicity of…

A sparse Johnson–Lindenstrauss transform

A sparse version of the Johnson-Lindenstrauss transform, the fundamental tool in dimension reduction, is obtained, using hashing and local densification to construct a sparse projection matrix with just Õ(1/ε) non-zero entries per column, and a matching lower bound on the sparsity is shown for a large class of projection matrices.
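
As a rough illustration of the hashing idea in this snippet, the sketch below builds a sparse projection in which each input coordinate is sent to s distinct random rows with random signs and scaling 1/√s. This is a generic sparse-JL recipe, not necessarily this paper's exact construction; the name sparse_jl_matrix and the parameter values are illustrative assumptions.

import numpy as np
from scipy.sparse import csc_matrix

def sparse_jl_matrix(n, k, s, rng=None):
    # k x n hashing-style projection: each column has exactly s non-zero entries,
    # placed at s distinct random rows with random +/-1 signs, scaled by 1/sqrt(s).
    rng = np.random.default_rng() if rng is None else rng
    rows = np.concatenate([rng.choice(k, size=s, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), s)
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return csc_matrix((vals, (rows, cols)), shape=(k, n))

# Each input coordinate touches only s of the k output coordinates.
S = sparse_jl_matrix(n=10_000, k=500, s=8)
x = np.random.default_rng(1).standard_normal(10_000)
y = S @ x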

Johnson‐Lindenstrauss lemma for circulant matrices

We prove a variant of a Johnson‐Lindenstrauss lemma for matrices with circulant structure. This approach makes it possible to minimize the randomness used, is easy to implement, and provides good running times.
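
A minimal sketch of the circulant idea described above: flip signs with a random diagonal, apply one circulant matrix (a circular convolution computed via the FFT), and keep the first k coordinates, so only O(n) random numbers are drawn. The normalization 1/√k and the function name circulant_jl are illustrative assumptions, not the paper's exact statement.

import numpy as np

def circulant_jl(X, k, rng=None):
    # Project the rows of X (shape m x n) to R^k: multiply by a random +/-1 diagonal,
    # circularly convolve with one Gaussian vector g (the circulant matrix, applied via FFT),
    # and keep the first k coordinates.
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape
    d = rng.choice([-1.0, 1.0], size=n)          # random sign flips
    g = rng.standard_normal(n)                   # first column of the circulant matrix
    G = np.fft.fft(g)
    Y = np.fft.ifft(np.fft.fft(X * d, axis=1) * G, axis=1).real
    return Y[:, :k] / np.sqrt(k)                 # illustrative normalization

X = np.random.default_rng(2).standard_normal((50, 1024))
Y = circulant_jl(X, k=128)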

Johnson-Lindenstrauss Transforms with Best Confidence

This work develops Johnson-Lindenstrauss distributions with optimal, data-oblivious, statistical confidence bounds, which improve upon prior works in terms of statistical accuracy, as well as exactly determine the no-go regimes for data-oblivious approaches.

A Sparser Johnson-Lindenstrauss Transform

This is the first distribution to provide an asymptotic improvement over the Θ(k) sparsity bound for all values of ε and δ, and it can be plugged into the algorithms of [Clarkson-Woodruff, STOC 2009] to yield the fastest known streaming algorithms for numerical linear algebra problems such as approximate linear regression and best rank-k approximation.

New bounds for circulant Johnson-Lindenstrauss embeddings

The bounds in this paper offer a small improvement over the current best bounds for Gaussian circulant JL embeddings for certain parameter regimes and are derived using more direct methods.

Hashing-like Johnson–Lindenstrauss transforms and their extreme singular values

These matrices are demonstrated to be JLTs, and their smallest and largest singular values are estimated non-asymptotically in terms of known quantities using a technique from geometric functional analysis, that is, without any unknown “absolute constant” as is often the case in random matrix theory.
...

References


An elementary proof of a theorem of Johnson and Lindenstrauss

A result of Johnson and Lindenstrauss [13] shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ε²)‐dimensional Euclidean space such that the distance…

The Johnson-Lindenstrauss lemma and the sphericity of some graphs

Database-friendly random projections: Johnson-Lindenstrauss with binary coins

  • D. Achlioptas
  • Computer Science, Mathematics
    J. Comput. Syst. Sci.
  • 2003
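
The title above refers to Achlioptas's projection matrices whose entries are generated with binary coins. The sketch below shows the two variants usually associated with that paper (entries ±1 with probability 1/2 each, or √3·{+1, 0, −1} with probabilities 1/6, 2/3, 1/6), scaled by 1/√k; treat it as an illustrative reconstruction rather than a verbatim transcription of the paper.

import numpy as np

def achlioptas_matrix(n, k, sparse=True, rng=None):
    # k x n projection whose entries need only coin flips (no Gaussians):
    #   dense variant:  +1 or -1, each with probability 1/2
    #   sparse variant: sqrt(3) * {+1, 0, -1} with probabilities {1/6, 2/3, 1/6}
    # Both variants are scaled by 1/sqrt(k).
    rng = np.random.default_rng() if rng is None else rng
    if sparse:
        R = rng.choice([np.sqrt(3.0), 0.0, -np.sqrt(3.0)], size=(k, n), p=[1/6, 2/3, 1/6])
    else:
        R = rng.choice([1.0, -1.0], size=(k, n))
    return R / np.sqrt(k)

R = achlioptas_matrix(n=10_000, k=500)
x = np.random.default_rng(3).standard_normal(10_000)
y = R @ x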

Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform

A new low-distortion embedding of ℓ_2^d into ℓ_p (p = 1, 2) is introduced, called the Fast Johnson-Lindenstrauss Transform (FJLT), based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform.
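
As a rough sketch of the structure mentioned in this snippet (random signs D, a Walsh–Hadamard transform H, then a sparse Gaussian projection P): the version below uses SciPy's dense hadamard matrix instead of a fast transform, and its normalization and sparsity parameter q are illustrative assumptions rather than the paper's exact choices.

import numpy as np
from scipy.linalg import hadamard

def fjlt(X, k, q, rng=None):
    # Rough P * H * D sketch: random signs D, a normalized Walsh-Hadamard transform H
    # (dense here; a fast version would use an FFT-like algorithm), then a sparse
    # Gaussian projection P whose entries are non-zero with probability q.
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape                               # n must be a power of two for hadamard()
    D = rng.choice([-1.0, 1.0], size=n)
    H = hadamard(n) / np.sqrt(n)                 # orthogonal Hadamard matrix
    mask = rng.random((k, n)) < q
    P = np.where(mask, rng.normal(0.0, 1.0 / np.sqrt(q), size=(k, n)), 0.0)
    return (X * D) @ H @ P.T / np.sqrt(k)        # illustrative normalization

X = np.random.default_rng(4).standard_normal((50, 1024))
Y = fjlt(X, k=128, q=0.05)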

Projection constants of symmetric spaces and variants of Khintchine's inequality

The projection constants of the ℓ_p^n-spaces for 1 ≦ p ≦ 2 satisfy … in the real case and … in the complex case. Further, there is c < 1 such that the projection constant of any n-dimensional…

On the impossibility of dimension reduction in l1

It is proved that there is no analog of the Johnson–Lindenstrauss lemma for ℓ_1; in fact, embedding with any constant distortion requires n^{Ω(1)} dimensions.

Extensions of Lipschitz mappings into Hilbert space

(Here ‖g‖_Lip is the Lipschitz constant of the function g.) A classical result of Kirszbraun [14, p. 48] states that L(ℓ_2, n) = 1 for all n, but it is easy to see that L(X, n) → ∞ as n → ∞ for…

Lectures on discrete geometry

This book is primarily a textbook introduction to various areas of discrete geometry, in which several key results and methods are explained, in an accessible and concrete manner, in each area.

Problems and results in extremal combinatorics--I

  • N. Alon
  • Mathematics
    Discret. Math.
  • 2003

Approximate nearest neighbors: towards removing the curse of dimensionality

Two algorithms for the approximate nearest neighbor problem in high-dimensional spaces are presented, which require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d.