# On variants of the Johnson–Lindenstrauss lemma

```bibtex
@article{Matousek2008OnVO,
  title={On variants of the Johnson--Lindenstrauss lemma},
  author={Ji{\v{r}}{\'i} Matou{\v{s}}ek},
  journal={Random Structures \& Algorithms},
  year={2008},
  volume={33}
}
```
• J. Matoušek
• Published 1 September 2008
• Mathematics, Computer Science
• Random Structures & Algorithms
The Johnson–Lindenstrauss lemma asserts that an n‐point set in any Euclidean space can be mapped to a Euclidean space of dimension k = O(ε⁻² log n) so that all distances are preserved up to a multiplicative factor between 1 − ε and 1 + ε. Known proofs obtain such a mapping as a linear map Rⁿ → Rᵏ with a suitable random matrix. We give a simple and self‐contained proof of a version of the Johnson–Lindenstrauss lemma that subsumes a basic version by Indyk and Motwani and a version more suitable…
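The dense Gaussian construction described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch only: the leading constant 4 in the target dimension and the single-pair distance check are assumptions for the demo, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_embed(X, eps):
    """Embed the rows of X into k = O(eps^-2 log n) dimensions with a
    dense Gaussian random matrix -- one standard JL construction."""
    n, d = X.shape
    k = int(np.ceil(4 * np.log(n) / eps**2))  # constant 4 is illustrative
    R = rng.normal(size=(d, k)) / np.sqrt(k)  # entries ~ N(0, 1/k)
    return X @ R

# 50 points in 1000 dimensions, target distortion eps = 0.5
n, d, eps = 50, 1000, 0.5
X = rng.normal(size=(n, d))
Y = jl_embed(X, eps)

# the distance between one pair of points is preserved up to ~(1 +/- eps)
ratio = np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1])
```

With these parameters the embedding dimension is k = ⌈4 ln 50 / 0.25⌉ = 63, and the per-pair distance ratio concentrates around 1 with standard deviation on the order of √(2/k).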
258 Citations
• R. Meka
• Computer Science, Mathematics
ArXiv
• 2010
This work addresses the question of explicitly constructing linear embeddings that satisfy the Johnson-Lindenstrauss property and provides a construction with an almost optimal use of randomness: O(log n log log n) random bits for embedding n dimensions into O(log(1/δ)/ε²) dimensions with error probability at most δ and distortion at most ε.
• Mathematics, Computer Science
APPROX-RANDOM
• 2011
This work gives explicit constructions with an almost optimal use of randomness of linear embeddings satisfying the Johnson-Lindenstrauss property, showing a lower bound of Ω(log(1/δ)/ε²) on the embedding dimension.
• Computer Science
ALENEX
• 2011
This paper presents the first comprehensive study of the empirical behavior of algorithms for dimensionality reduction based on the JL Lemma, and answers a number of important questions about the quality of the embeddings and the performance of algorithms used to compute them.
A refinement of the so-called fast Johnson-Lindenstrauss transform, due to Ailon and Chazelle (2006) and Matoušek (2008), is proposed. While it preserves the time efficiency and simplicity of…
• Computer Science
STOC '10
• 2010
A sparse version of the fundamental tool in dimension reduction, the Johnson-Lindenstrauss transform, is obtained using hashing and local densification to construct a sparse projection matrix with just ~O(1/ε) non-zero entries per column, and a matching lower bound on the sparsity is shown for a large class of projection matrices.
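A minimal NumPy sketch of the sparse-projection idea: each input coordinate hashes to s rows with random signs, so every column has exactly s non-zeros. This follows the style of later sparse JL constructions (e.g. Kane–Nelson), not the cited paper's exact hashing-and-densification scheme; the parameters below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_jl_matrix(d, k, s):
    """Build a k x d projection matrix with exactly s non-zeros per
    column: each column picks s random rows and random signs, scaled
    by 1/sqrt(s) so column norms are 1."""
    S = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)   # hash column j to s rows
        signs = rng.choice([-1.0, 1.0], size=s)       # random sign per entry
        S[rows, j] = signs / np.sqrt(s)
    return S

d, k, s = 500, 100, 8
S = sparse_jl_matrix(d, k, s)
x = rng.normal(size=d)
y = S @ x                                  # costs s, not k, operations per coordinate
nnz_per_col = np.count_nonzero(S, axis=0)  # exactly s everywhere
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

Since each column has unit norm and independent signs, E‖Sx‖² = ‖x‖², so the norm ratio concentrates around 1 while matrix–vector multiplication touches only s entries per input coordinate.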
• Computer Science, Mathematics
Random Struct. Algorithms
• 2011
We prove a variant of the Johnson‐Lindenstrauss lemma for matrices with circulant structure. This approach minimizes the randomness used, is easy to implement, and provides good running times.
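The appeal of circulant structure is that the matrix–vector product reduces to FFTs. Below is a sketch of one common partial-circulant variant (random sign flip, circular convolution with a Gaussian vector via FFT, truncation to the first k coordinates), assuming NumPy; the cited paper's exact construction and scaling may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def circulant_jl(x, k):
    """Partial circulant embedding: flip signs with a random diagonal,
    circularly convolve with a random Gaussian vector via FFT, and keep
    the first k coordinates.  Uses O(d) random bits and O(d log d) time."""
    d = x.shape[0]
    signs = rng.choice([-1.0, 1.0], size=d)  # random diagonal of +/-1
    g = rng.normal(size=d)                   # first row of the circulant matrix
    # circular convolution g * (signs . x) computed in the Fourier domain
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(signs * x)).real
    return conv[:k] / np.sqrt(k)

d, k = 1024, 128
x = rng.normal(size=d)
y = circulant_jl(x, k)
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

Each output coordinate has variance ‖x‖², so E‖y‖² = ‖x‖²; the trade-off against i.i.d. constructions is that the coordinates are dependent, which is what the cited variant's analysis has to control.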
This work develops Johnson-Lindenstrauss distributions with optimal, data-oblivious, statistical confidence bounds, which improve upon prior works in terms of statistical accuracy, and exactly determines the no-go regimes for data-oblivious approaches.
• Computer Science, Mathematics
ArXiv
• 2010
This is the first distribution to provide an asymptotic improvement over the Θ(k) sparsity bound for all values of ε, δ, and allows the fastest known streaming algorithms for numerical linear algebra problems such as approximate linear regression and best rank-k approximation to be plugged into algorithms of [Clarkson-Woodruff, STOC 2009].
• Computer Science, Mathematics
ArXiv
• 2013
The bounds in this paper offer a small improvement over the current best bounds for Gaussian circulant JL embeddings for certain parameter regimes and are derived using more direct methods.
• Mathematics, Computer Science
• 2022
These matrices are demonstrated to be JLTs, and their smallest and largest singular values are estimated non-asymptotically in terms of known quantities using a technique from geometric functional analysis, that is, without any unknown “absolute constant” as is often the case in random matrix theory.

## References

SHOWING 1-10 OF 16 REFERENCES

• Mathematics, Computer Science
Random Struct. Algorithms
• 2003
A result of Johnson and Lindenstrauss [13] shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ε²)‐dimensional Euclidean space such that the distance…
• Computer Science
STOC '06
• 2006
A new low-distortion embedding of ℓ₂ᵈ into ℓ_p (p = 1, 2) is introduced, called the Fast Johnson–Lindenstrauss Transform (FJLT), based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform.
• Mathematics
• 1999
Abstract The projection constants of the ℓ_pⁿ-spaces for 1 ≤ p ≤ 2 satisfy … in the real case and … in the complex case. Further, there is c < 1 such that the projection constant of any n-dimensional…
• Computer Science
JACM
• 2005
It is proved that there is no analog of the Johnson--Lindenstrauss lemma for ℓ₁; in fact, embedding with any constant distortion requires n^Ω(1) dimensions.
(Here ‖g‖_Lip is the Lipschitz constant of the function g.) A classical result of Kirszbraun's [14, p. 48] states that L(ℓ₂, n) = 1 for all n, but it is easy to see that L(X, n) → ∞ as n → ∞ for…
This book is primarily a textbook introduction to various areas of discrete geometry, in which several key results and methods are explained, in an accessible and concrete manner, in each area.
• Computer Science
STOC '98
• 1998
Two algorithms for the approximate nearest neighbor problem in high-dimensional spaces are presented, which require space that is only polynomial in n and d, while achieving query times that are sublinear in n and polynomial in d.