
- Vadim Lyubashevsky, Chris Peikert, Oded Regev
- EUROCRYPT
- 2010

The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications.…
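The LWE definition above can be illustrated with a toy sample generator — a minimal sketch under illustrative parameters (real instantiations use much larger dimensions and moduli, and a discrete Gaussian error distribution):

```python
import random

def lwe_sample(s, q, noise_bound):
    """One LWE sample: (a, <a, s> + e mod q), where a is uniform
    and the small error e hides the exact linear relation."""
    a = [random.randrange(q) for _ in s]
    e = random.randint(-noise_bound, noise_bound)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Toy parameters for illustration only.
q, n = 97, 8
s = [random.randrange(q) for _ in range(n)]
a, b = lwe_sample(s, q, noise_bound=2)

# Without noise, b - <a, s> would be 0 mod q; with noise it is small.
residue = (b - sum(ai * si for ai, si in zip(a, s))) % q
print(min(residue, q - residue) <= 2)  # True: the residue is the small error
```

The distinguishing problem is exactly this: given many pairs (a, b), decide whether the b values follow the noisy linear rule above or are uniformly random mod q.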

- Léo Ducas, Alain Durmus, Tancrède Lepoint, Vadim Lyubashevsky
- IACR Cryptology ePrint Archive
- 2013

Our main result is a construction of a lattice-based digital signature scheme that represents an improvement, both in theory and in practice, over today’s most efficient lattice schemes. The novel scheme is obtained as a result of a modification of the rejection sampling algorithm that is at the heart of Lyubashevsky’s signature scheme (Eurocrypt, 2012) and…
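The rejection sampling step referred to above can be sketched generically — a toy illustration with a uniform proposal and a Gaussian-shaped target, not the scheme's actual distributions or parameters:

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M):
    """Generic rejection sampling: draw x from the proposal and accept
    with probability target_pdf(x) / (M * proposal_pdf(x)).  In the
    signature setting, rejecting candidate signatures is what makes
    the output distribution independent of the secret key."""
    while True:
        x = proposal_sample()
        if random.random() < target_pdf(x) / (M * proposal_pdf(x)):
            return x

# Toy instance: shape a uniform proposal on [-4, 4] into a
# Gaussian-like target (illustrative parameters only).
sigma = 1.0
target = lambda x: math.exp(-x * x / (2 * sigma * sigma))  # max value 1
proposal = lambda: random.uniform(-4, 4)
proposal_pdf = lambda x: 1 / 8
M = 8  # ensures target(x) <= M * proposal_pdf(x) = 1 everywhere

samples = [rejection_sample(target, proposal, proposal_pdf, M)
           for _ in range(1000)]
print(abs(sum(samples) / len(samples)) < 0.5)  # mean is near 0
```

The constant M must dominate the ratio of target to proposal everywhere; a smaller M would bias the accepted samples, which is why tuning this step matters for both correctness and the number of repetitions.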

- Vadim Lyubashevsky, Daniele Micciancio
- Electronic Colloquium on Computational Complexity
- 2005

The generalized knapsack problem is the following: given m random elements a1, …, am in a ring R, and a target t ∈ R, find z1, …, zm ∈ D such that ∑ ai·zi = t, where D is some fixed subset of R. In (Micciancio, FOCS 2002) it was proved that for appropriate choices of R and D, solving the generalized compact knapsack problem on the average is as…
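For concreteness, a hypothetical toy instance of the problem with R = Z_q and D = {0, 1}, solved by brute force (real parameters make this search infeasible):

```python
from itertools import product

def solve_knapsack(a, t, q):
    """Brute-force search for z in D^m (D = {0, 1}) with
    sum(a_i * z_i) = t (mod q).  Exponential in m; for
    illustration only."""
    for z in product((0, 1), repeat=len(a)):
        if sum(ai * zi for ai, zi in zip(a, z)) % q == t:
            return z
    return None

a = [13, 27, 5, 41]       # "random" ring elements (toy values)
q = 64
t = (13 + 5) % q          # plant a solution: z = (1, 0, 1, 0)
print(solve_knapsack(a, t, q))  # → (1, 0, 1, 0)
```

The hardness claim in the abstract is an average-case/worst-case connection: for suitable R and D, solving random instances like this one is as hard as worst-case lattice problems.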

- Vadim Lyubashevsky
- EUROCRYPT
- 2011

We provide an alternative method for constructing lattice-based digital signatures which does not use the “hash-and-sign” methodology of Gentry, Peikert, and Vaikuntanathan (STOC 2008). Our resulting signature scheme is secure, in the random oracle model, based on the worst-case hardness of the Õ(n)-SIVP problem in general lattices. The secret key, public…

We propose SWIFFT, a collection of compression functions that are highly parallelizable and admit very efficient implementations on modern microprocessors. The main technique underlying our functions is a novel use of the Fast Fourier Transform (FFT) to achieve “diffusion,” together with a linear combination to achieve compression and “confusion.” We…
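The “FFT for diffusion, linear combination for compression” recipe can be sketched in miniature — a toy stand-in using a complex-valued FFT and made-up parameters, not the actual SWIFFT construction (which works over Z_q with carefully chosen parameters):

```python
import cmath
import random

def fft(vec):
    """Standard recursive Cooley-Tukey FFT over the complex numbers,
    standing in for SWIFFT's transform over Z_q."""
    n = len(vec)
    if n == 1:
        return vec
    even, odd = fft(vec[0::2]), fft(vec[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def toy_compress(blocks, q=257):
    """Diffuse each input block with the FFT, then compress many
    blocks into one by a fixed random linear combination mod q."""
    random.seed(1)  # fixed multipliers play the role of the key
    out = [0.0] * len(blocks[0])
    for block in blocks:
        m = random.randrange(1, q)
        for i, coeff in enumerate(fft([float(x) for x in block])):
            out[i] += m * coeff.real
    return [round(x) % q for x in out]

digest = toy_compress([[0, 1, 2, 3, 4, 5, 6, 7],
                       [7, 6, 5, 4, 3, 2, 1, 0]])
# Two blocks compress to one block mod q; the FFT spreads any
# single-coordinate change across every output coordinate.
```

Because every FFT output depends on every input coordinate, flipping one input value changes the whole digest, which is the “diffusion” the abstract refers to.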

- Tim Güneysu, Vadim Lyubashevsky, Thomas Pöppelmann
- CHES
- 2012

Nearly all of the currently used and well-tested signature schemes (e.g. RSA or DSA) are based either on the factoring assumption or the presumed intractability of the discrete logarithm problem. Further algorithmic advances on these problems may lead to the unpleasant situation that a large number of schemes have to be replaced with alternatives. In this…

- Vadim Lyubashevsky, Chris Peikert, Oded Regev
- IACR Cryptology ePrint Archive
- 2013

Recent advances in lattice cryptography, mainly stemming from the development of ring-based primitives such as ring-LWE, have made it possible to design cryptographic schemes whose efficiency is competitive with that of more traditional number-theoretic ones, along with entirely new applications like fully homomorphic encryption. Unfortunately, realizing…

- Vadim Lyubashevsky, Daniele Micciancio
- CRYPTO
- 2009

We prove the equivalence, up to a small polynomial approximation factor √(n/log n), of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as…

- Vadim Lyubashevsky, Daniele Micciancio
- IACR Cryptology ePrint Archive
- 2008

We give a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. The construction is provably secure based on the worst-case hardness of approximating the shortest vector in such lattices within a polynomial factor, and it is also asymptotically efficient: the time complexity…

- Vadim Lyubashevsky
- APPROX-RANDOM
- 2005

In [2], Blum et al. demonstrated the first sub-exponential algorithm for learning the parity function in the presence of noise. They solved the length-n parity problem in time 2^(O(n/log n)), but it required the availability of 2^(O(n/log n)) labeled examples. As an open problem, they asked whether there exists a 2^(o(n)) algorithm for the length-n parity problem that uses only…
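The parity-with-noise problem can be made concrete with a toy sample generator (hypothetical parameters): each example pairs a random vector with its inner product with the secret mod 2, and the label is flipped with some probability.

```python
import random

def parity_sample(s, noise_rate):
    """One noisy parity example: (a, <a, s> mod 2), with the
    label flipped with probability noise_rate."""
    a = [random.randrange(2) for _ in s]
    label = sum(ai & si for ai, si in zip(a, s)) % 2
    if random.random() < noise_rate:
        label ^= 1
    return a, label

s = [1, 0, 1, 1]  # the secret parity function over {0,1}^4
a, b = parity_sample(s, noise_rate=0.1)
# The learner sees many such (a, b) pairs and must recover s;
# the question above is how few examples suffice.
```

Without noise the secret is recoverable from n independent examples by Gaussian elimination; the noise is what pushes the best known running times into the sub-exponential regime discussed above.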