We revisit the problem of generating a ‘hard’ random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure to generate public/secret key pairs. In these applications, a shorter basis corresponds to milder underlying complexity assumptions and smaller…
We study the design of cryptographic primitives resilient to key-leakage attacks, where an attacker can repeatedly and adaptively learn information about the secret key, subject only to the constraint that the overall amount of such information is bounded by some parameter ℓ. We construct a variety of leakage-resilient public-key systems, including the first…
We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial “key leakage” attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of “leakage” is bounded by at most ℓ…
We develop new theoretical tools for proving lower bounds on the (amortized) complexity of certain functions in models of parallel computation. We apply these tools to construct a class of functions with high amortized memory complexity in the *parallel* Random Oracle Model (pROM), a variant of the standard ROM allowing for batches of *simultaneous* queries.
A memory-hard function (MHF) f is equipped with a space-cost parameter σ and a time-cost parameter τ such that repeatedly computing f_{σ,τ} on an application-specific integrated circuit (ASIC) is not economically advantageous relative to a general-purpose computer. Technically, we would like any (generalized) circuit for evaluating an iMHF f_{σ,τ} to have area × time (AT)…
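One hedged reading of this design goal, stated under the common convention that the honest evaluator uses σ memory for τ steps (our paraphrase, not the paper's exact bound):

```latex
% Informal AT design goal for an iMHF f_{\sigma,\tau} (hedged paraphrase):
% any (generalized) circuit C evaluating f_{\sigma,\tau} should satisfy
\mathrm{AT}(C) \;=\; \Omega(\sigma \cdot \tau),
% i.e. the area-time cost of ASIC evaluation is, up to constants, no smaller
% than that of a general-purpose machine holding \sigma memory for \tau steps.
```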
This survey paper studies recent advances in the field of Leakage-Resilient Cryptography. This booming area is concerned with the design of cryptographic primitives resistant to arbitrary side-channel attacks, where an attacker can repeatedly and adaptively learn information about the secret key, subject only to the constraint that the overall amount of such…
We investigate lower bounds, in terms of time and memory, on the parallel complexity of an adversary A computing labels of randomly selected challenge nodes in directed acyclic graphs, where the w-bit label of a node is the hash h (modelled as a random oracle with w-bit output) of the labels of its parents. Specific instances of this general problem underlie…
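Since the labeling rule is the technical core of this abstract, a minimal sketch may help. SHA-256 stands in for the random oracle h, and the dictionary encoding of the DAG (nodes numbered in topological order) is our own illustrative convention, not the paper's:

```python
import hashlib
from typing import Dict, List

def labels(parents: Dict[int, List[int]], w_bytes: int = 32) -> Dict[int, bytes]:
    """Compute the label of every node in topological order.
    A node's label is the hash (SHA-256 here, standing in for the random
    oracle h) of its own index and the labels of its parents."""
    lab: Dict[int, bytes] = {}
    for v in sorted(parents):  # assumes nodes are numbered topologically
        h = hashlib.sha256()
        h.update(v.to_bytes(8, "big"))
        for p in parents[v]:
            h.update(lab[p])
        lab[v] = h.digest()[:w_bytes]
    return lab

# Example: the path graph 0 -> 1 -> 2 (node 0 is a source with no parents).
print(labels({0: [], 1: [0], 2: [1]})[2].hex())
```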
The algorithm Argon2i-B of Biryukov, Dinu and Khovratovich is currently being considered by the IRTF (Internet Research Task Force) as a new de facto standard for password hashing. An older version (Argon2i-A) of the same algorithm was chosen as the winner of the recent Password Hashing Competition. An important competitor to Argon2i-B is the recently…
The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT ’12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial.
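To make the deterministic rounding concrete, here is a toy sketch of generating a single LWR sample; the moduli q, p and dimension n are illustrative only and far from the super-polynomial regime the abstract discusses:

```python
import secrets

q, p, n = 2**16, 2**8, 4  # toy moduli and dimension (illustrative only)

def lwr_sample(s):
    """One LWR sample (a, b): b is the deterministic rounding of <a, s> mod q
    from Z_q down to Z_p, replacing the random error term of LWE."""
    a = [secrets.randbelow(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = ((inner * p + q // 2) // q) % p  # round_p(x) = floor((p/q)*x + 1/2) mod p
    return a, b

secret = [secrets.randbelow(q) for _ in range(n)]
print(lwr_sample(secret))
```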
Composable notions of incoercibility aim to forbid a coercer from using anything beyond the coerced parties’ inputs and outputs to catch them when they try to deceive him. Existing definitions are restricted to weak coercion types and/or are not universally composable. Furthermore, they often make overly strong assumptions about the knowledge of coerced…