Even faster integer multiplication

@article{Harvey2016EvenFI,
  title={Even faster integer multiplication},
  author={David Harvey and Joris van der Hoeven and Gr{\'e}goire Lecerf},
  journal={J. Complex.},
  year={2016},
  volume={36},
  pages={1--30}
}

Fast arithmetic for faster integer multiplication

TLDR
This work obtains the same result K = 4 using simple modular arithmetic as a building block and a careful complexity analysis, based on a conjecture about the existence of sufficiently many primes of a certain form.
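
As a concrete illustration of modular arithmetic serving as the building block of an FFT-based integer multiplication (a minimal sketch only, not the K = 4 algorithm of the paper; the prime 998244353 = 119·2^23 + 1 and the generator 3 are standard number-theoretic-transform parameters chosen here for convenience):

```python
# Minimal sketch: multiply non-negative integers with a number-theoretic
# transform (NTT) over F_P, i.e. using only modular arithmetic -- no floating
# point. Illustrative only; this is not the algorithm of the paper.

P = 119 * 2**23 + 1   # NTT-friendly prime 998244353; 3 is a primitive root mod P
G = 3

def ntt(a, invert=False):
    """In-place iterative radix-2 NTT of a list whose length is a power of two."""
    n = len(a)
    j = 0
    for i in range(1, n):                      # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)       # primitive length-th root of unity
        if invert:
            w = pow(w, P - 2, P)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k] = (u + v) % P
                a[k + length // 2] = (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P

def multiply(x, y):
    """Multiply non-negative x, y by convolving their base-256 digits with the NTT.

    Base 256 keeps every convolution coefficient below P for operands up to
    roughly 15000 digits, so nothing wraps around mod P.
    """
    if x == 0 or y == 0:
        return 0
    xd = list(x.to_bytes((x.bit_length() + 7) // 8, "little"))
    yd = list(y.to_bytes((y.bit_length() + 7) // 8, "little"))
    n = 1
    while n < len(xd) + len(yd):
        n <<= 1
    fa, fb = xd + [0] * (n - len(xd)), yd + [0] * (n - len(yd))
    ntt(fa)
    ntt(fb)
    fc = [u * v % P for u, v in zip(fa, fb)]
    ntt(fc, invert=True)
    result, carry = 0, 0                        # carry propagation over base-256 digits
    for i, c in enumerate(fc):
        carry += c
        result += (carry & 0xFF) << (8 * i)
        carry >>= 8
    return result + (carry << (8 * len(fc)))

import random
a, b = random.getrandbits(4000), random.getrandbits(4000)
assert multiply(a, b) == a * b
```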

Fast integer multiplication using generalized Fermat primes

TLDR
An alternative algorithm, which relies on arithmetic modulo generalized Fermat primes, is used to obtain conjecturally the same result K = 4 via a careful complexity analysis in the deterministic multitape Turing model.

Faster Polynomial Multiplication over Finite Fields

TLDR
This work establishes the bound M_p(n) = O(n log n · 8^{log* n} · log p), where log* n = min{k ∈ ℕ : log ∘ ⋯ ∘ log (k times) of n ≤ 1} denotes the iterated logarithm.
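
The iterated logarithm in this bound grows extraordinarily slowly; a small sketch of the definition quoted above (using base-2 logarithms here; the choice of base only shifts the value by a small constant):

```python
import math

def log_star(n):
    """log* n: the least k such that applying log2 k times to n gives a value <= 1.

    For astronomically large integers one would first pass to the bit length,
    but plain floats suffice for this illustration.
    """
    k, x = 0, float(n)
    while x > 1.0:
        x = math.log2(x)
        k += 1
    return k

assert log_star(2) == 1
assert log_star(16) == 3          # 16 -> 4 -> 2 -> 1
assert log_star(2**1000) == 5     # even huge inputs give tiny values
```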

Faster polynomial multiplication over finite fields

TLDR
For the case where R is the finite field F_p = Z/pZ for some prime p, the standard bit complexity model based on deterministic multitape Turing machines is more realistic, as it takes the dependence on p into account.

A babystep-giantstep method for faster deterministic integer factorization

TLDR
This paper combines Strassen's approach with a babystep-giantstep method to improve the currently best known bound by a superpolynomial factor.
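
For readers unfamiliar with the babystep-giantstep paradigm mentioned here, the sketch below shows it in its textbook form for discrete logarithms; this is only the generic square-root-time step-splitting trick, not the deterministic factorization algorithm of the paper.

```python
import math

def bsgs_discrete_log(g, h, p):
    """Solve g^x = h (mod p) for prime p with gcd(g, p) = 1, or return None.

    Classical baby-step/giant-step search: O(sqrt(p)) group operations and
    storage. Illustrates the paradigm only, not the paper's algorithm.
    """
    m = math.isqrt(p) + 1
    baby = {}                         # baby steps: g^j for j = 0 .. m-1
    e = 1
    for j in range(m):
        baby.setdefault(e, j)
        e = e * g % p
    factor = pow(g, -m, p)            # (g^-1)^m, Python 3.8+ modular inverse
    gamma = h % p                     # giant steps: h * (g^-m)^i
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

# 3^4 = 81 = 13 (mod 17)
assert bsgs_discrete_log(3, 13, 17) == 4
```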

Faster integer multiplication using short lattice vectors

TLDR
It is proved that n-bit integers may be multiplied in O(n log n · 4^{log* n}) bit operations; the algorithm depends in an essential way on Minkowski's theorem concerning lattice vectors in symmetric convex sets.

Dirichlet's proof of the three-square theorem: An algorithmic perspective

TLDR
It is explained how to turn Dirichlet's proof of the Gauss–Legendre three-square theorem into an algorithm; if one assumes the Extended Riemann Hypothesis (ERH), there is a randomized algorithm for expressing n = x² + y² + z² whose expected number of bit operations is O((lg n)(lg lg n)^{-1} · M(lg n)).
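
For illustration of the statement only (not Dirichlet's argument, nor the algorithm of the paper), a brute-force search confirms the Gauss–Legendre criterion that n = x² + y² + z² is solvable exactly when n is not of the form 4^a(8b + 7):

```python
import math

def three_squares_naive(n):
    """Return (x, y, z) with x^2 + y^2 + z^2 = n and x <= y <= z, or None.

    Brute-force search (exponential in the bit length of n); intended only to
    illustrate the three-square theorem, not as a practical algorithm.
    """
    for x in range(math.isqrt(n) + 1):
        for y in range(x, math.isqrt(n - x * x) + 1):
            r = n - x * x - y * y
            z = math.isqrt(r)
            if z * z == r and z >= y:
                return (x, y, z)
    return None

def excluded(n):
    """True iff n has the form 4^a (8b + 7), the excluded case of the theorem."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

for n in range(1, 200):
    rep = three_squares_naive(n)
    assert (rep is None) == excluded(n)
    if rep:
        assert sum(c * c for c in rep) == n
```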

Efficient Big Integer Multiplication in Cryptography

TLDR
This paper determines multiplication complexities by taking into account the cost of single-word multiplication, single-word addition and double-word addition, and presents the best multiplication algorithm complexities for NIST primes on different implementation platforms.

Polynomial multiplication over finite fields in time O(n log n)

Assuming a widely believed hypothesis concerning the least prime in an arithmetic progression, we show that two n-bit integers can be multiplied in time O(n log n) on a Turing machine with a
...

References

Showing 1–10 of 86 references

Fast integer multiplication using modular arithmetic

TLDR
An algorithm for multiplying two N-bit integers that improves the O(N · log N · log log N) algorithm by Schönhage–Strassen and can be viewed as a p-adic version of Fürer's algorithm.

Faster Polynomial Multiplication over Finite Fields

TLDR
This work establishes the bound M_p(n) = O(n log n · 8^{log* n} · log p), where log* n = min{k ∈ ℕ : log ∘ ⋯ ∘ log (k times) of n ≤ 1} denotes the iterated logarithm.

Faster integer multiplication

TLDR
A major step towards closing the gap from above is made by presenting an algorithm running in time n log n · 2^{O(log* n)}, both for Boolean circuits and for multitape Turing machines; it also has consequences for other models of computation.

Faster polynomial multiplication over finite fields

TLDR
For the case where R is the finite field F_p = Z/pZ for some prime p, the standard bit complexity model based on deterministic multitape Turing machines is more realistic, as it takes the dependence on p into account.

The Complexity of Computations

TLDR
This chapter provides a survey of the main concepts and results of computational complexity theory, starting with basic notions such as time and space complexity, the complexity classes P and NP, and Boolean circuits, together with two related ways to make computations more efficient.

Fast Polynomial Multiplication over F_{2^60}

TLDR
This paper shows how central ideas of the recent asymptotically fast algorithms turn out to be of practical interest for multiplication of polynomials over finite fields of characteristic two, and presents a Mathemagix implementation that outperforms existing implementations for large degrees.

How Fast Can We Multiply Large Integers on an Actual Computer?

TLDR
Two complexity measures are provided that can be used to measure the running time of algorithms for multiplying long integers; they do not rank the well-known multiplication algorithms in the same way as the Turing machine model.

Complexity Of Computations

S. Winograd · ACM Annual Conference · 1978
TLDR
Construction of algorithms is a time-honored mathematical activity, with the whole field of Numerical Analysis devoted to finding a variety of algorithms for numerical integration of differential equations.

Discrete weighted transforms and large-integer arithmetic

TLDR
The concept of Discrete Weighted Transforms (DWTs) is introduced, which substantially improves the speed of multiplication by obviating costly zero-padding of digits.
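
The weighting idea can be seen in miniature in the negacyclic (right-angle) case: multiplying the inputs by powers of a 2N-th root of unity turns a plain length-N cyclic convolution into a negacyclic one, so no zero-padding to length 2N is needed. A floating-point sketch of that trick (using numpy's complex FFT rather than the paper's arithmetic):

```python
import numpy as np

def negacyclic_convolution_dwt(a, b):
    """Negacyclic convolution of two integer sequences of length N via a weighted FFT.

    Weighting by psi^k with psi = exp(i*pi/N), a primitive 2N-th root of unity,
    makes the ordinary length-N cyclic convolution compute the negacyclic
    product -- no zero-padding. Floating-point sketch of the DWT idea only.
    """
    n = len(a)
    w = np.exp(1j * np.pi * np.arange(n) / n)            # weights psi^k
    fa = np.fft.fft(np.asarray(a, dtype=complex) * w)
    fb = np.fft.fft(np.asarray(b, dtype=complex) * w)
    c = np.fft.ifft(fa * fb) / w                         # unweight the cyclic product
    return np.rint(c.real).astype(int).tolist()

def negacyclic_convolution_direct(a, b):
    """Direct O(N^2) negacyclic convolution: terms wrapping past N pick up a minus sign."""
    n = len(a)
    out = [0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] += (1 if i + j < n else -1) * a[i] * b[j]
    return out

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
assert negacyclic_convolution_dwt(a, b) == negacyclic_convolution_direct(a, b)
```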

The use of finite fields to compute convolutions

TLDR
If q is a Mersenne prime, one can utilize the fast Fourier transform (FFT) algorithm to yield a fast convolution without the usual roundoff problem of complex numbers.
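
A tiny sketch of the Mersenne-prime idea: modulo q = 2^13 − 1 = 8191 the element 2 is a primitive 13th root of unity (since 2^13 ≡ 1), so a length-13 cyclic convolution can be computed by an exact number-theoretic transform in pure integer arithmetic, with no roundoff at all. The transform below is evaluated directly in O(N²) just to demonstrate exactness, not speed.

```python
import random

Q = 2**13 - 1    # Mersenne prime 8191
N = 13           # 2^13 = 8192 = 1 (mod Q), and 13 is prime, so 2 has order 13
W = 2            # primitive N-th root of unity in F_Q

def dft(a, root):
    """Direct length-N number-theoretic DFT over F_Q (exact, O(N^2))."""
    return [sum(a[j] * pow(root, i * j, Q) for j in range(N)) % Q for i in range(N)]

def cyclic_convolution_ntt(a, b):
    """Length-N cyclic convolution mod Q via the convolution theorem over F_Q."""
    fc = [x * y % Q for x, y in zip(dft(a, W), dft(b, W))]
    w_inv = pow(W, Q - 2, Q)                  # inverse root
    n_inv = pow(N, Q - 2, Q)                  # 1/N mod Q
    return [v * n_inv % Q for v in dft(fc, w_inv)]

def cyclic_convolution_direct(a, b):
    out = [0] * N
    for i in range(N):
        for j in range(N):
            out[(i + j) % N] = (out[(i + j) % N] + a[i] * b[j]) % Q
    return out

a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
assert cyclic_convolution_ntt(a, b) == cyclic_convolution_direct(a, b)
```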
...