The cell probe complexity of dynamic range counting

@inproceedings{Larsen2012TheCP,
  title={The cell probe complexity of dynamic range counting},
  author={Kasper Green Larsen},
  booktitle={STOC '12},
  year={2012}
}
In this paper we develop a new technique for proving lower bounds on the update time and query time of dynamic data structures in the cell probe model. With this technique, we prove the highest lower bound to date for any explicit problem, namely a lower bound of t_q = Ω((lg n / lg(w t_u))^2). Here n is the number of update operations, w the cell size, t_q the query time and t_u the update time. In the most natural setting of cell size w = Θ(lg n), this gives a lower bound of t_q = Ω((lg n / lg lg n)^2) for any…
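For context, a minimal sketch of the dynamic range counting interface in one dimension (illustrative code, not the paper's construction): a Fenwick tree supports point insertions and prefix counts in O(lg n) time per operation. The paper's Ω((lg n / lg lg n)^2) bound concerns the harder two-dimensional weighted version of this problem.

```python
class FenwickTree:
    """Binary indexed tree: dynamic prefix counts over coordinates 1..n,
    O(lg n) cell accesses per update and per query (names illustrative)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def insert(self, pos, delta=1):
        """Add `delta` points at coordinate `pos` (1-indexed)."""
        while pos <= self.n:
            self.tree[pos] += delta
            pos += pos & (-pos)  # jump to the next responsible node

    def count(self, pos):
        """Number of points with coordinate <= pos."""
        total = 0
        while pos > 0:
            total += self.tree[pos]
            pos -= pos & (-pos)  # strip the lowest set bit
        return total


ft = FenwickTree(16)
for p in (3, 5, 5, 9):
    ft.insert(p)
print(ft.count(5))  # 3 points with coordinate <= 5
```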
Crossing the logarithmic barrier for dynamic Boolean data structure lower bounds
TLDR
A new approach is introduced and used to prove an Ω(log^{1.5} n) lower bound on the operational time of a wide range of Boolean data structure problems, most notably on the query time of dynamic range counting over F_2.
New Amortized Cell-Probe Lower Bounds for Dynamic Problems
Higher Cell Probe Lower Bounds for Evaluating Polynomials
  • Kasper Green Larsen
  • Computer Science
    2012 IEEE 53rd Annual Symposium on Foundations of Computer Science
  • 2012
TLDR
The cell probe complexity of evaluating an n-degree polynomial P over a finite field F of size at least n^{1+Ω(1)} is studied to show that any static data structure for evaluating P(x), where x ∈ F, must use Ω(lg|F| / lg(Sw/(n lg|F|))) cell probes to answer a query, which is the highest static cell probe lower bound to date.
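To make the space/query trade-off in such bounds concrete, here is a hedged toy sketch (field size and polynomial chosen purely for illustration, not taken from the paper) of the two extremes the bound sits between: no precomputation with one probe per coefficient, versus a full lookup table of |F| cells with a single probe per query.

```python
# Toy prime field and polynomial (illustrative values).
P = 101                  # field F_101
coeffs = [3, 0, 7, 5]    # P(x) = 3 + 7x^2 + 5x^3 over F_101


def horner(x):
    """No extra space: one probe per coefficient (n+1 probes) per query."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc


# Other extreme: |F| cells of precomputed space, one probe per query.
table = [horner(x) for x in range(P)]

assert table[10] == horner(10)
```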
Tight Static Lower Bounds for Non-Adaptive Data Structures
TLDR
An $\Omega(\log m / \log(sw/n\log m))$ lower bound on the static cell probe complexity is proved for non-adaptive data structures that solve the fundamental dictionary problem, where $s$ denotes the space of the data structure in number of cells and $w$ is the cell size in bits.
Cell-probe lower bounds for dynamic problems via a new communication model
TLDR
A new communication model is developed to prove a data structure lower bound for the dynamic interval union problem, and the sparse set disjointness protocol of Håstad and Wigderson is used to speed up a reduction from a new kind of nondeterministic communication game, for which lower bounds are proved.
New Unconditional Hardness Results for Dynamic and Online Problems
TLDR
Improved unconditional lower bounds for matrix-vector multiplication and a version of dynamic set disjointness known as Patrascu's Multiphase Problem are given by studying the cell probe complexity of two problems of particular importance that are conjectured to be hard.
Amortized Dynamic Cell-Probe Lower Bounds from Four-Party Communication
This paper develops a new technique for proving amortized, randomized cell-probe lower bounds on dynamic data structure problems. We introduce a new randomized nondeterministic four-party communication…
3SUM Hardness in (Dynamic) Data Structures
We prove lower bounds for several (dynamic) data structure problems conditioned on the well-known conjecture that 3SUM cannot be solved in $O(n^{2-\Omega(1)})$ time. This continues a line of work…
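For reference, the 3SUM conjecture concerns the classic quadratic-time algorithm; a minimal sketch (illustrative, not from the paper) of the sort-and-two-pointers version whose $O(n^2)$ running time is conjectured to be essentially optimal:

```python
def has_three_sum(nums):
    """Classic O(n^2)-time check for indices i < j < k with
    nums[i] + nums[j] + nums[k] == 0; the 3SUM conjecture says
    no O(n^{2 - eps}) algorithm exists for any constant eps > 0."""
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # sum too small: advance the left pointer
            else:
                hi -= 1   # sum too large: retreat the right pointer
    return False


print(has_three_sum([-5, 1, 4, 2, -1]))  # True: -5 + 1 + 4 == 0
print(has_three_sum([1, 2, 3]))          # False
```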

References

Unifying the Landscape of Cell-Probe Lower Bounds
  • M. Patrascu
  • Computer Science, Mathematics
    SIAM J. Comput.
  • 2011
We show that a large fraction of the data-structure lower bounds known today in fact follow by reduction from the communication complexity of lopsided (asymmetric) set disjointness. This includes
Lower bounds for 2-dimensional range counting
TLDR
One of the most basic range problem, orthogonal range counting in two dimensions, is considered, and almost optimal bounds in the group model and the (holy grail) cell-probe model are shown.
(Data) STRUCTURES
We show that a large fraction of the data-structure lower bounds known today in fact follow by reduction from the communication complexity of lopsided (asymmetric) set disjointness! This includes
Logarithmic Lower Bounds in the Cell-Probe Model
TLDR
A new technique for proving cell-probe lower bounds on dynamic data structures is developed, which enables an amortized randomized $\Omega(\lg n)$ lower bound per operation for several data structural problems on $n$ elements, including partial sums, dynamic connectivity among disjoint paths, and several other dynamic graph problems (by simple reductions).
The cell probe complexity of dynamic data structures
TLDR
New lower and upper bounds are proved on the time per operation needed to implement solutions to some familiar dynamic data structure problems, including list representation, subset ranking, partial sums, and the set union problem.
Don't rush into a union: take time to find your roots
TLDR
A new threshold phenomenon in data structure lower bounds is presented, where slightly reduced update times lead to exploding query times: when the data structure doesn't have time to find the roots of each disjoint set (tree) during edge insertion, there is no effective way to organize the information.
Towards polynomial lower bounds for dynamic problems
TLDR
This work describes a carefully-chosen dynamic version of set disjointness (the "multiphase problem") and conjectures that it requires $n^{\Omega(1)}$ time per operation; it also gives the first nonalgebraic reduction from 3SUM, which allows 3SUM-hardness results for combinatorial problems.
Lower Bounds on Near Neighbor Search via Metric Expansion
TLDR
This work reduces the problem of proving cell probe lower bounds for near neighbor search to computing the appropriate expansion parameter, and shows a much stronger (tight) time-space tradeoff for the class of dynamic data structures that support updates to the data set and do not look up any single cell too often.
Probabilistic computations: Toward a unified measure of complexity
  • A. Yao
  • Mathematics
    18th Annual Symposium on Foundations of Computer Science (sfcs 1977)
  • 1977
TLDR
Two approaches to the study of expected running time of algorithms lead naturally to two different definitions of intrinsic complexity of a problem: the distributional complexity and the randomized complexity, respectively.