# Lower bounds for randomized mutual exclusion

@article{Kushilevitz1993LowerBF, title={Lower bounds for randomized mutual exclusion}, author={Eyal Kushilevitz and Y. Mansour and Michael O. Rabin and David Zuckerman}, journal={SIAM J. Comput.}, year={1993}, volume={27}, pages={1550-1563} }

We establish, for the first time, lower bounds for randomized mutual exclusion algorithms (with a read-modify-write operation). Our main result is that a constant-size shared variable cannot guarantee strong fairness, even if randomization is allowed. In fact, we prove a lower bound of Ω(log log n) bits on the size of the shared variable, which is also tight. We investigate weaker fairness conditions and derive tight (upper and lower) bounds for them as well. Surprisingly, it turns out that…
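To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of mutual exclusion built on an atomic read-modify-write operation over a shared variable. Here the RMW primitive is simulated with a lock; the `RMWVariable` and `worker` names are illustrative. Note that this one-bit test-and-set scheme guarantees mutual exclusion but no fairness, which is precisely the regime the paper's lower bound addresses: guaranteeing strong fairness requires Ω(log log n) bits.

```python
import threading

class RMWVariable:
    """A shared variable with an atomic read-modify-write operation,
    simulated here with a lock (on real hardware this would be a
    primitive such as test-and-set or compare-and-swap)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def rmw(self, func):
        # Atomically: read the value, write func(value), return the old value.
        with self._lock:
            old = self._value
            self._value = func(old)
            return old

def worker(var, counter, iterations):
    for _ in range(iterations):
        # Test-and-set entry: only the caller that sees old value 0
        # enters the critical section. No fairness guarantee: a process
        # may lose this race arbitrarily often.
        while var.rmw(lambda v: 1) != 0:
            pass
        counter[0] += 1          # critical section
        var.rmw(lambda v: 0)     # exit: reset the shared variable

shared = RMWVariable(0)
count = [0]
threads = [threading.Thread(target=worker, args=(shared, count, 100))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count[0])  # 400: the unsynchronized-looking increment is race-free
```

Because every increment happens inside the critical section, all 400 increments survive; without mutual exclusion the lost-update race could make the final count smaller.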

## 13 Citations

Probabilistic Indistinguishability and the Quality of Validity in Byzantine Agreement

- Computer Science, DISC
- 2021

A tight bound on the probability of honest parties deciding on a possibly bogus value is provided and it is proved that, in a precise sense, no algorithm can do better.

Hundreds of impossibility results for distributed computing

- Computer Science, Distributed Computing
- 2003

We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered…

On the space complexity of randomized synchronization

- Computer Science, PODC '93
- 1993

It is shown that, using historyless objects, Ω(√n) object instances are necessary to solve n-process consensus, and this lower bound holds even if the objects have unbounded size and the termination requirement is nondeterministic solo termination, a property strictly weaker than randomized wait-freedom.

On randomization in sequential and distributed algorithms

- Computer Science, CSUR
- 1994

This survey presents five techniques that have been widely used in the design of randomized algorithms, illustrated using 12 randomized algorithms that span a wide range of applications, including primality testing, interactive probabilistic proof systems, dining philosophers, and Byzantine agreement.

Optimal Randomized Scheduling by Replacement

- Computer Science, J. Comb. Optim.
- 1998

The framework of this work combines an absolute performance measure for protocols with so-called adaptive online adversaries, and makes explicit how the protocol and the adversary jointly determine the probability distribution underlying the analysis, a very general problem.

Proving Lower Bounds and Formalizing Knowledge in Randomized Computing: A General Randomized Model

- Computer Science, Mathematics
- 2008

A general model for randomized distributed computing is presented that makes it possible to model precisely the notion of knowledge and supports formal proofs of probabilistic impossibility.

Shared-memory mutual exclusion: major research trends since 1986

- Computer Science, Distributed Computing
- 2003

This paper surveys major research trends in work on shared-memory mutual exclusion since 1986, with a focus on algorithms.

On Lotteries with Unique Winners

- Economics, Mathematics, SIAM J. Discret. Math.
- 1995

Lotteries with the unique maximum property and the unique winner property are considered. Tight lower bounds are proven on the domain size of such lotteries.

A Complete Bibliography of Publications in the ACM Symposia on Theory of Computing (STOC) for 1960-1969

- Computer Science
- 2001


## References


Randomized mutual exclusion algorithms revisited

- Mathematics, PODC '92
- 1992

Randomization yields simple algorithms for mutual exclusion with bounded waiting, employing a shared variable of considerably smaller size than the lower bound established in [1] for deterministic algorithms.

Proving probabilistic correctness statements: the case of Rabin's algorithm for mutual exclusion

- Computer Science, PODC '92
- 1992

This paper presents a general methodology to prove correctness statements of randomized distributed algorithms by a series of refinements, which terminate in a statement independent of the schedule.

Another advantage of free choice (Extended Abstract): Completely asynchronous agreement protocols

- Mathematics, PODC '83
- 1983

This work exhibits a probabilistic solution to the completely asynchronous agreement problem, which guarantees that as long as a majority of the processes continues to operate, a decision will be made (Theorem 1).

N-Process Mutual Exclusion with Bounded Waiting by 4 log₂ N-Valued Shared Variable

- Computer Science, J. Comput. Syst. Sci.
- 1982

On processor coordination using asynchronous hardware

- Computer Science, PODC '87
- 1987

It is shown that the coordination problem cannot be solved by means of a deterministic protocol even if the system consists of only two processors, and the impossibility result holds for the most powerful type of shared atomic registers and does not assume symmetric protocols.

A Lower Bound for the Time to Assure Interactive Consistency

- Computer Science, Inf. Process. Lett.
- 1982

Probabilistic computations: Toward a unified measure of complexity

- Mathematics, 18th Annual Symposium on Foundations of Computer Science (SFCS 1977)
- 1977

Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem: the distributional complexity and the randomized complexity, respectively.

Data Requirements for Implementation of N-Process Mutual Exclusion Using a Single Shared Variable

- Computer Science, JACM
- 1982

An analysis is made of the shared memory requirements for implementing mutual exclusion of N asynchronous parallel processes in a model where the only primitive communication mechanism is a general…

Optimal algorithms for Byzantine agreement

- Computer Science, STOC '88
- 1988

For both synchronous and asynchronous networks whose lines do not guarantee private communication, the authors show that cryptography can be used to obtain algorithms optimal in both fault tolerance and running time against computationally bounded adversaries.