Optimal speedup of Las Vegas algorithms

@inproceedings{Luby1993OptimalSO,
  title={Optimal speedup of Las Vegas algorithms},
  author={Michael Luby and Alistair Sinclair and David Zuckerman},
  booktitle={Proceedings of the 2nd Israel Symposium on Theory and Computing Systems},
  year={1993},
  pages={128--133}
}
Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies that simulate A as follows: run A for a fixed amount of time t₁, then run A independently for a fixed amount of time t₂, etc. The simulation stops if A completes its execution during any of the runs. Let…
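The universal schedule proposed in this paper sets the cutoffs t₁, t₂, … to the well-known Luby sequence 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, …. The following is a minimal Python sketch of that schedule and of the restart simulation the abstract describes; the function names and the deterministic "runtime oracle" stand-in for one Las Vegas run are illustrative, not from the paper.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    while (1 << k) - 1 < i:          # find smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:            # i ends a block: emit 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise recurse into the block

def restart_schedule(runtime_of_run, max_runs=10_000, scale=1):
    """Fixed-schedule restart strategy from the abstract: run the algorithm
    for t_1 steps, restart, run for t_2 steps, ... until some run finishes.
    Here t_i = scale * luby(i). runtime_of_run(i) gives the number of steps
    run i would need if left alone (a stand-in for one Las Vegas run)."""
    total = 0
    for i in range(1, max_runs + 1):
        cutoff = scale * luby(i)
        need = runtime_of_run(i)
        if need <= cutoff:           # this run finishes before being cut off
            return total + need
        total += cutoff              # aborted: pay the full cutoff, restart
    raise RuntimeError("no run finished within max_runs")
```

For example, if every run happens to need exactly 4 steps, the schedule wastes the cutoffs 1, 1, 2, 1, 1, 2 (8 steps) before the seventh run, whose cutoff is 4, completes, for a total of 12 steps.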
Restart Strategies in a Continuous Setting
TLDR
This work obtains an optimal universal strategy on a restricted class of continuous probability distributions and shows that there are no (asymptotically) optimal strategies in a continuous setting.
An Empirical Study of a New Restart Strategy for Randomized Backtrack Search
TLDR
An improved restart strategy for randomized back track search is proposed and the performance of this technique is compared to a number of heuristic and stochastic search techniques, including RGR, using the cumulative distribution of the solutions.
Using online algorithms to solve NP-hard problems more efficiently in practice
TLDR
It is shown that the online algorithm and its offline counterpart can be used to improve the performance of state-of-the-art solvers in a number of problem domains, including Boolean satisfiability, zero-one integer programming, constraint satisfaction, and theorem proving.
Optimal Schedules for Parallelizing Anytime Algorithms: The Case of Shared Resources
TLDR
This paper presents a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved, and formally analyzes the case where the processes share resources (a single-processor model), and provides an algorithm for optimal scheduling.
Runtime Distributions and Criteria for Restarts
Randomized algorithms sometimes employ a restart strategy. After a certain number of steps, the current computation is aborted and restarted with a new, independent random seed. In some cases, this
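To see why such restarts can pay off, consider a toy runtime model (entirely illustrative, not taken from the paper): each independent run finishes quickly with some probability p and otherwise runs for a long time. With a cutoff, the number of attempts is geometric, so the expected total time can be computed in closed form.

```python
def expected_time_with_restart(p, t_succ, t_cutoff):
    """Expected total runtime under a fixed-cutoff restart strategy.

    Toy model: with probability p a run finishes in t_succ <= t_cutoff steps;
    otherwise it would exceed the cutoff and is aborted after t_cutoff steps.
    Restarts use fresh, independent seeds, so the number of failed attempts
    before the first success is geometric with mean (1 - p) / p.
    """
    return t_succ + (1 - p) / p * t_cutoff
```

With p = 0.1, t_succ = 1 and a cutoff of 1, the expected time is 1 + 9·1 = 10 steps, whereas letting each run finish would cost 0.1·1 + 0.9·T for whatever long time T the slow runs take.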
LEAPSANDBOUNDS: A Method for Approximately Optimal Algorithm Configuration
TLDR
It is proved that the capped expected runtime of the configuration returned by LeapsAndBounds is close to the optimal expected runtime, while the algorithm's running time is near-optimal.
Theoretical results on bet-and-run as an initialisation strategy
TLDR
It is shown that bet-and-run strategies with non-trivial k and t1 are necessary to find the global optimum efficiently and that the choice of t1 is linked to properties of the function.
Strategies for Solving SAT in Grids by Randomized Search
TLDR
A novel strategy for using a grid to solve collections of hard instances of the propositional satisfiability problem (SAT) with a randomized SAT solver, aiming to decrease the overall solution time by applying an alternating distribution schedule.
Estimating parallel runtimes for randomized algorithms in constraint solving
TLDR
A framework is proposed to estimate the parallel performance of a given algorithm by analyzing the runtime behavior of its sequential version, approximating the runtime distribution of the sequential process with statistical methods.
Improving the run time of the (1 + 1) evolutionary algorithm with Luby sequences
TLDR
This paper explores the benefits of combining the simple (1 + 1) Evolutionary Algorithm with the Luby Universal Strategy, yielding the (1 + 1) EA_u, a meta-heuristic that does not require parameter tuning and serves as an Efficient Polynomial-time Approximation Scheme for the Partition Problem.

References

OR-Parallel Theorem Proving with Random Competition
TLDR
This work proves the high efficiency (compared with other parallel theorem provers) of random competition on highly parallel architectures with thousands of processors, on which no communication between the processors is necessary at run-time.