• Corpus ID: 88515621

An optimal $(\epsilon,\delta)$-approximation scheme for the mean of random variables with bounded relative variance

@article{Huber2017AnO,
  title={An optimal \$(\epsilon,\delta)\$-approximation scheme for the mean of random variables with bounded relative variance},
  author={Mark L. Huber},
  journal={arXiv: Computation},
  year={2017}
}
  • M. Huber
  • Published 5 June 2017
  • Computer Science, Mathematics
  • arXiv: Computation
Randomized approximation algorithms for many #P-complete problems (such as the partition function of a Gibbs distribution, the volume of a convex body, the permanent of a $\{0,1\}$-matrix, and many others) reduce to creating random variables $X_1,X_2,\ldots$ with finite mean $\mu$ and standard deviation $\sigma$ such that $\mu$ is the solution for the problem input, and the relative standard deviation $|\sigma/\mu| \leq c$ for known $c$. Under these circumstances, it is known that the number of… 
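
A minimal baseline sketch of such a scheme, in Python: the classical median-of-means estimator, with constants coming from Chebyshev plus Hoeffding rather than from the paper (whose contribution is an optimal-constant scheme). The names sample, c, eps, and delta are illustrative.

import math
import random
import statistics

def median_of_means(sample, c, eps, delta):
    """Baseline (eps, delta)-approximation of a positive mean mu.

    sample() draws one independent copy of X with mean mu and relative
    standard deviation sigma/mu <= c.  The constants are the standard
    Chebyshev-plus-Hoeffding ones, not the optimal ones from the paper.
    """
    # Chebyshev: a block mean of this size is within eps*mu of mu with probability >= 3/4.
    block_size = math.ceil(4 * c ** 2 / eps ** 2)
    # Hoeffding: the median of this many independent block means fails with probability <= delta.
    num_blocks = math.ceil(8 * math.log(1 / delta))
    block_means = [
        statistics.fmean(sample() for _ in range(block_size))
        for _ in range(num_blocks)
    ]
    return statistics.median(block_means)

# Example: an exponential random variable has sigma/mu = 1, so c = 1 applies.
random.seed(0)
estimate = median_of_means(lambda: random.expovariate(1 / 5.0), c=1.0, eps=0.1, delta=0.01)
# With probability at least 0.99, estimate is within 10% of the true mean 5.

The total sample count of this baseline is about $32 c^2 \epsilon^{-2} \ln(1/\delta)$; the paper targets the optimal constant in front of $c^2 \epsilon^{-2} \ln(1/\delta)$.
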
3 Citations

Robust Estimation of the Mean with Bounded Relative Standard Deviation
  • M. Huber
  • Computer Science, Mathematics
  • Springer Proceedings in Mathematics & Statistics
  • 2020
This work considers how best to implement a randomized approximation algorithm for approximating the number of satisfying assignments of a 2-SAT or DNF formula, the volume of a convex body, and the partition function of a Gibbs distribution.
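
As a concrete example of the random variables described in the abstract above (classical background, not taken from this paper): the Karp-Luby estimator for DNF counting is unbiased for the number of satisfying assignments and has relative standard deviation at most $\sqrt{m-1}$ for a formula with $m$ clauses. A hedged Python sketch, using a hypothetical encoding of each clause as a dict from variable index to required truth value:

import random

def karp_luby_sample(clauses, n_vars):
    """One draw of the Karp-Luby estimator for #DNF.

    clauses: list of dicts {variable index: required bool}; the formula is their OR.
    Returns X with E[X] equal to the number of satisfying assignments and
    E[X^2]/E[X]^2 <= len(clauses), i.e. sigma/mu <= sqrt(len(clauses) - 1).
    """
    # Clause i is satisfied by 2^(n_vars - |clause i|) assignments; total counts multiplicity.
    sizes = [2 ** (n_vars - len(cl)) for cl in clauses]
    total = sum(sizes)
    # Pick clause i proportionally to its size, then a uniform assignment satisfying it.
    i = random.choices(range(len(clauses)), weights=sizes)[0]
    assignment = {v: random.random() < 0.5 for v in range(n_vars)}
    assignment.update(clauses[i])
    # Count the pair only when i is the lowest-indexed clause the assignment satisfies.
    first = next(j for j, cl in enumerate(clauses)
                 if all(assignment[v] == val for v, val in cl.items()))
    return total if first == i else 0

Averaging such draws with a bounded-relative-variance mean estimator (taking $c = \sqrt{m}$) then gives an $(\epsilon,\delta)$-approximation of the count.
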
Halving the Bounds for the Markov, Chebyshev, and Chernoff Inequalities Using Smoothing
It is shown, through a simple smoothing using auxiliary randomness, that each of the Markov, Chebyshev, and Chernoff inequalities can be cut in half.
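
To illustrate the flavor of that result for the Markov case, here is the standard smoothing argument, reconstructed rather than quoted from the paper: for $X \geq 0$ with finite mean, $a > 0$, and $U$ uniform on $(0,1]$ independent of $X$,
$$\Pr(UX \geq a) \;=\; \mathbb{E}\!\left[\left(1 - \frac{a}{X}\right)\mathbf{1}\{X \geq a\}\right] \;\leq\; \frac{\mathbb{E}[X]}{2a},$$
since $(1 - a/x)\mathbf{1}\{x \geq a\} \leq x/(2a)$ for all $x > 0$, whereas the unsmoothed Markov inequality gives only $\Pr(X \geq a) \leq \mathbb{E}[X]/a$.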

References

Showing 1-10 of 22 references
Approximation algorithms for the normalizing constant of Gibbs distributions
  • M. Huber
  • Computer Science, Mathematics
  • 2015
This work presents a new method for approximating the partition function to a specified level of relative accuracy using a number of samples that is $O(\ln(Z(\beta))\ln(\ln(Z(\beta))))$ when $Z(0) \geq 1$.
Random walks and an $O^*(n^5)$ volume algorithm for convex bodies
This algorithm introduces three new ideas: the use of the isotropic position (or at least an approximation of it) for rounding; the separation of global obstructions and local obstructions for fast mixing; and a stepwise interlacing of rounding and sampling.
The Markov chain Monte Carlo method: an approach to approximate counting and integration
The introduction of analytical tools with the aim of permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization.
Random Generation of Combinatorial Structures from a Uniform Distribution
Counting linear extensions
We survey the problem of counting the number of linear extensions of a partially ordered set. We show that this problem is #P-complete, settling a long-standing open question. This result is …
Distributed Statistical Estimation and Rates of Convergence in Normal Approximation
It is shown that one of the key benefits of the divide-and-conquer strategy is robustness, an important characteristic for large distributed systems, and connections between performance of these distributed algorithms and the rates of convergence in normal approximation are established.
Convex Rank Tests and Semigraphoids
The methods refine existing rank tests of nonparametric statistics, such as the sign test and the runs test, and are useful for exploratory analysis of ordinal data. Of particular interest are graphical tests, which correspond both to graphical models and to graph associahedra.
A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries
A polynomial-time randomized algorithm is given for estimating the permanent of an arbitrary $n \times n$ matrix with nonnegative entries; it computes an approximation that is within an arbitrarily small specified relative error of the true value of the permanent.
Sub-Gaussian mean estimators
Estimators with sub-Gaussian behavior even for certain heavy-tailed distributions are defined, and various impossibility results for mean estimators are proved.
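
For context (one common formalization; the constants and exact form vary across the literature and are not quoted from this paper): an estimator $\widehat{\mu}_n$ built from $n$ i.i.d. samples is called $L$-sub-Gaussian over a class of distributions if, for every distribution in the class with mean $\mu$ and variance $\sigma^2$ and every $\delta \in (0,1)$,
$$\Pr\!\left(|\widehat{\mu}_n - \mu| > L\sigma\sqrt{\frac{1 + \ln(1/\delta)}{n}}\right) \;\leq\; \delta,$$
matching the deviation behavior of the empirical mean of a Gaussian sample up to the constant $L$.
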
Challenging the empirical mean and empirical variance: a deviation study
We present new M-estimators of the mean and variance of real-valued random variables, based on PAC-Bayes bounds. We analyze the non-asymptotic minimax properties of the deviations of those estimators …
…