Worst-Case Vs. Algorithmic Average-Case Complexity in the Polynomial-Time Hierarchy
@inproceedings{Gutfreund2006WorstCaseVA,
  title     = {Worst-Case Vs. Algorithmic Average-Case Complexity in the Polynomial-Time Hierarchy},
  author    = {Dan Gutfreund},
  booktitle = {APPROX-RANDOM},
  year      = {2006}
}
We show that for every integer k > 1, if Σ_k, the k-th level of the polynomial-time hierarchy, is worst-case hard for probabilistic polynomial-time algorithms, then there is a language L ∈ Σ_k such that for every probabilistic polynomial-time algorithm that attempts to decide it, there is a samplable distribution over the instances of L, on which the algorithm errs with probability at least 1/2 − 1/poly(n) (where the probability is over the choice of instances and the randomness of the algorithm). In…
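The abstract's main theorem can be set down formally as follows. This is a restatement under the reading that "worst-case hard for probabilistic polynomial-time algorithms" means Σ_k ⊄ BPP; the quantifier order (algorithm first, then distribution) is taken directly from the text above.

```latex
\begin{theorem}[restated from the abstract]
For every integer $k > 1$: if $\Sigma_k \not\subseteq \mathrm{BPP}$, then there is a
language $L \in \Sigma_k$ such that for every probabilistic polynomial-time
algorithm $A$ there exists a samplable distribution ensemble $\{D_n\}$ over the
instances of $L$ with
\[
  \Pr_{x \sim D_n,\ \text{coins of } A}\bigl[\, A(x) \neq L(x) \,\bigr]
  \;\ge\; \frac{1}{2} - \frac{1}{\mathrm{poly}(n)} .
\]
\end{theorem}
```

Note that the hard distribution is allowed to depend on the algorithm, which is what distinguishes this "algorithmic" notion of average-case hardness from hardness under a single fixed distribution.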
7 Citations
If NP Languages are Hard on the Worst-Case, Then it is Easy to Find Their Hard Instances
- 20th Annual IEEE Conference on Computational Complexity (CCC'05), 2005
It is shown that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case).
Improving on Gutfreund, Shaltiel, and Ta-Shma's Paper "If NP Languages Are Hard on the Worst-Case, Then It Is Easy to Find Their Hard Instances"
- CSR, 2011
This paper shows how to increase the error probability to 1/3 − ε, approaching the maximal possible 1/2 − ε, on a random formula chosen with respect to that distribution.
Strong Hardness Preserving Reduction from a P-Samplable Distribution to the Uniform Distribution for NP-Search Problems
- Electron. Colloquium Comput. Complex., 2009
The Impagliazzo-Levin reduction is revisited, showing that average-case NP-hardness under any polynomial-time samplable distribution is essentially equivalent to average-case NP-hardness under the uniform distribution.
Indistinguishability by Adaptive Procedures with Advice, and Lower Bounds on Hardness Amplification Proofs
- 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), 2018
The results prove 15-year-old conjectures by Viola, improve on three incomparable previous works, and prove the lower bound q=Ω(log(1/δ)/ε) for "error-less" hardness amplification proofs, and for direct-product lemmas.
Is it possible to improve Yao's XOR lemma using reductions that exploit the efficiency of their oracle?
- Electron. Colloquium Comput. Complex., 2020
The technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof, with an efficient one that is based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement.
Black-Box Uselessness: Composing Separations in Cryptography
- IACR Cryptol. ePrint Arch., 2021
It is shown that known lower bounds for assumptions behind black-box constructions of indistinguishability obfuscation can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO.
References
Showing 1-10 of 32 references
If NP Languages are Hard on the Worst-Case, Then it is Easy to Find Their Hard Instances
- 20th Annual IEEE Conference on Computational Complexity (CCC'05), 2005
It is shown that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case).
On Worst-Case to Average-Case Reductions for NP Problems
- Electron. Colloquium Comput. Complex., 2005
We show that if an NP-complete problem has a non-adaptive self-corrector with respect to a distribution that can be sampled then coNP is contained in AM/poly and the polynomial hierarchy collapses to…
On uniform amplification of hardness in NP
- STOC '05, 2005
It is proved that if every problem in NP admits an efficient uniform algorithm that (averaged over random inputs and over the internal coin tosses of the algorithm) succeeds with probability at least 1/2 + 1/(log n)^α, then for every problem in NP there is an efficient uniform algorithm that succeeds with probability at least 1 − 1/poly(n).
On the theory of average case complexity
- STOC '89, 1989
The present authors widen the scope to other basic questions in computational complexity, including the equivalence of search and decision problems in the context of average-case complexity and an initial analysis of the structure of distributional-NP under reductions that preserve average polynomial time.
Average Case Complete Problems
- SIAM J. Comput., 1986
It is shown in [1] that the Tiling problem with the uniform distribution on instances has no polynomial "on average" algorithm, unless every NP problem with every simple probability distribution has one.
Hard-core distributions for somewhat hard problems
- Proceedings of IEEE 36th Annual Foundations of Computer Science, 1995
It is shown that for any decision problem that cannot be 1 − δ approximated by circuits of a given size, there is a specific "hard-core" set of inputs which is at least a δ fraction of all inputs and on which no circuit of a slightly smaller size can get even a small advantage over a random guess.
Easiness assumptions and hardness tests: trading time for zero error
- Proceedings 15th Annual IEEE Conference on Computational Complexity, 2000
It is proved that every RP algorithm can be simulated by a zero-error probabilistic algorithm that appears correct infinitely often, and that if ZPP is somewhat easy, then RP is subexponentially easy in the same uniform setting.
New connections between derandomization, worst-case complexity and average-case complexity
- Electron. Colloquium Comput. Complex., 2006
It is shown that a mild derandomization assumption together with the worst-case hardness of NP implies the average-case hardness of a language in non-deterministic quasi-polynomial time, and that black-box techniques cannot prove such results.
Hardness vs. randomness within alternating time
- 18th IEEE Annual Conference on Computational Complexity, 2003
A tight lower bound is proved on black-box worst-case hardness amplification, the problem of producing an average-case hard function starting from a worst-case hard one, by showing that constant-depth circuits cannot compute extractors and list-decodable codes.
A personal view of average-case complexity
- Proceedings of Structure in Complexity Theory, Tenth Annual IEEE Conference, 1995
The paper attempts to summarize the state of knowledge in this area, including some "folklore" results that have not explicitly appeared in print, and tries to standardize and unify definitions.