Baris Aydinlioglu

• computational complexity
• 2012
In several settings, derandomization is known to follow from circuit lower bounds that themselves are equivalent to the existence of pseudorandom generators. This leaves open the question of whether derandomization implies the circuit lower bounds that are known to imply it, i.e., whether the ability to derandomize in any way implies the ability to do so in …
• Electronic Colloquium on Computational Complexity
• 2010
We present an alternate proof of the recent result by Gutfreund and Kawachi that derandomizing Arthur-Merlin games into P implies linear-exponential circuit lower bounds for E. Our proof is simpler and yields stronger results. In particular, consider the promise-AM problem of distinguishing between the case where a given Boolean circuit C accepts at least a …
In this framework, a learning algorithm has off-line access to what we can call a “training set”. This set consists of 〈element, value〉 pairs, where each element belongs to the domain and value is the concept evaluated on that element. We say that the algorithm can learn the concept if, when we execute it on the training set, it outputs a hypothesis that is …
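The framework above can be sketched concretely. In this hypothetical example (not from the text), the concept class is threshold functions over the integers, the training set is a list of 〈element, value〉 pairs, and the learner outputs a hypothesis consistent with the training set; the names `learn_threshold` and `concept` are illustrative only.

```python
# Minimal sketch of the learning framework described above.
# Hypothetical concept class: threshold functions c(x) = (x >= t).

def learn_threshold(training_set):
    """Given <element, value> pairs labeled by some threshold concept,
    output a threshold consistent with the training set."""
    positives = [x for x, v in training_set if v]
    # The smallest positive example is a consistent guess for the threshold.
    return min(positives) if positives else float("inf")

# The hidden concept: x >= 5
concept = lambda x: x >= 5
training_set = [(x, concept(x)) for x in (1, 3, 5, 7, 9)]

t = learn_threshold(training_set)
hypothesis = lambda x: x >= t

# The hypothesis agrees with the concept on every training example.
assert all(hypothesis(x) == v for x, v in training_set)
```

On unseen elements the hypothesis may still err (e.g. between the largest negative and smallest positive example), which is why learning guarantees are usually stated over a distribution rather than pointwise.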
• Las Vegas Algorithms: These refer to the randomized algorithms that always come up with a correct answer. Their “expected” running time is polynomial in the size of their input, which means that the average running time over all possible coin tosses is polynomial. In the worst case, however, a Las Vegas algorithm may take exponentially long. One …
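A toy Las Vegas algorithm illustrating the point above (this example is not from the text): probe random positions of a list until one containing the target is found. The answer, when returned, is always verified and hence correct; only the running time is random.

```python
import random

def las_vegas_find(xs, target):
    """Return an index i with xs[i] == target, assuming target occurs in xs.
    Always correct when it halts; the number of probes is random
    (expected n/k probes when target occupies k of the n cells)."""
    while True:
        i = random.randrange(len(xs))
        if xs[i] == target:   # verify before answering: the output is never wrong
            return i

xs = [0, 1] * 8               # the target 1 occupies half the cells
i = las_vegas_find(xs, 1)
assert xs[i] == 1             # correctness holds on every run
```

Note the contrast with a Monte Carlo algorithm, which would bound the running time but allow a small probability of a wrong answer.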
the inequality follows from Hölder’s inequality: E[fg] ≤ ‖f‖_p ‖g‖_q, if 1/p + 1/q = 1 with p, q ≥ 1. If α = ±1 then (1) fails unless p = q or f is constant in absolute value. This follows because (T_{±1}f)(x) = f(±x), where −x denotes x with all its bits flipped, and because the only functions f for which ‖f‖_p = ‖f‖_q for p ≠ q are those that are …
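Hölder’s inequality as stated above can be sanity-checked numerically. In this sketch (an assumption, not from the text), functions are modeled as tables of values over a finite domain and E is the uniform average, so ‖f‖_p = (E|f|^p)^(1/p).

```python
import random

def norm(f, p):
    """p-norm under the uniform expectation: (E|f|^p)^(1/p)."""
    return (sum(abs(v) ** p for v in f) / len(f)) ** (1.0 / p)

def expect_prod(f, g):
    """E[fg] under the uniform expectation."""
    return sum(a * b for a, b in zip(f, g)) / len(f)

random.seed(0)
f = [random.uniform(-1, 1) for _ in range(1000)]
g = [random.uniform(-1, 1) for _ in range(1000)]

p, q = 3, 1.5                 # conjugate exponents: 1/3 + 1/1.5 = 1
assert expect_prod(f, g) <= norm(f, p) * norm(g, q)
```

The equality case mentioned in the text is visible here too: for a function that is constant in absolute value, ‖f‖_p is the same for every p, so changing exponents costs nothing.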
• Electronic Colloquium on Computational Complexity
• 2016
We strengthen existing evidence for the so-called “algebrization barrier”. Algebrization — short for algebraic relativization — was introduced by Aaronson and Wigderson (AW) (STOC 2008) in order to characterize proofs involving arithmetization, simulation, and other “current techniques”. However, unlike relativization, eligible statements under this notion …
The basic idea of local search is to find a good local optimum quickly enough to approximate the global optimum. This idea is somewhat like hill climbing. Starting from an arbitrary point, we try to go down or climb up a little step and compare the objective value at the new point to that at the old point. We then choose the step that increases the objective …
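The hill-climbing idea above can be sketched in a few lines. This is an illustrative one-dimensional maximizer (the function `hill_climb` and the objective `f` are assumptions for the example, not from the text): take a small step in each direction, compare objective values, keep the improving step, and stop at a local optimum.

```python
def hill_climb(objective, x0, step=0.1, iters=10000):
    """Maximize objective by local steps of size `step`; stop when
    neither neighbor improves, i.e., at a local optimum on the step grid."""
    x = x0
    for _ in range(iters):
        candidates = [x - step, x + step]        # go down or climb up a little
        best = max(candidates, key=objective)    # compare objective values
        if objective(best) <= objective(x):      # no improving step: local optimum
            break
        x = best
    return x

# Example: a concave objective whose global maximum is at x = 2.
f = lambda x: -(x - 2.0) ** 2
x_star = hill_climb(f, x0=-5.0)
assert abs(x_star - 2.0) < 1e-9
```

For a concave objective the local optimum found is also global; for multi-modal objectives, hill climbing gets stuck at whichever local optimum the starting point leads to, which is why local search is typically restarted from several arbitrary points.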