
We prove that if a linear error-correcting code C : {0,1}^n → {0,1}^m is such that a bit of the message can be probabilistically reconstructed by looking at two entries of a corrupted codeword, then m = 2^{Ω(n)}. We also present several extensions of this result. We show a reduction from the complexity of one-round, information-theoretic Private Information… (More)
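
The two-query regime can be illustrated with the Hadamard code, the classic two-query locally decodable code, which has length m = 2^n and so matches the exponential lower bound. A minimal sketch (an illustration of the setting, not the paper's construction; corruption handling is omitted):

```python
import random

def inner(a, x):
    # <a, x> mod 2, where a is an integer bitmask and x is a list of bits.
    return sum(b for i, b in enumerate(x) if a >> i & 1) % 2

def hadamard_encode(x):
    # Length-2^n codeword: one parity bit per mask a in {0,1}^n.
    return [inner(a, x) for a in range(2 ** len(x))]

def decode_bit(codeword, i, n):
    # Two-query local decoding: query a random position a and position
    # a XOR e_i; their XOR equals x_i by linearity of the parity, so a
    # lightly corrupted codeword still yields x_i with high probability.
    a = random.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]
```

Each decoding call touches only two of the 2^n codeword entries, which is exactly the query model the lower bound addresses.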

We investigate variants of Lloyd's heuristic for clustering high-dimensional data in an attempt to explain its popularity (a half century after its introduction) among practitioners, and in order to suggest improvements in its application. We propose and justify a *clusterability* criterion for data sets. We present variants of Lloyd's heuristic that… (More)
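
For reference, the plain Lloyd's heuristic the paper builds on alternates a nearest-center assignment step with a centroid update step. A minimal sketch (initial centers are passed in explicitly; the paper's variants differ in how they seed and modify this loop):

```python
def lloyd(points, centers, iters=10):
    # Plain Lloyd's heuristic (k-means): alternate assignment and update.
    k = len(centers)
    for _ in range(iters):
        # Assignment step: each point joins its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster
        # (an empty cluster keeps its previous center).
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers
```

Each iteration never increases the sum of squared distances to assigned centers, which is why the heuristic converges, though possibly to a poor local optimum — the gap the clusterability criterion is meant to address.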

Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in… (More)

This paper is concerned with the question of whether it is possible to sustain computation in the presence of noise; and if so, at what cost in efficiency and reliability. Shannon in his classical coding theorem showed that data can successfully and efficiently be transmitted in a noisy environment [27]. The outstanding features of the… (More)

We present a fairly general method for finding deterministic constructions obeying what we call k-restrictions; this yields structures of size not much larger than the probabilistic bound. The structures constructed by our method include (n, k)-universal sets (a collection of binary vectors of length n such that for any subset of size k of the indices, all 2… (More)
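
The probabilistic bound the deterministic method competes with can be made concrete: random binary vectors become (n, k)-universal quickly, but the construction is not deterministic. A hypothetical baseline sketch (the checker and the randomized grower; the paper's contribution is derandomizing constructions like this):

```python
import itertools
import random

def is_universal(vectors, n, k):
    # (n, k)-universality: every choice of k indices must exhibit
    # all 2^k bit patterns somewhere in the collection.
    for idx in itertools.combinations(range(n), k):
        patterns = {tuple(v[i] for i in idx) for v in vectors}
        if len(patterns) < 2 ** k:
            return False
    return True

def random_universal_set(n, k, rng, batch=8):
    # Probabilistic construction: grow a uniformly random collection
    # until it is (n, k)-universal; succeeds with size O(2^k * k * log n)
    # in expectation, which is the bound deterministic methods target.
    vectors = []
    while not is_universal(vectors, n, k):
        vectors.extend([rng.randrange(2) for _ in range(n)] for _ in range(batch))
    return vectors
```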

- William Evans, Claire Kenyon, Yuval Peres, Leonard J Schulman
- 1998

Consider a process in which information is transmitted from a given root node on a noisy tree network T. We start with an unbiased random bit R at the root of the tree, and send it down the edges of T. On every edge the bit can be reversed with probability ε, and these errors occur independently. The goal is to reconstruct R from the values which arrive at… (More)
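
The process is easy to simulate, which helps build intuition for when reconstruction from the leaves is possible. A hypothetical simulation sketch (majority vote over the leaves is just the simplest estimator, not the paper's analysis):

```python
import random

def broadcast(tree, root_bit, eps, rng):
    # Push the root bit down the tree; each edge independently flips
    # the bit with probability eps.
    values = {0: root_bit}
    stack = [0]
    while stack:
        u = stack.pop()
        for v in tree.get(u, ()):
            values[v] = values[u] ^ (rng.random() < eps)
            stack.append(v)
    return values

def majority_of_leaves(values, leaves):
    # Simplest reconstruction rule: majority vote over the leaf values.
    return 1 if 2 * sum(values[v] for v in leaves) > len(leaves) else 0
```

For small ε and a bushy tree, the majority of the leaves agrees with the root bit most of the time; as ε grows the leaf values decorrelate from R and reconstruction degrades.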

We show that, given data from a mixture of k well-separated spherical Gaussians in R^n, a simple two-round variant of EM will, with high probability, learn the centers of the Gaussians to near-optimal precision, if the dimension is high (n ≫ log k). We relate this to previous theoretical and empirical work on the EM algorithm.
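
A single EM round in this setting has a particularly clean form. A schematic sketch for a uniform mixture of spherical Gaussians with known common variance σ² (an illustration of one round, not the paper's specific two-round variant; it assumes every component retains nonzero total responsibility):

```python
import math

def em_round(points, centers, sigma):
    # One EM round: soft-assign each point to components in proportion
    # to the spherical Gaussian densities, then re-estimate the means.
    k = len(centers)
    dim = len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    totals = [0.0] * k
    for p in points:
        # Responsibilities, proportional to exp(-||p - c||^2 / (2 sigma^2)).
        w = [math.exp(-sum((a - b) ** 2 for a, b in zip(p, c)) / (2 * sigma ** 2))
             for c in centers]
        z = sum(w)
        for j in range(k):
            r = w[j] / z
            totals[j] += r
            for d in range(dim):
                sums[j][d] += r * p[d]
    # New mean = responsibility-weighted average of the points.
    return [tuple(s / t for s in sums[j]) for j, t in enumerate(totals)]
```

When the components are well separated relative to σ, the responsibilities are nearly hard assignments, which is why very few rounds suffice to localize the centers.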

Sampling is an important primitive in probabilistic and quantum algorithms. In the spirit of communication complexity, given a function $f: X \times Y \rightarrow \{0,1\}$ and a probability distribution $D$ over $X \times Y$, we define the sampling complexity of $(f,D)$ as the minimum number of bits Alice and Bob must communicate for Alice to pick $x \in X$… (More)

In this paper we obtain improved upper and lower bounds for the best approximation factor for Sparsest Cut achievable in the cut-matching game framework proposed in Khandekar et al. [9]. We show that this simple framework can be used to design combinatorial algorithms that achieve O(log n) approximation factor and whose running time is dominated by a… (More)

Attempts to find new quantum algorithms that outperform classical computation have focused primarily on the nonabelian hidden subgroup problem, which generalizes the central problem solved by Shor's factoring algorithm. We suggest an alternative generalization, namely to problems of finding hidden nonlinear structures over finite fields. We give examples… (More)