We give the first algorithm that (under distributional assumptions) efficiently learns halfspaces in the notoriously difficult agnostic framework of Kearns, Schapire, & Sellie, where a learner is…

This paper shows that for a broad class of convex potential functions, any boosting algorithm based on such a potential is highly susceptible to random classification noise: there is a simple data set of examples that such a booster learns efficiently when there is no noise, but that it cannot learn to accuracy better than 1/2 under random classification noise.
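The noise model in question is random classification noise, in which each label is flipped independently with some probability η. A minimal sketch of the noise model only (not of the paper's construction; the function name is hypothetical):

```python
import random

def add_classification_noise(labels, eta, rng=None):
    """Return a copy of +/-1 labels in which each label is flipped
    independently with probability eta (random classification noise)."""
    rng = rng or random.Random(0)
    return [-y if rng.random() < eta else y for y in labels]

# Hypothetical usage: corrupt a clean label sequence at noise rate 0.2.
clean = [1, -1, 1, 1, -1, 1, -1, -1] * 250
noisy = add_classification_noise(clean, 0.2)
```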

A new smooth boosting algorithm is described which generates only smooth distributions, i.e., distributions that do not assign too much weight to any single example, and it is used to construct malicious-noise-tolerant versions of the PAC-model p-norm linear threshold learning algorithms described by Servedio (2002).
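A distribution over m examples is commonly called 1/(εm)-smooth when no single example receives more than 1/(εm) of the total mass. As a sketch of that smoothness constraint only (a generic water-filling projection, not the paper's boosting algorithm; the function name is hypothetical):

```python
def make_smooth(weights, epsilon):
    """Project nonnegative example weights onto the set of 1/(epsilon*m)-smooth
    distributions: entries above the cap are fixed at the cap, and the
    remaining mass is spread proportionally over the rest (water-filling).
    Assumes 0 < epsilon <= 1 so that the cap is feasible."""
    m = len(weights)
    cap = 1.0 / (epsilon * m)
    total = sum(weights)
    w = [x / total for x in weights]  # normalize to a distribution
    capped = [False] * m
    for _ in range(m):  # each pass caps at least one new entry, or stops
        free_mass = 1.0 - cap * sum(capped)
        free_total = sum(w[i] for i in range(m) if not capped[i])
        changed = False
        for i in range(m):
            if not capped[i]:
                w[i] = w[i] * free_mass / free_total
                if w[i] > cap:
                    w[i] = cap
                    capped[i] = True
                    changed = True
        if not changed:
            break
    return w

# Hypothetical usage: one heavy example is capped at 1/(eps*m) = 0.4.
w = make_smooth([100, 1, 1, 1, 1], epsilon=0.5)
# -> [0.4, 0.15, 0.15, 0.15, 0.15]
```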

We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant…

The algorithm and analysis exploit new structural properties of Boolean functions and obtain the first polynomial-factor improvement on the naive n^k time bound achievable via exhaustive search.

A very easy proof that the randomized query complexity of nontrivial monotone graph properties is at least Ω(v^{4/3}/p^{1/3}), where v is the number of vertices and p ≤ 1/2 is the critical threshold probability.

The problem of making a linear network code secure is equivalent to the problem of finding a linear code with certain generalized distance properties, and it is shown that if a small amount of overall capacity is given up, then a random code achieves these properties using a much smaller field size than the construction of Cai & Yeung.

This work shows that any linear threshold function f is specified to within error ε by estimates of its Chow parameters (degree-0 and degree-1 Fourier coefficients) that are accurate to within a small additive error, and gives the first polynomial bound on the number of examples required for learning linear threshold functions in the "restricted focus of attention" framework.
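The Chow parameters of a Boolean function f: {-1,1}^n → {-1,1} are its degree-0 and degree-1 Fourier coefficients, E[f(x)] and E[f(x)·x_i] under the uniform distribution. A small sampling sketch (illustrative only; the function names are hypothetical, and the estimation method is plain Monte Carlo, not the paper's algorithm):

```python
import random

def estimate_chow_parameters(f, n, num_samples=20000, rng=None):
    """Estimate the Chow parameters of f: {-1,1}^n -> {-1,1} by averaging
    f(x) and f(x)*x_i over uniform random inputs x.
    Returns [E[f(x)], E[f(x)*x_1], ..., E[f(x)*x_n]] (empirical)."""
    rng = rng or random.Random(0)
    sums = [0.0] * (n + 1)
    for _ in range(num_samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        sums[0] += fx                      # degree-0 coefficient
        for i in range(n):
            sums[i + 1] += fx * x[i]       # degree-1 coefficients
    return [s / num_samples for s in sums]

# Hypothetical example: majority, a linear threshold function on 5 variables.
majority = lambda x: 1 if sum(x) > 0 else -1
chow = estimate_chow_parameters(majority, 5)
```

Since majority on an odd number of ±1 inputs is balanced, its degree-0 Chow parameter is 0, while each degree-1 parameter is positive and equal by symmetry.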

Gaussian surface area essentially characterizes the computational complexity of learning under the Gaussian distribution, and the resulting algorithm is the first subexponential-time algorithm for learning general convex sets even in the noise-free (PAC) model.