• Corpus ID: 219531142

# Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability

@article{Chen2020ClassificationUM,
  title={Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability},
  author={Sitan Chen and Frederic Koehler and Ankur Moitra and Morris Yau},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.04787}
}
• Published 8 June 2020
• Computer Science
• ArXiv

### Distribution-Independent PAC Learning of Halfspaces with Massart Noise

• Computer Science
NeurIPS
• 2019
No efficient weak (distribution-independent) learner was previously known in this model, even for the class of disjunctions, which is evidence that improving on the error guarantee of the proposed algorithm may be computationally hard.
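
For reference, the Massart noise model mentioned here has a standard definition (stated for context; it is not part of this entry's abstract): labels agree with a target halfspace $\mathrm{sign}(\langle w^*, x \rangle)$, except that an adversary may flip each label independently with an instance-dependent probability bounded away from $1/2$:

$$\Pr[\, y \neq \mathrm{sign}(\langle w^*, x \rangle) \mid x \,] = \eta(x) \leq \eta < \tfrac{1}{2}.$$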

### Polynomial regression under arbitrary product distributions

• Computer Science, Mathematics
Machine Learning
• 2010
A very simple proof that threshold functions over arbitrary product spaces have $\delta$-noise sensitivity $O(\sqrt{\delta})$, resolving an open problem suggested by Peres (2004).
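
To unpack the statement (a standard definition, included for context): for $x$ drawn from a product distribution, let $\tilde{x}$ be obtained by independently resampling each coordinate of $x$ from its marginal with probability $\delta$. The $\delta$-noise sensitivity of $f$ is

$$\mathrm{NS}_\delta(f) = \Pr_{x, \tilde{x}}\left[\, f(x) \neq f(\tilde{x}) \,\right],$$

and the result says that $\mathrm{NS}_\delta(f) = O(\sqrt{\delta})$ for every threshold function $f$ over an arbitrary product space.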

### Learning Halfspaces with Massart Noise Under Structured Distributions

• Computer Science, Mathematics
COLT
• 2020
This work identifies a smooth *non-convex* surrogate loss with the property that any approximate stationary point of this loss defines a halfspace that is close to the target halfspace, and hence can be used to solve the underlying learning problem.
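
As a rough illustration of this style of algorithm (a minimal sketch only: the surrogate below is a generic sigmoid of the normalized margin, not the paper's exact loss, and all names are illustrative), one can run projected SGD over the unit sphere and look for approximate stationary points:

```python
import numpy as np

def surrogate_grad(w, x, y, sigma=0.1):
    """Gradient at one example of the smooth non-convex surrogate
    ell(w) = sigmoid(-y * <w, x> / sigma), for unit-norm w.
    (Generic stand-in for the paper's surrogate loss.)"""
    m = -y * np.dot(w, x) / sigma
    s = 1.0 / (1.0 + np.exp(-m))          # sigmoid(m)
    return s * (1.0 - s) * (-y / sigma) * x

def psgd_halfspace(X, Y, steps=10_000, lr=0.01, sigma=0.1, seed=0):
    """Projected SGD on the unit sphere; approximate stationary
    points of the surrogate are the candidate halfspaces."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        i = rng.integers(n)
        w = w - lr * surrogate_grad(w, X[i], Y[i], sigma)
        w /= np.linalg.norm(w)            # project back to the sphere
    return w
```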

### A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions

• Computer Science, Mathematics
Algorithmica
• 1998
It is shown how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter.
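
One classic example of such a simple weak learner (a sketch in the same spirit, not the paper's exact greedy procedure; it yields a weak hypothesis only under suitable noise and distributional conditions) is the "averaging" hypothesis, which uses the mean of the label-weighted examples as the normal vector of a halfspace:

```python
import numpy as np

def averaging_weak_hypothesis(X, Y):
    """Halfspace whose normal is the average of y_i * x_i; under
    suitable conditions it correctly classifies noticeably more
    than half of the examples. (Classic simple weak learner, not
    the paper's exact greedy method.)"""
    w = (Y[:, None] * X).mean(axis=0)     # mean of label-weighted points
    return lambda x: np.sign(np.dot(x, w))
```

The weak-learning "advantage" is then the amount by which the empirical accuracy of the returned hypothesis exceeds $1/2$.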

### Distribution-Independent Evolvability of Linear Threshold Functions

• Computer Science, Mathematics
COLT
• 2011
This paper presents a proof that linear threshold functions having a non-negligible margin on the data points are evolvable distribution-independently via a simple mutation algorithm, and shows that without such a margin assumption the answer is negative.
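
To give a feel for what a "simple mutation algorithm" looks like in Valiant's evolvability framework (an illustrative toy loop only, not Feldman's actual mutator; all parameters are made up), one maintains a candidate representation, proposes random mutations, and keeps a mutation only when its empirical performance is beneficial or neutral:

```python
import numpy as np

def evolve_ltf(X, Y, generations=1_000, step=0.1, tol=1e-3, seed=0):
    """Toy mutation loop: perturb the weights at random and accept
    the mutation when empirical agreement with the labels does not
    drop by more than tol (beneficial or neutral mutations only).
    (Illustrative sketch, not Feldman's algorithm.)"""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])

    def performance(v):
        # Empirical correlation of sign(<v, x>) with the +/-1 labels.
        return np.mean(np.sign(X @ v) * Y)

    for _ in range(generations):
        candidate = w + step * rng.standard_normal(w.shape)
        if performance(candidate) >= performance(w) - tol:
            w = candidate
    return w
```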

### On Basing Lower-Bounds for Learning on Worst-Case Assumptions

• Computer Science
2008 49th Annual IEEE Symposium on Foundations of Computer Science
• 2008
It is proved that if a language L reduces to the task of improper learning of circuits, then, depending on the type of reduction used, either L has a statistical zero-knowledge argument system, or the worst-case hardness of L implies the existence of a weak variant of one-way functions defined by Ostrovsky-Wigderson (ISTCS '93).