# Cryptographic Hardness of Learning Halfspaces with Massart Noise

```bibtex
@article{Diakonikolas2022CryptographicHO,
  title   = {Cryptographic Hardness of Learning Halfspaces with Massart Noise},
  author  = {Ilias Diakonikolas and Daniel M. Kane and Pasin Manurangsi and Lisheng Ren},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2207.14266}
}
```

We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples (x, y) ∈ ℝ^N × {±1}, where the distribution of x is arbitrary and the label y is a Massart corruption of f(x), for an unknown halfspace f: ℝ^N → {±1}, with flipping probability η(x) ≤ η < 1/2. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning…
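The Massart noise model defined above can be made concrete with a short simulation. The sampler below is an illustrative sketch, not code from the paper: the function name, the x-dependent flip-rate schedule, and the Gaussian marginal over x are our own choices for concreteness (the model itself allows an arbitrary distribution over x and any flip rate bounded by η).

```python
import numpy as np

rng = np.random.default_rng(0)

def massart_samples(w, n, eta=0.3, dim=5):
    """Draw n labeled examples (x, y) where y is a Massart corruption of
    sign(<w, x>): each label is flipped independently with probability
    eta(x) <= eta < 1/2. The flip rate eta(x) below is one arbitrary
    choice; the model permits any function of x bounded by eta."""
    X = rng.standard_normal((n, dim))        # Gaussian marginal (illustrative)
    clean = np.sign(X @ w)
    clean[clean == 0] = 1                    # break ties toward +1
    # An example x-dependent flip rate, strictly below eta for every x.
    eta_x = eta * (np.abs(X[:, 0]) / (1 + np.abs(X[:, 0])))
    flips = rng.random(n) < eta_x
    y = np.where(flips, -clean, clean)
    return X, y
```

Because each flip probability is bounded away from 1/2, the clean label remains the more likely one for every x, which is what separates Massart noise from agnostic (adversarial) noise.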

## 3 Citations

### Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

- Computer Science, Mathematics
- 2022

This work gives the first hardness of improperly learning halfspaces in the agnostic model based on a worst-case complexity assumption, inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.

### SQ Lower Bounds for Learning Single Neurons with Massart Noise

- Computer Science, ArXiv
- 2022

A novel SQ-hard construction is given for learning {±1}-weight Massart halfspaces on the Boolean hypercube, which is interesting in its own right.

### Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

- Computer Science, Mathematics, IACR Cryptol. ePrint Arch.
- 2022

A key technical tool is a reduction from classical LWE to LWE with k-sparse secrets, where the multiplicative increase in the noise is only O(√k), independent of the ambient dimension n.

## References


### Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

- Computer Science, COLT
- 2022

It is shown that no efficient SQ algorithm for learning Massart halfspaces on ℝ^d can achieve error better than Ω(η), even if OPT = 2^{−log^c(d)}, for any universal constant c ∈ (0, 1).

### Complexity theoretic limitations on learning halfspaces

- Computer Science, Mathematics, STOC
- 2016

It is shown that no efficient learning algorithm has non-trivial worst-case performance, even under the guarantees that Err_H(D) ≤ η for an arbitrarily small constant η > 0 and that D is supported on the Boolean cube.

### Hardness of Learning Halfspaces with Noise

- Computer Science, Mathematics, 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)
- 2006

It is proved that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense, and a strong hardness is obtained for another basic computational problem: solving a linear system over the rationals.

### Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise

- Computer Science, COLT
- 2022

Tight statistical query lower bounds for learning halfspaces in the presence of Massart noise are given: for arbitrary η ∈ [0, 1/2], every SQ algorithm achieving misclassification error better than η requires queries of superpolynomial accuracy or at least a superpolynomial number of queries.

### Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

- Computer Science, Mathematics, ArXiv
- 2022

This work gives the first hardness of improperly learning halfspaces in the agnostic model based on a worst-case complexity assumption, inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.

### Boosting in the Presence of Massart Noise

- Computer Science, COLT
- 2021

The main positive result is the first computationally efficient boosting algorithm in the presence of Massart noise that achieves misclassification error arbitrarily close to η.

### New Results for Learning Noisy Parities and Halfspaces

- Computer Science, 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)
- 2006

The first nontrivial algorithm for learning parities with adversarial noise is given; it is shown that learning DNF expressions reduces to learning noisy parities on only a logarithmic number of variables, and that majorities of halfspaces are hard to PAC-learn using any representation.

### Distribution-Independent PAC Learning of Halfspaces with Massart Noise

- Computer Science, NeurIPS
- 2019

Prior to this work, no efficient weak (distribution-independent) learner was known in this model, even for the class of disjunctions; evidence is also given that improving on the error guarantee of the algorithm may be computationally hard.

### On the Hardness of Learning With Errors with Binary Secrets

- Mathematics, Computer Science, IACR Cryptol. ePrint Arch.
- 2018

It is proved that the binary-secret LWE distribution is pseudorandom, under standard worst-case complexity assumptions on lattice problems.

### Efficient noise-tolerant learning from statistical queries

- Computer Science, STOC
- 1993

This paper formalizes a new but related model of learning from statistical queries and demonstrates the generality of the statistical query model, showing that practically every class learnable in Valiant's model and its variants can also be learned in the new model (and thus can be learned in the presence of noise).
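The statistical query model described in this reference can be sketched concretely. The oracle below is a hypothetical toy simulation (the names `sq_oracle`, `agree`, and the tolerance handling are our own, not from the paper): it shows that an SQ learner only ever receives noisy expectations of bounded queries, never individual examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def sq_oracle(query, examples, tau):
    """Answer a statistical query: for a bounded query q(x, y) -> [0, 1],
    return E[q(x, y)] over the sample, perturbed by an additive error of
    magnitude at most tau (the query tolerance)."""
    X, y = examples
    expectation = np.mean([query(xi, yi) for xi, yi in zip(X, y)])
    return expectation + rng.uniform(-tau, tau)

# Example: estimate how often the label agrees with the sign of the
# first coordinate, using only oracle access to the sample.
X = rng.standard_normal((500, 3))
y = np.sign(X[:, 0])                      # noiseless labels for the demo
agree = lambda x, label: float(label == np.sign(x[0]))
estimate = sq_oracle(agree, (X, y), tau=0.05)
```

Because every answer is accurate only up to the tolerance τ, any learner built solely from such queries automatically tolerates label noise of comparable magnitude, which is the noise-tolerance property the reference establishes.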