Cryptographic limitations on learning Boolean formulae and finite automata

  • M. Kearns, L. G. Valiant
  • Computer Science, Mathematics
    Symposium on the Theory of Computing
  • 1 February 1989
In this paper we consider the problem of learning from examples classes of functions when there are no restrictions on the allowed hypotheses other than that they are polynomial-time evaluatable. We prove that for Boolean formulae, finite automata, and constant-depth threshold circuits (simplified neural nets), this problem is computationally as difficult as the quadratic residue problem, inverting the RSA function, and factoring Blum integers (composite numbers p·q where p and q are both primes…
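The Blum integers mentioned at the end of the abstract are composites N = p·q with p and q prime and (in the usual definition) both congruent to 3 mod 4. A small illustrative checker, using trial division for intuition only (the function names are my own):

```python
# Illustrative sketch: test whether n is a Blum integer, i.e. n = p*q
# with p, q distinct primes and p % 4 == q % 4 == 3 (the usual definition;
# trial division is for small-number intuition only).

def is_prime(n: int) -> bool:
    """Naive primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_blum_integer(n: int) -> bool:
    """True if n = p*q for distinct primes p, q, both congruent to 3 mod 4."""
    for p in range(3, int(n**0.5) + 1, 2):
        if n % p == 0:
            q = n // p
            return (p != q and is_prime(p) and is_prime(q)
                    and p % 4 == 3 and q % 4 == 3)
    return False

print(is_blum_integer(21))   # 3 * 7, both ≡ 3 (mod 4): True
print(is_blum_integer(15))   # 3 * 5, but 5 ≡ 1 (mod 4): False
```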

Complexity Theoretic Limitations on Learning DNF's

It is shown that under a natural assumption on the complexity of refuting random K-SAT formulas, learning DNF formulas is hard, and the same assumption implies the hardness of learning intersections of $\omega(\log(n))$ halfspaces, agnostically learning conjunctions, as well as virtually all (distribution free) learning problems that were previously shown hard.

From average case complexity to improper learning complexity

A new technique for proving hardness of improper learning, based on reductions from problems that are hard on average, is introduced, and a (fairly strong) generalization of Feige's assumption about the complexity of refuting random constraint satisfaction problems is put forward.

A Real generalization of discrete AdaBoost

Real Boosting a la Carte with an Application to Boosting Oblique Decision Trees

This paper unifies a well-known top-down decision tree induction algorithm due to Kearns and Mansour and discrete AdaBoost as two versions of the same higher-level boosting algorithm, which may be used as the basic building block to devise simple provable boosting algorithms for complex classifiers.

New Results for Learning Noisy Parities and Halfspaces

The first nontrivial algorithm for learning parities with adversarial noise is given; it is shown that learning DNF expressions reduces to learning noisy parities on just a logarithmic number of variables, and that majorities of halfspaces are hard to PAC-learn using any representation.
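For contrast with the noisy case whose hardness the abstract establishes, the noiseless case is easy: a parity function (the XOR of a hidden subset of variables) is recoverable from examples by Gaussian elimination over GF(2). A self-contained sketch (the function name and interface are my own):

```python
# Illustrative sketch: in the *noiseless* setting, a parity f(x) = <s, x> mod 2
# can be learned exactly by Gaussian elimination over GF(2); the cited
# hardness results concern the noisy setting.

def solve_parity(examples):
    """examples: list of (x, y) with x a 0/1 list and y in {0, 1}.
    Returns a secret vector s consistent with all examples, or None."""
    n = len(examples[0][0])
    # Augmented rows [x | y] over GF(2).
    rows = [list(x) + [y] for x, y in examples]
    pivots = []
    r = 0
    for col in range(n):
        # Find a row at or below r with a 1 in this column.
        piv = next((i for i in range(r, len(rows)) if rows[i][col] == 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        # Eliminate this column from every other row (reduced echelon form).
        for i in range(len(rows)):
            if i != r and rows[i][col] == 1:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    # A zero row with label 1 means the examples are inconsistent.
    for row in rows[r:]:
        if row[-1] == 1 and all(v == 0 for v in row[:-1]):
            return None
    s = [0] * n
    for i, col in enumerate(pivots):
        s[col] = rows[i][-1]
    return s

# Hidden secret s = (1, 0, 1): the label is x0 XOR x2.
exs = [([1, 0, 0], 1), ([0, 1, 0], 0), ([0, 0, 1], 1), ([1, 1, 1], 0)]
print(solve_parity(exs))  # [1, 0, 1]
```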

Hardness Results for Learning First-Order Representations and Programming by Demonstration

It is shown that solving this “dual” DFA learning problem is hard under cryptographic assumptions, which implies the hardness of several other more natural learning problems, including learning the description logic CLASSIC from subconcepts and learning arity-two “determinate” function-free Prolog clauses from ground clauses.

The Strength of Weak Learnability

In this paper, a method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy, and it is shown that these two notions of learnability (weak and strong) are equivalent.
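The weak-to-strong conversion described above is today best known as boosting. A minimal discrete-AdaBoost sketch with one-dimensional threshold stumps (an illustrative descendant of the idea, not the paper's original construction; all names are my own):

```python
# Illustrative sketch: discrete AdaBoost over threshold stumps
# (not the original construction from the cited paper).
import math

def adaboost(xs, ys, rounds):
    """xs: floats; ys: labels in {-1, +1}. Returns [(alpha, t, s), ...]
    where each weak hypothesis is h(x) = s * (+1 if x > t else -1)."""
    n = len(xs)
    w = [1.0 / n] * n                     # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        # Weak learner: exhaustively pick the stump of least weighted error.
        best = None
        for t in sorted(set(xs)):
            for s in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if s * (1 if x > t else -1) != y)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        if err >= 0.5:                    # no weak edge left; stop
            break
        err = max(err, 1e-10)             # guard against log(0) on zero error
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        # Reweight: boost the weight of misclassified examples.
        w = [wi * math.exp(-alpha * y * s * (1 if x > t else -1))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, 1, 1, 1, -1]               # no single stump fits this labeling
clf = adaboost(xs, ys, rounds=5)
print([predict(clf, x) for x in xs])     # [-1, -1, 1, 1, 1, -1]
```

Five rounds of reweighting let the weighted vote of stumps fit a labeling that no single stump can, which is the equivalence the paper proves in general.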

Inductive Inference, DFAs, and Computational Complexity

The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.

Efficient learning of typical finite automata from random walks

The main contribution of this paper is in presenting the first efficient algorithms for learning nontrivial classes of automata in an entirely passive learning model.

A Survey of Ensemble Learning: Concepts, Algorithms, Applications, and Prospects

An attempt is made to concisely cover the three main ensemble methods: bagging, boosting, and stacking, their early development to the recent state-of-the-art algorithms, and their mathematical and algorithmic representations.
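Of the three methods the survey covers, bagging is the simplest to sketch: bootstrap-resample the training set, fit one base model per resample, and aggregate by majority vote. A toy miniature (my own example, not from the survey; the base "model" is a 1-D threshold stump standing in for any learner):

```python
# Illustrative sketch: bagging in miniature -- bootstrap resamples,
# one threshold stump per resample, majority-vote aggregation.
import random

def fit_stump(xs, ys):
    """Return the threshold t minimizing training error of sign(x > t)."""
    best = None
    for t in sorted(set(xs)):
        err = sum(1 for x, y in zip(xs, ys) if (1 if x > t else -1) != y)
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]

def bagging(xs, ys, n_models=25, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]      # bootstrap sample
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return stumps

def vote(stumps, x):
    score = sum(1 if x > t else -1 for t in stumps)     # majority vote
    return 1 if score >= 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
stumps = bagging(xs, ys)
print(vote(stumps, 0.0), vote(stumps, 5.0))  # -1 1
```

Each stump sees a slightly different resample, so the vote averages away the variance of any single fit; that variance reduction is bagging's main effect.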

An O(n^0.4)-approximation algorithm for 3-coloring

A polynomial-time algorithm is given that colors any 3-colorable n-node graph with O(n^0.4) colors, improving the best previously known bound of O(√n/√(log n)) by reducing the number of colors needed to color a 3-colorable graph.

Fast probabilistic algorithms for Hamiltonian circuits and matchings

It is shown that for each problem there is an extremely fast algorithm that, with probability tending to one, finds a solution in randomly chosen graphs of sufficient density; the results contrast with the known NP-completeness of the first two problems.

Complexity of Automaton Identification from Given Data

  • E. M. Gold
  • Computer Science, Mathematics
    Inf. Control.
  • 1978

Log Depth Circuits for Division and Related Problems

This work presents optimal depth Boolean circuits for integer division, powering, and multiple products and describes an algorithm for testing divisibility that is optimal for both depth and space.

Digital signatures and public key functions as intractable as factoring

  • M.I.T. Laboratory for Computer Science, technical report number TM-212
  • 1979

An O(n^0.4)-approximation algorithm for 3-coloring

  • Proceedings of the 21st ACM Symposium on the Theory of Computing
  • 1989

Lecture notes on the complexity of some problems in number theory

  • 1982

Log depth circuits for division and related problems

  • SIAM J. Comput. 15, 4 (1986), 994-1003.
  • 1986

A theory of the learnable

This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.

On the Markov Chain Simulation Method for Uniform Combinatorial Distributions and Simulated Annealing

  • D. Aldous
  • Mathematics
    Probability in the Engineering and Informational Sciences
  • 1987
Uniform distributions on complicated combinatorial sets can be simulated by the Markov chain method. A condition is given for the simulations to be accurate in polynomial time. Similar analysis of…
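As a toy illustration of the Markov chain simulation method (my own example, not Aldous's analysis): a symmetric swap-move chain on the k-element subsets of {0, …, n−1} has the uniform distribution as its stationary distribution, so long runs sample subsets near-uniformly.

```python
# Illustrative sketch: sampling k-subsets of range(n) near-uniformly with a
# lazy, symmetric swap-move Markov chain (uniform stationary distribution).
import random
from collections import Counter

def swap_walk(n, k, steps, seed=0):
    """Run the swap chain from a fixed start state; return the final subset."""
    rng = random.Random(seed)
    state = set(range(k))                 # arbitrary start state
    for _ in range(steps):
        i = rng.choice(sorted(state))     # element proposed for removal
        j = rng.randrange(n)              # candidate element to add
        if j not in state:                # symmetric proposal: swap i for j
            state.remove(i)
            state.add(j)
        # if j is already present, hold in place (lazy step => aperiodic)
    return frozenset(state)

# Long independent runs visit all C(4, 2) = 6 two-element subsets of
# {0, 1, 2, 3} with similar frequency.
counts = Counter(swap_walk(4, 2, steps=50, seed=s) for s in range(3000))
print(len(counts))  # 6
```

The chain is symmetric (each swap move and its reverse have probability 1/(k·n)), connected, and lazy, which is exactly the setup under which the Markov chain method yields the uniform distribution in the limit.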