• Publications
On genetic algorithms
TLDR
Culling is shown to be near optimal for this problem, highly noise tolerant, and the best known approach in some regimes; new large deviation bounds on the underlying submartingale make it possible to determine the running time of the algorithm.
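A minimal sketch of culling, i.e. truncation selection, in a genetic algorithm. The OneMax fitness function, population size, and mutation rate below are illustrative assumptions for demonstration only, not the problem or parameters analyzed in the paper.

    import random

    def culling_ga(fitness, n_bits=50, pop_size=200, keep_frac=0.25,
                   mutation_rate=0.02, generations=100):
        # Initialize a random population of bitstrings.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            # Culling: keep only the top fraction of the population by fitness.
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: max(1, int(keep_frac * pop_size))]
            # Refill the population with mutated copies of the survivors.
            pop = [
                [b ^ (random.random() < mutation_rate) for b in random.choice(survivors)]
                for _ in range(pop_size)
            ]
        return max(pop, key=fitness)

    # Illustrative fitness: count of ones (OneMax), not the problem studied in the paper.
    best = culling_ga(fitness=sum)
    print(sum(best), "ones out of 50")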
What Size Net Gives Valid Generalization?
TLDR
It is shown that if m ≥ O((W/∊) log(N/∊)) random examples can be loaded on a feedforward network of linear threshold functions with N nodes and W weights, so that at least a fraction 1 − ∊/2 of the examples are correctly classified, then one has confidence approaching certainty that the network will correctly classify a fraction 1 − ∊ of future test examples drawn from the same distribution.
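A back-of-the-envelope evaluation of the sample-size bound quoted above. The constant hidden in the O(·) is unspecified, and the example network sizes are assumptions, so this only gives an order-of-magnitude guide.

    import math

    def sample_bound(W, N, eps):
        # Leading term of the bound m = O((W / eps) * log(N / eps));
        # the hidden constant is not specified, so treat the result as a rough scale.
        return (W / eps) * math.log(N / eps)

    # Example: a net with N = 100 threshold units, W = 1,000 weights, target error eps = 0.1.
    print(int(sample_bound(W=1_000, N=100, eps=0.1)))  # roughly 69,000 examples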
On the capabilities of multilayer perceptrons
  • E. Baum
  • Computer Science
    J. Complex.
  • 1 September 1988
What Is Thought?
In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrödinger in his classic 1944 work What Is Life? argued, ten years before the discovery of DNA, that life must be explainable at a fundamental level by physics and chemistry, Baum contends that mind and meaning must ultimately be explainable in terms of computation.
Neural net algorithms that learn in polynomial time from examples and queries
  • E. Baum
  • Computer Science, Mathematics
    IEEE Trans. Neural Networks
  • 1991
TLDR
The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth-two threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k ≤ 4.
Building an associative memory vastly larger than the brain.
  • E. Baum
  • Medicine, Biology
    Science
  • 28 April 1995
Constructing Hidden Units Using Examples and Queries
TLDR
Empirical tests show that the method can also learn far more complicated functions such as randomly generated networks with 200 hidden units, and requires only 30 minutes of CPU time to learn 200-bit parity to 99.7% accuracy.
Supervised Learning of Probability Distributions by Neural Networks
We propose that the back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the "error".
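A minimal sketch of the contrast described above: treating a unit's output as a probability and following the gradient of the log likelihood rather than of the squared error. The single sigmoid output unit and NumPy formulation are illustrative assumptions, not the paper's actual network.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def log_likelihood_grad(w, x, t):
        # Treat y = sigmoid(w . x) as a probability and ascend the log likelihood
        # t*log(y) + (1-t)*log(1-y); its gradient simplifies to (t - y) * x.
        y = sigmoid(np.dot(w, x))
        return (t - y) * x

    def squared_error_grad(w, x, t):
        # Gradient of -(t - y)^2 / 2 for comparison: it carries an extra y*(1-y)
        # factor, which vanishes when the unit saturates.
        y = sigmoid(np.dot(w, x))
        return (t - y) * y * (1.0 - y) * x

    rng = np.random.default_rng(0)
    w, x, t = rng.normal(size=3), rng.normal(size=3), 1.0
    print(log_likelihood_grad(w, x, t), squared_error_grad(w, x, t))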
...