Corpus ID: 238259969

Induction, Popper, and machine learning

@article{Nielson2021InductionPA,
  title={Induction, Popper, and machine learning},
  author={Bruce Nielson and Daniel C. Elton},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.00840}
}
Francis Bacon popularized the idea that science is based on a process of induction, by which repeated observations are, in some unspecified way, generalized to theories on the assumption that the future resembles the past. This idea was criticized by Hume and others as untenable, leading to the famous problem of induction. It was not until the work of Karl Popper that this problem was solved: he demonstrated that induction is not the basis for science, and that the development of scientific… 
1 Citation

Program Synthesis for the OEIS (system description)

  • 2022

References

Showing 1-10 of 15 references

Open Problems in Universal Induction & Intelligence

Open problems in universal induction and its extension to universal intelligence are discussed; the state of the art itself is described only in passing, and the reader is referred to the literature.

Philosophy and the practice of Bayesian statistics.

  • A. Gelman, C. Shalizi
    The British journal of mathematical and statistical psychology
  • 2013
It is argued that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism.

Holes in Bayesian statistics

Every philosophy has holes, and it is the responsibility of proponents of a philosophy to point out these problems. Here are a few holes in Bayesian data analysis: (1) the usual rules of conditional…

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.

Was the Watchmaker Blind? Or Was She One-Eyed?

Evolution may have no foresight, but it is at least partially directed by organisms themselves and by the populations of which they form part, and similar arguments support partial direction in the evolution of behavior.

What’s Hidden in a Randomly Weighted Neural Network?

It is empirically shown that as randomly weighted neural networks with fixed weights grow wider and deeper, an "untrained subnetwork" approaches a network with learned weights in accuracy.

From Blind to Creative: In Defense of Donald Campbell's Selectionist Theory of Human Creativity

Both Perkins and Sternberg recognize Campbell's selectionist theory of knowledge generation as accounting for certain types of learning and human creativity. However, they argue that his theory…

Embedded Agency

This work provides an informal survey of obstacles to formalizing good reasoning for agents embedded in their environment, which must optimize an environment that is not of type "function".

The Beginning of Infinity: Explanations That Transform the World