# Induction, Popper, and machine learning

@article{Nielson2021InductionPA,
  title={Induction, Popper, and machine learning},
  author={Bruce Nielson and Daniel C. Elton},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.00840}
}

Francis Bacon popularized the idea that science is based on a process of induction, by which repeated observations are, in some unspecified way, generalized into theories on the assumption that the future resembles the past. This idea was criticized by Hume and others as untenable, leading to the famous problem of induction. The problem was not solved until the work of Karl Popper, who demonstrated that induction is not the basis for science and that the development of scientific…

## One Citation

### Program Synthesis for the OEIS (system description)

- 2022

## References

Showing 1–10 of 15 references.

### Applying Deutsch’s concept of good explanations to artificial intelligence and neuroscience – An initial exploration

- Computer Science, Cognitive Systems Research
- 2021

### Open Problems in Universal Induction & Intelligence

- Computer Science, Algorithms
- 2009

Open problems in universal induction and its extension to universal intelligence are surveyed; the state of the art itself is described only in passing, with the reader referred to the literature for details.

### Philosophy and the practice of Bayesian statistics.

- Economics, The British Journal of Mathematical and Statistical Psychology
- 2013

It is argued that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism.

### Holes in Bayesian statistics

- Philosophy, Journal of Physics G: Nuclear and Particle Physics
- 2020

Every philosophy has holes, and it is the responsibility of proponents of a philosophy to point out these problems. Here are a few holes in Bayesian data analysis: (1) the usual rules of conditional…

### Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

- Computer Science, ICML
- 2016

A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.

### Was the Watchmaker Blind? Or Was She One-Eyed?

- Psychology, Biology
- 2017

Evolution may have no foresight, but it is at least partially directed by organisms themselves and by the populations of which they form part, and similar arguments support partial direction in the evolution of behavior.

### What’s Hidden in a Randomly Weighted Neural Network?

- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020

It is empirically shown that as randomly weighted neural networks with fixed weights grow wider and deeper, an "untrained subnetwork" approaches a network with learned weights in accuracy.

### From Blind to Creative: In Defense of Donald Campbell's Selectionist Theory of Human Creativity

- Philosophy, Psychology
- 1998

Both Perkins and Sternberg recognize Campbell's selectionist theory of knowledge generation as accounting for certain types of learning and human creativity. However, they argue that his theory…

### Embedded Agency

- Computer Science, Encyclopedia of Creativity, Invention, Innovation and Entrepreneurship
- 2020

This work provides an informal survey of obstacles to formalizing good reasoning for agents embedded in their environment, which must optimize an environment that is not of type "function".

### The Beginning of Infinity: Explanations That Transform the World

- Economics
- 2012
