How to Tell When Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions

@article{Forster1994HowTT,
  title={How to Tell When Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions},
  author={Malcolm R. Forster and Elliott Sober},
  journal={The British Journal for the Philosophy of Science},
  year={1994},
  volume={45},
  pages={1--35}
}
  • M. Forster, E. Sober
  • Published 1 March 1994
  • Computer Science
  • The British Journal for the Philosophy of Science
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical… 
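The Akaike result described in the abstract can be illustrated with a small numerical experiment: fit polynomials of increasing degree to noisy data and pick the one minimising AIC, which trades goodness-of-fit against the number of adjustable parameters. This is only a sketch of the general idea; the polynomial family, noise level, and helper names are illustrative assumptions, not anything from the paper itself.

```python
import numpy as np

def aic(n, rss, k):
    # AIC for a least-squares fit with Gaussian errors,
    # up to an additive constant: n*ln(RSS/n) + 2k.
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # true curve is linear

# Score candidate curve families (polynomial degrees 0..4).
scores = {}
for degree in range(5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 2  # polynomial coefficients plus the noise variance
    scores[degree] = aic(x.size, rss, k)

# The minimum-AIC family is the estimated most predictively accurate one;
# with linear data it should not be the constant (degree-0) model.
best = min(scores, key=scores.get)
```

Higher-degree polynomials always fit the sample at least as well, so raw fit alone would always favour the most complex curve; the 2k penalty is what lets the data indicate the curve's form.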
Simplicity is not Truth-Indicative
TLDR
It is argued that, in general, where the evidence supports two theories equally, the simpler theory is not more likely to be true and is not likely to be nearer the truth, and that Occam's razor eliminates itself.
Non-bayesian foundations for statistical estimation, prediction, and the ravens example
The paper provides a formal proof that efficient estimates of parameters, which vary as little as possible when measurements are repeated, may be expected to provide more accurate predictions.
The Curve Fitting Problem: A Bayesian Rejoinder
In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. To solve this problem, two proposals, the first one based on Bayes's theorem…
The Problem of Underdetermination in Model Selection
TLDR
A new paradigm for inference-oriented model selection is proposed that evaluates models on the basis of a trade-off between model fit and model plausibility by comparing the fits of sequentially nested models to derive an empirical lower bound for the subjective plausibility of assumptions.
Comment: The Inferential Information Criterion from a Bayesian Point of View
  • O. Vassend
  • Computer Science
    Sociological Methodology
  • 2018
The Bayesian information criterion (BIC) has been proposed as a way to carry out Bayesian hypothesis testing when there are no clear expectations. However, the BIC rests on a particular prior…
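The contrast between the BIC discussed here and Akaike's criterion comes down to the penalty term: AIC charges 2 per adjustable parameter, BIC charges ln(n), so for moderate sample sizes BIC leans harder toward the simpler model. A minimal sketch, assuming the usual least-squares forms of both criteria (the residual sums below are hypothetical numbers chosen to show the two criteria disagreeing):

```python
import numpy as np

def aic(n, rss, k):
    # Least-squares AIC (Gaussian errors assumed): n*ln(RSS/n) + 2k.
    return n * np.log(rss / n) + 2 * k

def bic(n, rss, k):
    # BIC swaps AIC's per-parameter penalty of 2 for ln(n), so it
    # punishes complexity more heavily once n exceeds e^2 ~ 7.4.
    return n * np.log(rss / n) + np.log(n) * k

# Two hypothetical nested fits on n = 100 observations: the larger
# model (4 parameters) reduces the residual sum of squares by 5%.
n, rss_small, rss_big = 100, 1.05, 1.00

# AIC prefers the larger model here, BIC the smaller one:
aic_picks_big = aic(n, rss_big, 4) < aic(n, rss_small, 2)
bic_picks_small = bic(n, rss_small, 2) < bic(n, rss_big, 4)
```

With ln(100) ≈ 4.6, BIC's extra penalty of about 9.2 for two added parameters outweighs the 5% fit improvement (worth about 4.9 on the log-likelihood scale), while AIC's extra penalty of 4 does not; this is the kind of divergence between the criteria that the nested-models literature turns on.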
BAYESIAN OCKHAM’S RAZOR AND NESTED MODELS
  • B. Autzen
  • Computer Science
    Economics and Philosophy
  • 2019
TLDR
This paper will discuss a problem that results when Bayesian Ockham’s razor is applied to nested economic models and argue that previous responses to the problem are unsatisfactory and develop a novel reply.
Model Selection in Science: The Problem of Language Variance
  • M. Forster
  • Mathematics
    The British Journal for the Philosophy of Science
  • 1999
Recent solutions to the curve-fitting problem, described in Forster and Sober ([1995]), trade off the simplicity and fit of hypotheses by defining simplicity as the paucity of adjustable parameters.
How To Remove the Ad Hoc Features of Statistical Inference within a Frequentist Paradigm
Our aim is to develop a frequentist theory of decision-making. The resulting unification of the seemingly unrelated theories of hypothesis testing and parameter estimation is based on a new…
Simplicity, Inference and Modelling: What is the problem of simplicity?
The problem of simplicity involves three questions: How is the simplicity of a hypothesis to be measured? How is the use of simplicity as a guide to hypothesis choice to be justified? And how is…
Coherence, Explanation, and Hypothesis Selection
  • D. H. Glass
  • Computer Science
    The British Journal for the Philosophy of Science
  • 2021
TLDR
By overcoming some of the problems with the previous approach, this work provides a more adequate defence of IBE and suggests that IBE not only tracks truth but also has practical advantages over the previous approaches.

References

Showing 1-10 of 108 references
Non-bayesian foundations for statistical estimation, prediction, and the ravens example
The paper provides a formal proof that efficient estimates of parameters, which vary as little as possible when measurements are repeated, may be expected to provide more accurate predictions.
Ockham's Razor and Bayesian Analysis
'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference…
Ockham’s Razor
William of Ockham’s razor principle, that the simplest theory to fit noisy data closely should be preferred, is expressible quantitatively in Bayesian terms. There is a trade-off between simplicity…
Logical versus Historical Theories of Confirmation
  • A. Musgrave
  • Philosophy
    The British Journal for the Philosophy of Science
  • 1974
Thales predicted an eclipse, and became one of the Seven Sages. Since then many have urged that a scientific theory is to be especially prized if it yields successful predictions. For example…
The Curve Fitting Problem: A Solution
  • Peter D. Turney
  • Mathematics
    The British Journal for the Philosophy of Science
  • 1990
Much of scientific inference involves fitting numerical data with a curve, or functional relation. The received view is that the fittest curve is the curve which best balances the conflicting demands…
Scientific Reasoning: The Bayesian Approach
TLDR
This new edition of Howson and Urbach's account of scientific method from the Bayesian standpoint includes chapter exercises and extended material on topics such as regression analysis, distributions, densities, randomisation, and conditionalisation.
How the Laws of Physics Lie.
Nancy Cartwright argues for a novel conception of the role of fundamental scientific laws in modern natural science. If we attend closely to the manner in which theoretical laws figure in the…
Reconstructing the Past: Parsimony, Evolution, and Inference
TLDR
Elliott Sober builds a general framework for understanding the circumstances in which parsimony makes sense as a tool of phylogenetic inference, and provides a detailed critique of parsimony in the biological literature, exploring the strengths and limitations of both statistical and nonstatistical cladistic arguments.
A Realistic Theory of Science.
It is the main contention of Cliff Hooker's new book that a realistic theory of science, a theory which gives a good account of what actually goes on in science, should be based on realism. Many…