Making the Most Of Statistical Analyses: Improving Interpretation and Presentation

@article{King2000MakingTM,
  title={Making the Most Of Statistical Analyses: Improving Interpretation and Presentation},
  author={Gary King and Michael Tomz and Jason Wittenberg},
  journal={American Journal of Political Science},
  volume={44},
  number={2},
  pages={347--361},
  year={2000}
}
Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and to express the appropriate degree of certainty about these quantities. In this article, we offer an approach, built on the technique of statistical simulation, to extract the currently overlooked information from any statistical method and to interpret and present it…
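The core of the approach can be sketched in a few lines. The toy example below is only an illustration, not the authors' own code: the data are synthetic, the logit model and the scenario are arbitrary choices, and numpy/statsmodels stand in for whatever estimation routine is actually used. It draws many plausible parameter vectors from their estimated sampling distribution, computes a substantive quantity of interest for each draw, and summarizes the draws.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative data: a binary outcome driven by one covariate.
n = 1000
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(int)
X = sm.add_constant(x)

fit = sm.Logit(y, X).fit(disp=0)

# Simulate parameters from their estimated sampling distribution:
# multivariate normal, mean = point estimates, covariance = Var-hat.
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=1000)

# Quantity of interest: Pr(y = 1) when x is one s.d. above its mean.
scenario = np.array([1.0, x.mean() + x.std()])
probs = 1 / (1 + np.exp(-draws @ scenario))

print(f"Pr(y=1): {probs.mean():.3f}")
print("95% interval:", np.percentile(probs, [2.5, 97.5]).round(3))

The percentiles of the simulated quantity give the interval directly, which is what makes the approach generic: any function of the parameters can be summarized the same way.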
Using Graphs Instead of Tables to Improve the Presentation of Empirical Results in Political Science
When political scientists present empirical results, they are much more likely to use tables than graphs, despite the fact that graphs greatly increase the clarity of presentation and…
Clarify: Software for Interpreting and Presenting Statistical Results
Clarify is a program that uses Monte Carlo simulation to convert the raw output of statistical procedures into results that are of direct interest to researchers, without changing statistical assumptions or requiring new statistical models.
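A hedged Python analogue of the conversion Clarify performs (Clarify itself is Stata software; the data and model below are synthetic placeholders): simulate coefficients, then report a first difference, the change in a predicted probability between two covariate profiles.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Illustrative logit fit; stands in for any "raw output".
x = rng.normal(size=800)
y = (rng.uniform(size=800) < 1 / (1 + np.exp(-(0.2 + 0.9 * x)))).astype(int)
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=1000)

# First difference: Pr(y=1 | x=1) - Pr(y=1 | x=0), with uncertainty.
lo, hi = np.array([1.0, 0.0]), np.array([1.0, 1.0])
fd = 1 / (1 + np.exp(-draws @ hi)) - 1 / (1 + np.exp(-draws @ lo))

print(f"first difference: {fd.mean():.3f}")
print("95% interval:", np.percentile(fd, [2.5, 97.5]).round(3))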
Using Graphs Instead of Tables in Political Science
When political scientists present empirical results, they are much more likely to use tables than graphs, despite the fact that graphs greatly increase the clarity of presentation and make it…
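A minimal matplotlib sketch of the substitution these papers advocate (the variable names, estimates, and standard errors are invented for illustration): a dot-and-whisker plot of coefficients with 95% intervals in place of a table column.

import matplotlib.pyplot as plt

# Hypothetical regression output that would otherwise fill a table.
names = ["Age", "Education", "Income", "Union member"]
coefs = [0.12, 0.45, -0.08, 0.30]
ses = [0.05, 0.10, 0.06, 0.12]

fig, ax = plt.subplots(figsize=(5, 3))
ax.errorbar(coefs, range(len(names)),
            xerr=[1.96 * s for s in ses], fmt="o", capsize=3)
ax.axvline(0, linestyle="--", linewidth=1)  # reference line at zero
ax.set_yticks(range(len(names)))
ax.set_yticklabels(names)
ax.set_xlabel("Coefficient estimate (95% CI)")
fig.tight_layout()
plt.show()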
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
This work generalizes the popular multiple imputation framework by treating missing data problems as a limiting special case of extreme measurement error and corrects for both.
Listwise Deletion is Evil: What to Do About Missing Data in Political Science
This paper adapts an existing algorithm and uses it to implement a general-purpose multiple imputation model for missing data that is between 65 and 726 times faster than the leading method recommended in the statistics literature and is very easy to use.
Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference
A unified approach is proposed that makes it possible for researchers to preprocess data with matching and then apply the best parametric techniques they would have used anyway; this procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
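One way to sketch the preprocess-then-model recipe, assuming propensity-score nearest-neighbor matching as the matching step (a common choice, not necessarily the paper's preferred method) and synthetic data: match each treated unit to its nearest control, then run the parametric model you would have run anyway on the matched sample.

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Synthetic observational data with confounding through x.
n = 1000
x = rng.normal(size=n)
treat = (rng.uniform(size=n) < 1 / (1 + np.exp(-x))).astype(int)
y = 1.0 + 2.0 * treat + 1.5 * x + rng.normal(size=n)

# Step 1: estimate propensity scores, then match 1:1 (with replacement).
lr = LogisticRegression().fit(x.reshape(-1, 1), treat)
ps = lr.predict_proba(x.reshape(-1, 1))[:, 1]
treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = np.concatenate([treated, controls[idx.ravel()]])

# Step 2: the parametric model you would have used anyway, on matched data.
X = sm.add_constant(np.column_stack([treat[matched], x[matched]]))
print(sm.OLS(y[matched], X).fit().params)  # [const, treatment, x]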
Analyzing Incomplete Political Science Data: An Alternative Algorithm for Multiple Imputation
This work adapts an algorithm and uses it to implement a general-purpose multiple imputation model for missing data that is considerably faster and easier to use than the leading method recommended in the statistics literature.
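The combining step that a multiple-imputation routine like this feeds into is short enough to show directly. A sketch of Rubin's rules, with placeholder estimates from m = 5 hypothetical imputed datasets: average the per-imputation estimates, then add the between-imputation variance to the average within-imputation variance.

import numpy as np

# Hypothetical estimates and variances from m = 5 imputed datasets.
est = np.array([0.42, 0.47, 0.40, 0.45, 0.44])
var = np.array([0.010, 0.012, 0.011, 0.010, 0.013])
m = len(est)

q_bar = est.mean()           # combined point estimate
w = var.mean()               # within-imputation variance
b = est.var(ddof=1)          # between-imputation variance
t = w + (1 + 1 / m) * b      # Rubin's total variance formula

print(f"estimate = {q_bar:.3f}, s.e. = {np.sqrt(t):.3f}")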
Improving Present Practices in the Visual Display of Interactions
This work provides simulated examples of the conditions under which visual displays may lead to inappropriate inferences, and introduces open-source software with optimized utilities for analyzing and visualizing interactions in psychology.
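A brief sketch of the kind of display in question (synthetic data and illustrative plotting choices, not the paper's software): predicted values across one variable, drawn separately at each level of the moderator, from an OLS fit with an interaction term.

import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Synthetic data with a genuine x-by-z interaction.
n = 500
x = rng.uniform(-2, 2, size=n)
z = rng.integers(0, 2, size=n)
y = 1 + 0.5 * x + 0.3 * z + 0.8 * x * z + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()

# Plot fitted lines over a grid of x, one per level of z.
grid = np.linspace(-2, 2, 50)
for level, style in [(0, "--"), (1, "-")]:
    Xg = sm.add_constant(np.column_stack(
        [grid, np.full_like(grid, level), grid * level]))
    plt.plot(grid, Xg @ fit.params, style, label=f"z = {level}")
plt.xlabel("x")
plt.ylabel("predicted y")
plt.legend()
plt.show()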
Interpreting Zelig: Everyone’s Statistical Software
This work introduces a new version of Zelig, rewritten using R's Reference Classes, that makes the generalized information matrix test available for all appropriate models and integrates with R libraries for multiple imputation, counterfactual analysis, and causal inference.
Substantive Importance and the Veil of Statistical Significance
Political science is gradually moving away from an exclusive focus on statistical significance and toward an emphasis on the magnitude and importance of effects. While we welcome this…
…
