Corpus ID: 226222287

Bias-Corrected Crosswise Estimators for Sensitive Inquiries

Yuki Atsusaka and Randolph T. Stevenson
arXiv: Methodology
The crosswise model is an increasingly popular survey technique to elicit candid answers from respondents on sensitive questions. We demonstrate, however, that the conventional crosswise estimator for the population prevalence of sensitive attributes is biased toward 0.5 in the presence of inattentive respondents who randomly choose their answers under this design. We propose a simple design-based bias correction procedure and show that our bias-corrected estimator can be easily implemented… 
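The logic described in the abstract can be illustrated with a short numerical sketch. The conventional crosswise estimator is π̂ = (λ̂ + p − 1)/(2p − 1), where λ̂ is the observed share of "both statements true or both false" answers and p is the known probability that the non-sensitive statement is true. If a share γ of respondents answer at random, the observed λ̂ is a mixture pulled toward 0.5, and inverting that mixture before estimation removes the bias. The function names, the assumption that γ is known, and the illustrative numbers below are hypothetical and not taken from the paper's replication materials:

```python
def crosswise_naive(lam_hat, p):
    """Conventional crosswise estimator.
    lam_hat: observed share of 'both true or both false' answers
    p: known probability the non-sensitive statement is true (p != 0.5)
    """
    return (lam_hat + p - 1) / (2 * p - 1)

def crosswise_corrected(lam_hat, p, gamma_hat):
    """Bias-corrected estimator: a share gamma_hat of respondents answer
    at random, so E[lam_hat] = (1 - gamma) * lam_true + gamma / 2.
    Invert this mixture, then apply the conventional estimator."""
    lam_true = (lam_hat - gamma_hat / 2) / (1 - gamma_hat)
    return crosswise_naive(lam_true, p)

# Illustration: true prevalence 0.10, p = 0.15, 25% inattentive respondents.
p, pi_true, gamma = 0.15, 0.10, 0.25
lam_true = pi_true * p + (1 - pi_true) * (1 - p)      # attentive answer rate
lam_obs = (1 - gamma) * lam_true + gamma / 2          # observed, contaminated rate
print(round(crosswise_naive(lam_obs, p), 3))          # → 0.2 (pulled toward 0.5)
print(round(crosswise_corrected(lam_obs, p, gamma), 3))  # → 0.1 (truth recovered)
```

The naive estimate (0.2) lies between the truth (0.10) and 0.5, matching the direction of bias the abstract describes; the corrected estimator recovers the true prevalence exactly here because γ is known rather than estimated.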
1 Citation
Functionality of the Crosswise Model for Assessing Sensitive or Transgressive Behavior: A Systematic Review and Meta-Analysis
This study systematically reviewed and meta-analyzed empirical applications of the crosswise model (CM), addressing a gap in the quality assessment of indirect estimation models. It indicates that CM outperforms direct questioning (DQ) on the "more is better" validation criterion, and increasingly so with higher behavior sensitivity.


References

Sensitive Question Techniques and Careless Responding: Adjusting the Crosswise Model for Random Answers
Methods to adjust the crosswise model for self-reported random answers are developed and results from an exploratory online survey show that fewer respondents report random answers than might be expected given unadjusted results.
Statistical Analysis of List Experiments
The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive …
When to Worry about Sensitivity Bias: A Social Reference Theory and Evidence from 30 Years of List Experiments
Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias.
Developing Standards for Post-Hoc Weighting in Population-Based Survey Experiments
It is argued that all survey experiments should report the sample average treatment effect (SATE). Researchers seeking to generalize to a broader population can weight to estimate the population average treatment effect (PATE), but should discuss the construction and application of weights in a detailed and transparent manner, given the possibility that weighting can introduce bias.
An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions
When studying sensitive issues, including corruption, prejudice, and sexual behavior, researchers have increasingly relied upon indirect questioning techniques to mitigate such known problems of …
List Experiments with Measurement Error
This article demonstrates that the nonlinear least squares regression (NLSreg) estimator proposed in Imai (2011) is robust to nonstrategic measurement error, and proposes new estimators that preserve the statistical efficiency of the maximum likelihood regression (MLreg) estimator while improving robustness.
Using the Predicted Responses from List Experiments as Explanatory Variables in Regression Models
The list experiment, also known as the item count technique, is becoming increasingly popular as a survey methodology for eliciting truthful responses to sensitive questions. Recently, multivariate …
Uncovering a Blind Spot in Sensitive Question Research: False Positives Undermine the Crosswise-Model RRT
A comparative validation design is presented that can detect false positives without requiring an individual-level validation criterion, which is often unavailable.
Design and Analysis of the Randomized Response Technique
This article reviews standard designs available to applied researchers, develops various multivariate regression techniques for substantive analyses, proposes power analyses to help improve research designs, and presents new robust designs that are based on less stringent assumptions than those of the standard designs.