Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self‐Administered Surveys

@article{Berinsky2014SeparatingTS,
  title={Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self‐Administered Surveys},
  author={Adam J. Berinsky and Michele F. Margolis and Michael W. Sances},
  journal={American Journal of Political Science},
  year={2014},
  volume={58},
  pages={739-753}
}
Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss “Screeners” as a potential solution to this problem. We first demonstrate Screeners’ power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple …
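As a rough illustration of the logic, the sketch below (Python, with hypothetical column names `screener_pass`, `policy_a`, and `policy_b`) compares the correlation between two attitude items in the full sample against the Screener-passing subsample; under the paper's argument, the passers-only correlation should be stronger. This is a minimal sketch of where the flag enters the workflow, not a reproduction of the paper's analysis.

```python
# A minimal sketch, not the paper's analysis. Assumes a pandas DataFrame
# with hypothetical columns: "screener_pass" (1 if the respondent followed
# the Screener's embedded instruction) and two attitude items, "policy_a"
# and "policy_b", that theory expects to correlate.
import pandas as pd

def compare_correlations(df: pd.DataFrame) -> None:
    """Correlate two items in the full sample vs. Screener passers only."""
    passers = df[df["screener_pass"] == 1]
    r_full = df["policy_a"].corr(df["policy_b"])
    r_attentive = passers["policy_a"].corr(passers["policy_b"])
    # Inattentive respondents add noise, so the passers-only correlation
    # should be stronger if the Screener is doing its job.
    print(f"full sample r = {r_full:.3f}; passers only r = {r_attentive:.3f}")

# Per the paper's recommendation, multiple Screeners can be combined into
# an attentiveness score rather than a single pass/fail cut, e.g.:
# df["attentiveness"] = df[["screener_1", "screener_2"]].sum(axis=1)
```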

Citations

Using screeners to measure respondent attention on self-administered surveys: Which items and how many?
Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to …
Can we turn shirkers into workers?
Survey researchers increasingly employ attention checks to identify inattentive respondents and reduce noise. Once inattentive respondents are identified, however, researchers must decide …
Paying Attention to Inattentive Survey Respondents
Does attentiveness matter in survey responses? Do more attentive survey participants give higher quality responses? Using data from a recent online survey that identified inattentive respondents …
Attention Check Items and Instructions in Online Surveys with Incentivized and Non-Incentivized Samples: Boon or Bane for Data Quality?
In this paper, we examine rates of careless responding and reactions to detection methods (i.e., attention check items and instructions) in an experimental setting based on two different samples. …
Using Instructed Response Items as Attention Checks in Web Surveys: Properties and Implementation
This article provides evidence that IRIs identify respondents who show an elevated use of straightlining, speeding, item nonresponse, inconsistent answers, and implausible statements throughout a survey, and suggests that respondents’ inattentiveness partially changes as the context in which they complete the survey changes.
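For concreteness, here is a hedged sketch of two of the behaviors that summary names, straightlining and speeding. The column names ("q1" through "q10", "duration_sec") and the 120-second floor are illustrative assumptions, not values from the article.

```python
# Illustrative flags only; column names and thresholds are assumptions.
import pandas as pd

GRID_ITEMS = [f"q{i}" for i in range(1, 11)]  # a hypothetical grid battery

def flag_careless(df: pd.DataFrame, min_seconds: float = 120.0) -> pd.DataFrame:
    out = df.copy()
    # Straightlining: identical answers across every item in a grid.
    out["straightliner"] = out[GRID_ITEMS].nunique(axis=1).eq(1)
    # Speeding: total completion time below a survey-specific floor.
    out["speeder"] = out["duration_sec"] < min_seconds
    return out
```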
Do Attempts to Improve Respondent Attention Increase Social Desirability Bias?
In response to concerns about survey satisficing, social scientists have started to include “instructional manipulation checks” (IMCs) in lab- and survey-based questionnaires. IMCs are effective at …
Did You Miss Something? Inattentive Respondents in Discrete Choice Experiments
A modeling framework that simultaneously addresses preference, scale, and attribute-processing heterogeneity is developed; once attribute non-attendance is accounted for, scale differences disappear, suggesting that the type of heterogeneity detected in a model could be the result of un-modeled heterogeneity of a different kind.
‘Short is Better’. Evaluating the Attentiveness of Online Respondents Through Screener Questions in a Real Survey Environment
In online surveys, researcher control over respondents is almost absent; for this reason, the use of screener questions, or “screeners,” has been suggested as a way to evaluate respondent attention. Screeners ask …
Accounting for Noncompliance in Survey Experiments
Political scientists commonly use survey experiments, often conducted online, to study the attitudes of the mass public. In these experiments, compensation is usually small and researcher …
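A common way to implement this idea is to treat attention as compliance and estimate a complier average causal effect (CACE) by instrumental variables; whether the article uses exactly this estimator is an assumption here. The standard identity:

```latex
\widehat{\mathrm{CACE}}
  = \frac{\widehat{\mathrm{ITT}}}{\widehat{\Pr}(\text{complier})}
  = \frac{\bar{Y}_{Z=1} - \bar{Y}_{Z=0}}{\bar{D}_{Z=1} - \bar{D}_{Z=0}}
```

where Z is random assignment, D indicates actual receipt of (attention to) the treatment, and Y is the outcome.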

References

Showing 1-10 of 32 references.
Do Attempts to Improve Respondent Attention Increase Social Desirability Bias
In response to concerns about survey satisficing, social scientists have started to include “instructional manipulation checks” (IMCs) in lab- and survey-based questionnaires. IMCs are effective at …
Sensitive questions in surveys.
The article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels and differences from the psychological literature on social desirability.
Completion Time and Response Order Effects in Web Surveys
The use of the World Wide Web to conduct surveys has grown rapidly over the past decade, raising concerns regarding data quality, questionnaire design, and sample representativeness. This research …
Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk
It is shown that respondents recruited in this manner are often more representative of the U.S. population than in-person convenience samples but less representative than subjects in Internet-based panels or national probability samples.
Response Time Effort: A New Measure of Examinee Motivation in Computer-Based Tests
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This …
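A sketch of a response-time-based effort index of the kind the title describes: the share of items on which response time exceeds a "solution behavior" threshold. The item names and 3-second thresholds below are illustrative assumptions; the article's exact definition may differ.

```python
# Illustrative only: item names and thresholds are assumptions.
from typing import Dict

def response_time_effort(rts: Dict[str, float],
                         thresholds: Dict[str, float]) -> float:
    """Share of items answered with at least the solution-behavior time."""
    flags = [rts[item] >= thresholds[item] for item in thresholds]
    return sum(flags) / len(flags)

# A respondent who rapid-guessed the third item:
rte = response_time_effort(
    rts={"item1": 12.0, "item2": 8.5, "item3": 0.9},
    thresholds={"item1": 3.0, "item2": 3.0, "item3": 3.0},
)
print(f"RTE = {rte:.2f}")  # 0.67; low values suggest unmotivated responding
```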
Detecting and Deterring Insufficient Effort Responding to Surveys
Responses provided by unmotivated survey participants in a careless, haphazard, or random fashion can threaten the quality of data in psychological and organizational research. The purpose of …
The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint, and Issue Voting
A venerable supposition of American survey research is that the vast majority of voters have incoherent and unstable preferences about political issues, which in turn have little impact on vote …
Identifying careless responses in survey data.
Recommendations include using identified rather than anonymous responses, incorporating instructed response items before data collection, and computing consistency indices and multivariate outlier analysis to ensure high-quality data.
An Application of Item Response Time: The Effort‐Moderated IRT Model
The validity of inferences based on achievement test scores is dependent on the amount of effort that examinees put forth while taking the test. With low-stakes tests, for which this problem is …
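The effort-moderated model is commonly written as a mixture of a standard item response function for solution behavior and chance-level responding for rapid guesses; a sketch of that general form (notation mine, details may differ from the article):

```latex
P(X_{ij} = 1 \mid \theta_i)
  = SB_{ij}\, P_j(\theta_i) + \bigl(1 - SB_{ij}\bigr)\, \frac{1}{m_j}
```

where SB_ij = 1 when examinee i's response time on item j exceeds a solution-behavior threshold, P_j(theta_i) is a standard (e.g., 3PL) item response function, and m_j is the number of response options.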
Short-Term Communication Effects or Longstanding Dispositions? The Public’s Response to the Financial Crisis of 2008
Economic interests and party identification are two key, long-standing factors that shape people’s attitudes on government policy. Recent research has increasingly focused on how short-term …