We present the results of an experiment examining the extent to which individuals will tolerate delays when told that such delays serve a security purpose. In our experiment, we asked 800 Amazon Mechanical Turk users to count the total number of times a certain term was repeated in a multipage document. The task was designed to be conducive to cheating. We assigned subjects to eight between-subjects conditions: one offered a concrete security reason (virus-scanning) for the delay, another offered only a vague security explanation, and the remaining conditions offered either non-security explanations for the delay or, in the control condition, no delay at all. We found that subjects were significantly more likely to cheat or abandon the task when provided with non-security explanations or a vague security explanation for the delay. However, when subjects were provided with more explanation of the threat model and the protection afforded by the delay, they were no more likely to cheat than subjects in the control condition, who faced no such delay. Our results thus contribute to the nascent literature on soft paternalistic solutions to security and privacy problems by suggesting that, when security mitigations cannot be made "free" for users, designers may encourage compliant behavior by intentionally drawing attention to the mitigation itself.