Many studies of media effects use self-reported news exposure as their key independent variable without establishing its validity. Motivated by anecdotal evidence that people's reports of their own media use can differ considerably from independent assessments, this study systematically examines the accuracy of survey-based self-reports of news exposure. I compare survey estimates to Nielsen estimates, which do not rely on self-reports. Results show severe overreporting of news exposure. Survey estimates of network news exposure follow trends in Nielsen ratings relatively well, but exaggerate exposure by a factor of three on average and as much as eightfold for some demographic groups. It follows that apparent media effects may arise not because of differences in exposure, but because of unknown differences in the accuracy with which exposure is reported.

How much and what kind of news people watch matters, according to the very large literature on media effects. Exposure to political information is thought to influence how much people know about politics, how they feel and think about politics, and whether they participate in politics. Although other factors, such as attention during exposure (Chaffee and Schleuder 1986; Chang and Krosnick 2002), may condition the effect of political messages, the causal chain starts with exposure, and exposure appears to be consequential even when media users pay little attention (Krugman and Hartley 1970; Zukin and Snyder 1984).

The large majority of research on media effects relies on self-reported exposure. Yet there is reason to doubt the validity of these self-reports. Several studies show differences between frequency reports, time-diary entries, and direct observation of media users (Bechtel, Achepohl, and Akers 1972; Robinson 1985; Papper, Holmes, and Popovich 2004).
In one study, for example, 35 percent of the respondents reported listening to NPR, while Arbitron ratings suggested that only 6 percent of the population did so (Price and Zaller 1993). Self-reports of many nonpolitical behaviors are also known to be biased. Americans overreport ATM withdrawals (Burton and Blair 1991) and church attendance (Hadaway, Marler, and Chaves 1993, 1998; Presser and Stinson 1998). Soccer players overreport how often they head the ball (Rutherford and Fernie 2005). And either men overreport the number of their sex partners, or women underreport it (Brown and Sinclair 1999). Considering the prominence of self-reported news exposure in public opinion research, the lack of validation despite these warning signals is troubling.

This article reports a systematic validation of self-reported news exposure against an independent benchmark, audience ratings. I assess the extent of overreporting in self-reported exposure to evening network news by comparing survey estimates to Nielsen estimates. I then determine whether we can treat reporting errors as random (and therefore relatively benign) by comparing overreporting across demographic groups. Finally, I discuss the threat that invalid self-reports pose for survey-based studies that use news exposure as an independent or dependent variable.

MARKUS PRIOR is with the Woodrow Wilson School and the Department of Politics at Princeton University, Princeton, NJ 08544-1013, USA. Address correspondence to Markus Prior; e-mail: firstname.lastname@example.org.

doi:10.1093/poq/nfp002 Advance Access publication March 18, 2009
© The Author 2009. Published by Oxford University Press on behalf of the American Association for Public Opinion Research. All rights reserved. For permissions, please e-mail: email@example.com