
Proposal for the 2016 ANES Time Series
Quantitative Predictions of State and National Election Outcomes

Keywords: election predictions, motivated reasoning, natural experiments, citizen competence, measurement

Can ordinary citizens predict election outcomes? This question is of growing importance in political science for several reasons. First, given declining response rates and the increased use of mobile phones, scholars are increasingly looking to crowd-sourced predictions as a supplement to more traditional forecasting methods (Zukin 2015; Graefe 2014). Second, expectations of election outcomes have been used to gain insight into partisan motivated reasoning (Thibodeau et al. 2015; Daniller et al. 2013; Enos and Hersh 2015). Third, given the increasing use of elections as natural experiments, it is important to gauge how unexpected these outcomes truly are through the use of pre-election forecasts (Snowberg et al. 2007; Gerber and Huber 2010; Caughey and Sekhon 2011).

Despite the importance of measuring citizen election predictions, traditional measures, such as those included on ANES surveys since the 1950s, have been shown to be suboptimal for answering research questions about prediction, motivated reasoning, and the effects of elections. In recent work, Quek and Sances (2015) compare a qualitative measure of election forecasts (who citizens think will win the election) to a quantitative measure (the vote share that citizens believe each candidate will receive). Building on work by Ansolabehere et al. (2013), Quek and Sances argue that asking about vote shares yields a more precise measure of the theoretical quantity of interest. They then show that, as expected, the vote share measure performs better in numerous empirical applications.

We propose that the 2016 ANES Time Series include the quantitative measure of election predictions used by Quek and Sances (2015) in their study of the 2012 U.S. presidential election. The original question used by these scholars asks about vote share in the national election. As an extension of the original study, we also propose to ask about the expected vote shares of the candidates in the respondent's state and county.

We therefore request a total of three survey items, detailed below. Adding these quantitative prediction measures will enhance the ability of ANES data to answer research questions about prediction, motivated reasoning, partisan bias in economic expectations, and the effect of electoral closeness on turnout. Additionally, the presence of these items on the ANES, which also includes measures of media consumption and social networks, will allow us to better assess the determinants of election predictions. Several existing studies have shown that ordinary voters are surprisingly accurate when it comes to predicting elections, and including these items on the ANES can help to explain the causes of this accuracy.

Measurement

Existing ANES Items

Since 1952, up to and including the 2012 Time Series Study, the ANES has asked the following two items (response options may have changed slightly over the years; we refer to the wordings from the 2012 Time Series in this proposal):

Who do you think will be elected President in November?
    <Democratic Candidate Name>
    <Republican Candidate Name>

What about here in <state>? Which candidate for President do you think will carry this state?
    <Democratic Candidate Name>
    <Republican Candidate Name>

While these measures have yielded a powerful time series of election predictions that has already been employed by researchers (e.g., Rothschild and Wolfers 2013; Graefe 2014), they are limited in that they ask respondents only who will win, and not by how much. In the framework of Ansolabehere et al. (2013), the existing measure is a qualitative measure of a quantitative theoretical construct. Existing work on election predictions highlights the problems this causes. For instance, to translate the responses into vote shares, researchers must impose additional functional form assumptions to make the binary response continuous (Lewis-Beck and Tien 1999). Similarly, some scholars have sought to use over-confidence in election predictions as evidence of partisan motivated reasoning (Thibodeau et al. 2015; Daniller et al. 2013). Yet the binary prediction measure is difficult to interpret in this regard: 90% of Republicans may believe their candidate will win, but their over-confidence would be better measured if we knew by how much they think their candidate will win.

Proposed Items for the 2016 Time Series

In recent work, Quek and Sances (2015) report the results of an online survey fielded during the 2012 presidential election and administered by the firm Survey Sampling International. On this survey, Quek and Sances asked not only about expectations of who would win, but also about expectations of vote share. We propose to include the following variants of this question on the 2016 survey:

1. Thinking about only the votes cast for the two major parties, what percentage of the vote do you think <Democratic Candidate Name> and <Republican Candidate Name> will each receive in the NATIONAL VOTE?
    <Democratic Candidate Name> %
    <Republican Candidate Name> %

2. Thinking about only the votes cast for the two major parties, what percentage of the vote do you think <Democratic Candidate Name> and <Republican Candidate Name> will each receive in YOUR STATE?
    <Democratic Candidate Name> %
    <Republican Candidate Name> %

3. Thinking about only the votes cast for the two major parties, what percentage of the vote do you think <Democratic Candidate Name> and <Republican Candidate Name> will each receive in YOUR COUNTY?
    <Democratic Candidate Name> %
    <Republican Candidate Name> %

As in the 2012 study, respondents' two percentage answers will be constrained to sum to 100.

Validation of Proposed Measures

In this section we present evidence for the validity of the vote share prediction measure from Quek and Sances (2015). The data come from an online sample of about 2,700 U.S. respondents surveyed in October 2012. Figure 1 shows the distribution of vote share predictions, first for the entire sample and then by partisan subgroup. Among the full sample, predictions are approximately normally distributed around the actual two-party vote share of 51.9% for Barack Obama; on average, respondents were off by only about one percentage point. The remaining panels of the figure show that prediction bias was lowest among independents, who overestimated Obama's share by about 1.5 points, compared with roughly six points of overestimation among Democrats and five points of underestimation among Republicans. (By way of comparison, the expert forecasts published in the October 2012 issue of PS: Political Science and Politics had an average error of 1.7 percentage points (Campbell 2012), and the October 31, 2012, Iowa Electronic Market had an error of 1.4 percentage points.)

Figure 1: [distribution of predicted Obama vote share, for the full sample and by partisan subgroup]

Figure 2: [average Obama vote share implied by the vote intention, binary prediction, and vote share prediction measures]
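To make the validation exercise concrete, the following is a minimal sketch of the error and bias calculations summarized above and in Figure 1. The file and column names (pred_obama_share, party_id) are hypothetical; this illustrates the approach and is not the Quek and Sances (2015) replication code.

    # Illustrative sketch only: file and column names are hypothetical.
    import pandas as pd

    ACTUAL_OBAMA_SHARE = 51.9  # Obama's actual two-party vote share in 2012, in percent

    # One row per respondent:
    #   pred_obama_share: predicted two-party Obama vote share (0-100)
    #   party_id: "Democrat", "Independent", or "Republican"
    df = pd.read_csv("predictions_2012.csv")

    # Signed bias (positive = overestimates Obama) and absolute error, in percentage points
    df["bias"] = df["pred_obama_share"] - ACTUAL_OBAMA_SHARE
    df["abs_error"] = df["bias"].abs()

    print("Mean absolute error, full sample:", round(df["abs_error"].mean(), 2))
    print("Mean signed bias by party:")
    print(df.groupby("party_id")["bias"].mean().round(2))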

Quek and Sances (2015) also show that the vote share prediction performs better than alternative measures. Figure 2 compares the average Obama vote share obtained using three questions: whom respondents intended to vote for, who respondents predicted would win, and the vote share prediction question. The figure shows that while the binary prediction measure yields more accurate forecasts than the intention measure, the vote share measure is both more accurate and more precise, as evidenced by its tighter confidence interval.

The vote share measure not only performs better in terms of forecasting elections, but is also better suited as a measure of electoral surprise. In a study of the effect of partisanship on economic perceptions, Gerber and Huber (2010) show that election outcomes change how Republicans and Democrats evaluate the future economy: if a voter's favored candidate wins (loses), that voter expects the economy to do better (worse) in the future. Quek and Sances (2015) show that prediction measures can be used to sharpen this analysis. They compare how economic perceptions changed between the pre- and post-election waves for Democrats and Republicans, as well as how this partisan differential varies with electoral expectations. As shown in Table 1 below, while Republicans became more negative about the economy after the election (columns 1 and 2), this effect was larger for Republicans who were surprised, that is, who believed Obama would lose (columns 3 and 4). The interaction effects indicate that the most surprised Republicans shifted about 0.6 points (on a five-point scale) more in their economic expectations than the least surprised. The final two columns show that the vote share prediction measure brings this effect into sharper relief: the interactions are more than three times as large (all predictors are coded so that zero is the sample minimum and one is the sample maximum).

Table 1: [changes in economic expectations by party, electoral expectations, and their interaction]
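As a concrete illustration of the kind of interaction specification just described, the sketch below regresses the change in economic expectations on a Republican indicator, a rescaled pre-election prediction, and their interaction. The file and variable names (delta_econ, republican, pred_obama_share) are hypothetical, and this is only a schematic stand-in for the models reported in Table 1.

    # Illustrative sketch only: file and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per panel respondent:
    #   delta_econ: post-election minus pre-election economic expectations (five-point scale)
    #   republican: 1 if the respondent identifies as Republican, 0 otherwise
    #   pred_obama_share: pre-election predicted two-party Obama vote share (0-100)
    df = pd.read_csv("panel_2012.csv")

    # Rescale the prediction so 0 is the sample minimum and 1 the sample maximum,
    # mirroring the coding described in the text.
    p = df["pred_obama_share"]
    df["pred_scaled"] = (p - p.min()) / (p.max() - p.min())

    # The republican:pred_scaled coefficient captures how much the post-election shift
    # among Republicans varies with how well they expected Obama to do.
    model = smf.ols("delta_econ ~ republican * pred_scaled", data=df).fit()
    print(model.summary())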

Justification for Proposed Items

Based on the results in Quek and Sances (2015), we believe the inclusion of the national vote share prediction (question 1 above) is warranted, given the widespread use of election predictions to answer numerous research questions and the demonstrated superiority of the vote share measure over the traditional binary measure. Further, including the vote share measure adds information without losing any: researchers who are interested in who respondents thought would win can easily transform the vote share measure into a binary measure, but the reverse is not possible.

In addition to the national vote share prediction, we are requesting questions that ask about respondents' expected vote shares in their state (question 2) and county (question 3). We have several reasons for adding these items.

First, the ANES has paired its traditional national election prediction measure with a question asking who respondents believe will win in their state. While existing research has compared the binary and continuous measures using predictions of national elections, no study has made this comparison at the state level.

Second, asking about predictions at more refined levels of geography will help researchers address an open question: why do respondents do so well at predicting elections (Rothschild and Wolfers 2013; Quek and Sances 2015)? One explanation, proposed by Rothschild and Wolfers, is that respondents pool information from their immediate social networks when answering prediction questions. While plausible, this explanation has yet to be tested; indeed, an equally plausible but also untested explanation is that respondents form their predictions as a weighted average of their own vote intent and what they have heard about polls in the media. If respondents are better at predicting state and county outcomes (levels of geography where they know more people personally, but where polls are less likely to be publicized), then this is informative about the sources of accuracy in election predictions.

Third, asking about vote share predictions at lower levels of geography is informative for studying the effect of perceived pivotality on turnout (Aldrich 1976; Enos and Fowler 2014). According to the calculus of voting (Riker and Ordeshook 1968), turnout should be increasing in the perceived closeness of the outcome. While the ANES has asked about perceived closeness in the past, it has done so with a qualitative measure offering just two response options (for instance, in the 2012 Time Series the item was worded "Do you think the Presidential race will be CLOSE here in <state> or will one candidate win by quite a bit?", with response options "Will be close" and "Win by quite a bit"). By employing a quantitative measure of closeness, researchers will be better able to evaluate whether perceptions of electoral closeness matter for voter turnout; a sketch of one such analysis follows at the end of this section.

Finally, the longitudinal nature of the ANES Time Series will allow researchers to test how respondents adjust their expectations in response to campaign events and, in turn, how shifting expectations regarding candidate viability affect vote choice.
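To illustrate the closeness analysis referenced above: in the Riker and Ordeshook (1968) calculus of voting, the expected utility of voting is R = pB - C + D, where p is the perceived probability of casting a decisive vote, B the benefit of one's preferred candidate winning, C the cost of voting, and D the duty or expressive term, so turnout should rise as the perceived margin narrows. The sketch below shows one way the proposed state-level item could be used to examine this; the file and variable names (voted, pred_dem_state) are hypothetical, and the bivariate specification is purely illustrative.

    # Illustrative sketch only: file and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per respondent:
    #   pred_dem_state: predicted two-party Democratic vote share in the respondent's state (0-100)
    #   voted: 1 if the respondent reported (or was validated as) voting, 0 otherwise
    df = pd.read_csv("anes_2016.csv")

    # Perceived margin in the respondent's state, in points; smaller means a closer race.
    df["pred_margin"] = (df["pred_dem_state"] - 50).abs()

    # The calculus of voting implies turnout falls as the perceived margin grows,
    # so the coefficient on pred_margin should be negative.
    model = smf.logit("voted ~ pred_margin", data=df).fit()
    print(model.summary())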

References

Aldrich, John H. 1976. Some problems in testing two rational models of participation. American Journal of Political Science 20(4): 713-733.
Ansolabehere, Stephen, Marc Meredith, and Erik Snowberg. 2013. Asking about numbers: Why and how. Political Analysis 21(1): 48-69.
Campbell, James E. 2012. Forecasting the 2012 American national elections. PS: Political Science & Politics 45(4): 610-613.
Caughey, Devin, and Jasjeet S. Sekhon. 2011. Elections and the regression discontinuity design: Lessons from close US House races, 1942-2008. Political Analysis 19(4): 385-408.
Daniller, Andrew M., Laura Silver, and Devra Coren Moehler. 2013. Calling it wrong: Partisan media effects on electoral expectations and institutional trust. Paper presented at the 2013 American Political Science Association Annual Meeting. Accessed January 31, 2016 via http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2301154.
Enos, Ryan D., and Anthony Fowler. 2014. Pivotality and turnout: Evidence from a field experiment in the aftermath of a tied election. Political Science Research and Methods 2(2): 309-319.
Enos, Ryan D., and Eitan D. Hersh. 2015. Campaign perceptions of electoral closeness: Uncertainty, fear, and overconfidence. British Journal of Political Science, in press.
Gerber, Alan S., and Gregory A. Huber. 2010. Partisanship, political control, and economic assessments. American Journal of Political Science 54(1): 153-173.
Graefe, Andreas. 2014. Accuracy of vote expectation surveys in forecasting elections. Public Opinion Quarterly 78: 204-232.
Lewis-Beck, Michael S., and Charles Tien. 1999. Voters as forecasters: A micromodel of election prediction. International Journal of Forecasting 15(2): 175-184.
Quek, Kai, and Michael W. Sances. 2015. Closeness counts: Increasing precision and reducing errors in mass election predictions. Political Analysis 23(4): 518-533.
Riker, William H., and Peter C. Ordeshook. 1968. A theory of the calculus of voting. American Political Science Review 62(1): 25-42.
Rothschild, David M., and Justin Wolfers. 2013. Forecasting elections: Voter intentions versus expectations. Working paper, University of Michigan Department of Economics. Accessed January 31, 2016 via http://users.nber.org/~jwolfers/papers/voterexpectations.pdf.
Snowberg, Erik, Justin Wolfers, and Eric Zitzewitz. 2007. Partisan impacts on the economy: Evidence from prediction markets and close elections. Quarterly Journal of Economics 122(2): 807-829.
Thibodeau, Paul, Matthew M. Peebles, Daniel J. Grodner, and Frank H. Durgin. 2015. The wished-for always wins until the winner was inevitable all along: Motivated reasoning and belief bias regulate emotion during elections. Political Psychology 36: 431-448.
Zukin, Cliff. 2015. What's the matter with polling? New York Times (June 21). Accessed January 31, 2016 via http://www.nytimes.com/2015/06/21/opinion/sunday/whats-the-matter-with-polling.html.