Measuring Total Survey Error: A Dynamic Factor Model Approach


Jee-Kwang Park, Assistant Professor, Nazarbayev University, jee.park.wws@gmail.com
Adam G. Hughes, PhD Candidate, University of Virginia, ahughes@virginia.edu

August 5, 2015

Abstract

Opinion polls are often inaccurate, but the relative effects of sampling and non-sampling sources of error on poll accuracy remain difficult to discern. However, elections provide a unique opportunity to measure the composition of survey error because underlying, true public opinion is revealed on election day. Using dynamic factor analysis, we examine the survey error structure of 2012 presidential pre-election polls conducted by eleven major polling organizations and estimate the size of sampling and non-sampling errors. We find that a large portion of total survey error is attributable to non-sampling error: across the eleven pre-election polls we examine, bias accounts for about two-thirds of survey error in terms of mean square error. We also show that sources of non-sampling error have a large effect on the variance of polling estimates, which does not decrease with increasing sample size. These results confirm and extend the theoretical arguments of the Total Survey Error (TSE) paradigm.

Word Count: 8,355

We thank Bob Shapiro, Michael McDonald, Mark Blumenthal, and audiences at the Princeton Methodology Workshop, MPSA 2014, and the PSA Quantitative Methods Network Conference for their comments and suggestions. We accept responsibility for any errors.

2 Recent high profile failures in polling have drawn renewed attention to the concept of survey error. Results from the 2015 British general election fell outside of the 95% confidence interval for almost all pre-election poll predictions. In the 2015 Greek bailout referendum, polls predicted a close call, but Nays outnumbered Yeas by 22% instead of the predicted 3 to 4%. As Cliff Zukin, a past president of AAPOR, noted in a New York Times editorial, election polling is in near crisis (2015). When trying to explain these failures, poll observers often focus on the use of unrepresentative nonprobability samples, increasing rates of nonresponse, and individuals widespread adoption of cellular telephones. However, inaccurate predictions are not new phenomena. In the 1992 British general election, polls missed the actual election results by 9% on average, an even larger error than in 2015 (Jowell 1993). In this paper, we explain why polls sometimes miss the mark, far beyond what published margins of error suggest. We provide a novel empirical strategy that estimates the size of non-sampling sources of error. Our central claim is that non-sampling errors outweigh sampling errors in the composition of total survey error; we evaluate this claim by analyzing 2012 U.S. presidential pre-election polls. When explaining poor poll performance, commentators and scholars often focus on nonsampling sources of error, including under-coverage, non-response bias, or insincere respondents (for example Jowell 1993; Crewe 1992; Cowling 2015). But when assessing the accuracy of poll estimates, the same experts usually refer to polls published margins of error. The widespread tendency to focus on sampling error ignores the fact that survey error is composed of non-sampling as well as sampling error. Indeed, survey methodologists have reached a consensus on the fact that much of survey error is attributable to various non-sampling errors, including measurement, under- or over-coverage, non-response, and data processing error (Biemer 2010; Groves et al. 2004; Groves and Lyberg 2010; Weisberg 2005). In extreme cases, systematic error (bias) from non-sampling error sources can be 20 times larger than 2

3 random error, i.e. variance (Parten 1950). Non-sampling error also substantively increases the variance of poll estimates. For example, sample weighting reduces bias at the expense of increased variance (Kalton and Kasprzyk 1986). Evidence for between-interviewer variance is well-documented (Groves 2004, 364; Biemer and Trewin 1997; Biemer 1991). Attempts to correct non-response bias with weights, including non-response weighting, generally results in increased variance (Kish 1992). In the context of pre-election polls, likely voter models inflate the variance of poll estimates: results from the Gallup likely voter model in 2000 pre-election polls were more volatile than those from the registered voter model (Erikson, Panagopoulos, and Wlezien 2004). Finally, interviewer and mode effects (Kish 1962, Groves and Magilavy 1986; Beland and St-Pierre 2008; Hansen, Hurwitz, and Bershad 1960) and non-response errors (Brehm 1993; Berinsky 2004) increase both bias and the variance of poll estimates. As a result, the assumption that poll variance is simply a function of sample size and estimated candidate preferences is both naïve and misguided. The Total Survey Error (TSE) paradigm, which has emerged as the mainstream conceptual framework for understanding survey error, emphasizes the importance of controlling various sources of non-sampling error in polling (Weisberg 2005). But despite TSE s intuitive appeal and theoretical influence, there is little empirical evidence of non-sampling errors relative contribution to total error: existing accounts are based on survey experiments with non-representative samples (Groves and Magilavy 1984; Assael and Keon 1982). In this paper, we estimate the size of survey error for national 2012 pre-election polls and decompose our results into bias and variance. Our empirical results provide strong support for the TSE paradigm: we show that about two-thirds of survey error in 2012 pre-election polls can be explained by bias: namely, non-sampling error. In addition, our results indicate that the variance of poll estimates is not correlated with sample size. This indicates the variance of poll estimates is also substantively affected by non-sampling error. To the best of our knowledge, this study is the first to provide empirical validation of the TSE arguments 3

4 with pre-election polls conducted at the national level. 1 At the same time, since pre-election polls are used as a yardstick in judging the performance of polling organizations and draw concentrated attention from the mass media and the public (Iyengar, Norpoth, and Hahn 2004), our study provides important conclusions for scholars of public opinion, elections, and survey methodology. We also introduce a new statistical approach, the dynamic factor model (DFM), to political science community. To obtain a reliable estimate of survey error and account for polling firms herding behavior, we include non-final election polls in our analysis. Our model uses information from polls conducted throughout the campaign to estimate latent candidate support, which we then compare with poll results. Existing approaches to estimating underlying opinion, including LOWESS and Kalman filtering, rely entirely upon reported sampling error and ignore the effects of non-sampling error on poll variance. By contrast, the DFM measures both the size of bias and variance from poll data without assuming that the variance of poll estimates is pre-determined by sample size. This is possible because pre-election polls collectively constitute a panel data set in which each individual polling firm provides a time series of candidate support. In this panel, each poll is serially correlated and different polls are cross-sectionally correlated. By taking advantage of this double correlation structure in the panel, the DFM estimates the underlying true trend without using sample size as a prior. Instead, the DFM uses Kalman filtering in tandem with MLE or MCMC to estimate latent true values directly from the panel data. Since our sample includes multiple polls from the same polling organizations over time, we generate a distribution of survey errors for each polling firm. We measure the mean and variance of this distribution, which corresponds with 1 Some studies measure the size of a specific type of non-sampling error, including nonresponse error (Brehm 1993, Berinsky 2004, Alvarez and Brehm 2002), interviewer effects (Kish 1962, Groves and Magilavy 1986), or interview mode effects (Beland and St-Pierre 2008, Villanueva 2001). 4

5 the bias and variance of individual poll estimates. Measuring Survey Error in Pre-Election Polls Researchers studying pre-election polls often assess survey error by comparing final polls with actual election results (Crespi 1988; Erikson and Sigelman 1995; Moore and Saad 1997; Panagopoulos 2009; Martin, Traugott, and Kennedy 2005; Traugott 2001; Traugott and Wlezien 2009). These studies generally show that pre-election polls are less accurate than then reported margins of error suggest. For example, Buchanan s (1986) extensive collection of 155 final polls in 68 national elections shows that actual survey error in final polls is twice what the margin of error would indicate. Although this approach is somewhat useful in studying polling accuracy, its usefulness is limited: it cannot discriminate between bias and variance within survey error, and since this comparison involves only final polls, it is not a reliable measure of poll accuracy. Some final polls may be closer to actual election results than others purely by chance. In addition, this approach cannot detect herding behavior: a polling organization can artificially improve its accuracy by adjusting estimates to conform with other polling firms average results before releasing its final poll. For these reasons, other approaches examining size of survey error of polls incorporate non-final polls into the analysis. Here, analysts first aggregate polls conducted during the last several months before the election and then compare the firm-wide average of the polls with the actual results on Election Day (Lau 1994; DeSart and Holbrook 2003; Traugott 2001; reviewed in Pasek 2015). This approach features a fundamental problem: a poll conducted a few weeks before the election is supposed to represent public support on that particular polling day and not on election day. The typical question wording for pre-election polls asks respondents who they would vote for if the election were held today, rather than asking for a forecast of future voting behavior. As a result, these results ought to be compared with 5

6 the public support on the day the poll was in the field, not with election day results. Therefore, in order to use non-final polls to assess overall poll accuracy, we first estimate unobserved true public opinion on each polling day prior to election day. Several approaches with varying levels of methodological sophistication have been proposed for this task. The first, often called a poll of polls, averages across polls to estimate the underlying population parameter: candidate support. For example, Lau (1994) classifies the last month polls of the 1992 presidential election into four sub-groups by polling week. After excluding the poll that is being evaluated, Lau calculates the average of all other polls, weighted by sample size, for the same week and uses the average as an estimate of latent candidate support. Here, survey error is the difference between each poll and the average of all others. The more distant a poll is from the average, the less accurate it is considered to be. This averaging method lacks both a theoretical and empirical justification for why a poll of polls ought to be more accurate than any individual poll. By implicitly assuming that survey error is interchangeable with sampling error, this method ignores non-sampling sources of survey error. Non-sampling error is unlikely to decrease even as sample size increases (Groves et al. 2004, 13), and when non-sampling error is much larger than sampling error, any gains from increased sample size from aggregation will be marginal. And indeed, the empirical analysis of pre-election polls consistently finds that sample size is not significantly related to polling accuracy (Pickup et al. 2011, Arzheimer and Evans 2014). In his extensive examination of pre-election polling accuracy, Crespi (1988) finds that sample size has trivial effects on polling accuracy: Once basic sample size requirements are met, increasing the sample size may make less of a contribution to poll accuracy than other aspects of poll methodology (64). Furthermore, pre-election polls are often collectively biased: as we demonstrate below, most presidential polls in 2012 were biased in favor of Romney. In 1996, polls broadly overestimated public support for Bill Clinton (Ladd 1996; Mitofsky 1998; Silver 2012a). In 2000, nineteen final polls were released by major polling firms: fourteen predicted 6

7 victory for Bush, three a tie, and only two victory for Gore (Traugott 2001). Seventeen out of twenty-three final polls in the 2008 presidential election over-estimated Obama s lead while only three final polls overestimated support for McCain (Panagopoulos 2009). In the 2008 New Hampshire Democratic primary, every poll predicted that Barack Obama would defeat Hillary Clinton, by 1 to 13 percentage points. However, Clinton won the actual vote by 3% (Traugott and Wlezien 2009). 2 When polls are collectively biased, the poll of polls is also biased. To make matters worse, a poll that is close to the true population parameter but distant from other polls would appear to be rather biased and inaccurate than polls close to the (incorrect) average. Local regression, or LOESS (Clinton and Rogers 2013), provides a similar but more computationally sophisticated approach to estimating latent candidate support. Since a smoothed estimate is a weighted average of adjacent polls, all smoothing approaches suffer from the same problems that we identified for the poll of polls in the previous paragraph. Besides, LOESS models decrease the relative weight of outliers when computing averages, since more distant polls are assumed to be less accurate. As we have suggested, outliers may sometimes be the most accurate polls, especially in the context of industry-wide bias. Thus, a purported advantage of this method, robustness to outliers, may unfairly penalize accurate outliers. At the same time, there is no formal theoretical reason to expect a LOESS estimate to be an unbiased and consistent estimate of the underlying population parameter. Kalman filtering is a more appropriate technique for estimating population parameters from polls and used by several political scientists (Green, Gerber, and De Boef 1999; Jackman 2005; Pickup and Johnston 2008). Indeed, this technique (in the form of dynamic linear 2 This failed prediction prompted AAPOR to appoint a committee to review the performance of the polls. Traugott and Wlezien (2009) show that the problems of New Hampshire were not unique; the pre-election polls as a group generally underestimated the winner s share of the voter for the two leading candidates in the week leading up to each election. 7

or state space model) is widely used by poll aggregators, including Simon Jackman, Nate Silver, and Mark Blumenthal.[3] Unlike other methods, Kalman filtering provides an unbiased, consistent, and most efficient estimate of the underlying true value from observations when those observations contain only random error. Since polls are noisy signals, in the sense that the true values are observed with survey error, Kalman filtering may appear to provide an appropriate means of estimating latent opinion. However, Kalman filtering assumes no bias across observations and requires a priori knowledge of the size of random error. In engineering and the natural sciences, the size of random error may be known via lab experimentation or predefined theoretical expectations. But survey error can be biased, and its random component cannot be calculated a priori. The only type of survey error that can be calculated a priori is sampling error. Thus, in applying the Kalman filter to the study of polls, researchers loosen or violate its assumptions. For example, Green, Gerber, and De Boef (1999) use the standard error of simple random sampling to determine the size of survey error, as if total survey error were equal to sampling error, ignoring both bias and non-sampling random error. Both Jackman (2005) and Pickup and Johnston (2008) assume that the variance of poll estimates in their DLM/state-space representation is perfectly specified as a function of sample size, according to the formula $\sigma_i^2 = p_i(1 - p_i)/N_i$. In Bayesian terminology, sample size is assumed to be a perfect prior for the variance of survey error in this work. But when non-sampling errors increase poll variance, as the TSE paradigm predicts, the model underestimates the size of the variance. By incorrectly specifying poll variance, this approach increases the standard errors of bias estimates (Pickup and Johnston 2008). Moreover, this specification unfairly rewards polls with larger samples and biases the estimate of true opinion toward those polls.

[3] HuffPollster (Pollster.com) initially used a simple (moving) averaging method, switched to LOESS, and then to Kalman filtering (Blumenthal 2010).
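To make the contrast concrete, the short sketch below (ours, in Python; the design-effect value is purely illustrative) computes the variance prior that these Kalman filter approaches assign to a poll. Both versions remain pure functions of the reported proportion and the sample size, which is exactly the assumption the TSE paradigm calls into question.

```python
import math

def srs_variance(p, n):
    """Sampling variance of a proportion under simple random sampling,
    sigma_i^2 = p_i (1 - p_i) / N_i -- the prior used in the DLM approaches above."""
    return p * (1 - p) / n

def design_adjusted_variance(p, n, deff=1.3):
    """The same quantity inflated by a design effect (the tau^2 adjustment);
    deff = 1.3 is an illustrative value, not an estimate from the paper."""
    return deff * p * (1 - p) / n

# A poll reporting 48% support from 1,000 respondents:
p_hat, n = 0.48, 1000
print(math.sqrt(srs_variance(p_hat, n)))              # s.e. ~ 0.016 (about 1.6 points)
print(math.sqrt(design_adjusted_variance(p_hat, n)))  # larger, but still driven only by n
```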

In response to these shortcomings, Pickup and Johnston (2008) and Pickup et al. (2011) propose a new specification of poll variance: $\sigma_i^2 = \tau^2 p_i(1 - p_i)/N_i$.[4] This change reflects the fact that the standard error formula is only applicable to polls conducted via simple random sampling. When more complex sampling methods, e.g. stratified-cluster sampling, are used, the standard error must be adjusted to accommodate this design effect (Kish 1965; Gabler et al. 2008). Although Pickup and Johnston (2008) and Pickup et al. (2011) use a more realistic specification of poll variance, they still model the variance of survey error as a function of sample size and ignore the effects of non-sampling errors on variance, which is independent of sample size.[5] In this sense, they are still confined to a sampling error-centered perspective on the variance of survey error. A central innovation of our model is its ability to measure the actual variance of survey error without using a poll's sample size as a prior.

[4] In Pickup and Johnston (2008), the variance of survey error is defined as $\sigma_i^2 = \tau_0^2 + \tau_1^2 p_i(1 - p_i)/N_i$, where $\tau_0^2$ represents herding behavior and $\tau_1^2$ represents design effects. However, herding behavior could only affect bias (and not variance). Pickup et al. (2011) exclude the term $\tau_0^2$.
[5] Pickup et al. (2011) conflate design effects and other non-sampling error sources. The authors' definition of design effect includes error from weighting, interview mode effects, and recall bias, despite the fact that these are non-sampling random error sources. This is not the standard use of design effect in survey methodology. However, this choice shows that the authors accept the idea that the variance of survey error is affected by factors other than sampling error.

Dynamic Factor Analysis Models

Existing techniques for estimating survey error focus on sample size and ignore non-sampling random error (see Models A and B below). But according to the TSE paradigm, non-sampling error not only determines bias but also has a large effect on variance (see Model C). Accordingly, the size of variance is not determined a priori simply by a function of sample size and reported estimates.[6]

Model A: Survey error = sampling error
Model B: Survey error = bias + variance (variance = sampling error)
Model C: Survey error = bias + variance (variance = sampling + non-sampling error)

When poll variance is not determined a priori as a function of sample size and poll estimates, and when polls released by various firms that differ in sources of non-sampling error are to be aggregated together, a univariate time series method like Kalman filtering in the form of a dynamic linear or state-space model is inadequate for estimating the underlying population parameter. The univariate method must pool polls from different polling organizations in order to model them as a univariate time series. Since a poll is composed of an underlying true value, bias, and variance, two polls administered on different days in the pooled dataset ought to feature different underlying true values, biases, and variances. This feature of pooled polls poses serious estimation problems for Kalman filtering when an accurate prior for the variance term is missing. For example, a drop in public support for a candidate from one poll to another could indicate that actual support dropped, that the second poll was negatively biased against the candidate compared to the first poll, or that random error in the second poll incorrectly suggested decreased support. Thus the estimation of the underlying true value from these polls is inefficient and biased, as long as the size of variance is not known a priori. We avoid this problem by using multivariate time series or dynamic panel data.

[6] House effects or house bias are often used as synonyms for polling bias or systematic error (for example, Pickup et al. 2011). However, polling organizations differ in their approaches to correcting for non-sampling sources of error, and so the size of variance from non-sampling errors also differs across polling firms. Any given poll can be more or less volatile than others even if they have exactly the same sample size. Thus, we do not use the term house effects to designate bias.

Simultaneous observations in panel data differ only in systematic and random error, since both are meant to capture the same true value. If we assume that random error is normally distributed, we can estimate the model quite efficiently (Stock and Watson 2011). Unlike a univariate time series method, the DFM uses balanced panel data and takes advantage of its double correlation structure in estimation. In multivariate time series, polls conducted at the same time are very highly (cross-sectionally) correlated, since they are estimates of the same true value. Polls from a single polling series are also auto-correlated. In estimating the underlying true value, the DFM makes use of the cross-sectional (contemporary) correlation of the multivariate polling series through factor analysis and of the auto-correlation through Kalman filtering. In this way, the DFM can efficiently estimate the underlying true value without prior knowledge of the size of a poll's variance. Using multivariate data to estimate a single latent variable is not a novel empirical approach. Existing latent variable methods, including principal component analysis, factor analysis, and item response theory models, each take advantage of multivariate data to estimate latent values. The DFM extends the same approach to time series data. This is why the DFM requires multivariate time series for estimation. The DFM can be applied to the study of pre-election polls. When N polling firms simultaneously conduct polls for T time periods, they produce a multivariate time series with a $T \times N$ matrix of polls. The (time-domain parametric) dynamic factor model representation of polls conducted at time t can be written as a linear state-space model:[7]

[7] This is a time-domain parametric representation of the dynamic factor model based on Stock and Watson's notation. In our model, observations are standardized before estimation, as they are in factor analysis models. Thus, the observation equation is often represented without intercepts: $Y_t = \lambda(L) f_t + e_t$. The inclusion of an intercept does not affect model estimation.

observation equation: $Y_t = \alpha + \lambda(L) f_t + e_t$, where $e_t \sim NID(0, \sigma_e^2)$

transition equation: $f_t = \Psi(L) f_{t-1} + \eta_t$, where $\eta_t \sim NID(0, \sigma_\eta^2)$

Although our representation of polls in state-space form might appear similar to the dynamic linear models used in existing scholarship (e.g. Jackman 2005, Pickup and Johnston 2008), it is fundamentally different. In existing work, the dynamic linear model is a univariate time series method, and so $Y_t$ represents just one poll. In our model, by contrast, $Y_t$ represents a vector of polls released on the same day: $Y_{t1}, Y_{t2}, \ldots, Y_{tN}$. In this way, the DFM adopts the structure of vector autoregressive or error correction models. In multiple polling series, poll estimates ($Y_{t1}, Y_{t2}, \ldots, Y_{tN}$) are highly correlated, since they ought to be determined by true public opinion on that day, which allows for a factor-analytic approach to parameter estimation. $\alpha$ is a vector of N different biases (deterministic intercepts), $f_t$ is the common factor (or factors), and $e_t$ is a vector of N random errors (mean-zero idiosyncratic disturbances) at time t. Thus, $Y_t$, $\alpha$, and $e_t$ are each $N \times 1$ vectors. $\lambda(L)$ is a vector of factor loadings, where L is the lag operator; the lag polynomial matrix $\lambda(L)$ is an $N \times q$ matrix. The representation above shows the decomposition of a poll: a poll estimate ($Y_{ti}$) is the linear combination of a polling organization's bias, $\alpha_i$, an underlying true value at time t, $f_t$, and a random error at time t, $e_{ti}$. We assume that poll biases are constant and that random errors are normally distributed. The second equation, the transition equation, describes the dynamics of the estimated factor (the underlying true value). When there are q dynamic factors, $f_t$ and $\eta_t$ in the transition equation are $q \times 1$ vectors; that is, not just one but q common factors are allowed in the DFM. $\Psi(L)$ is a $q \times q$ matrix that represents the auto-correlation structure of the factors, that is, the over-time dynamics of underlying true public opinion. Each factor follows an autoregressive process: when there is only one factor and it is auto-correlated with degree 1 (AR(1)), the transition equation is simply $f_t = \beta f_{t-1} + \eta_t$, where $\eta_t$ is a normally distributed idiosyncratic disturbance term.[8]

We assume that the idiosyncratic disturbances in the observation equation and the transition equation are not correlated, even between lagged or lead terms: $E(e_t, \eta_{t-k}) = 0$ for all k. Finally, idiosyncratic disturbances in the observation equation are also not correlated with each other: $E(e_{it}, e_{js}) = 0$ for all s if $i \neq j$. In estimation, we use Kalman filtering to compute the Gaussian likelihood and MLE to estimate the parameters of the state-space model.[9] When a normal distribution of disturbances is assumed, Kalman filtering provides efficient estimates of the factors, or unobserved state (Stock and Watson 2011).

The DFM has several advantages over other proposed methods. Most importantly, it is a good match with the TSE paradigm. The model allows bias ($\alpha_i$) to vary in size, and the size of poll variance is estimated from the data, not pre-determined by sample size. The DFM is also flexible: factors (underlying true values) can be modeled as a stationary (e.g. Stock and Watson 1999, 2001) or non-stationary (Chang, Miller, and Park 2009) AR(p) process. The flexibility of allowing an AR(p) process in the underlying true trend is an important feature: since a polling series consists of an n-day moving average, the autocorrelation in the data generating process should reflect this dependency.[10]

[8] The assumption that the two disturbance terms have Gaussian distributions is not necessary, but it eases the computational burden. And in the context of polling, this assumption should be uncontroversial.
[9] MLE is not the only way to estimate a factor model. Just as (static) factor analysis can be carried out with principal component analysis, the DFM can be estimated with PCA. The Bayesian approach to the DFM uses MCMC instead of MLE.
[10] That is, we use AR(p) rather than AR(1). Autocorrelation tests indicate that the 2012 polls are auto-correlated to degrees of 2 or 3. After conducting an autocorrelation test, we choose the AR(2) process. The difference between estimates from AR(3) and from AR(2) is negligible.
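For readers who want to see the state-space form in practice, the following sketch (our illustration in Python, using the DynamicFactor class from statsmodels rather than the authors' own code) fits a one-factor model with an AR(2) transition equation to a balanced panel of standardized poll series; the panel here is simulated stand-in data so the example runs on its own.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

# Simulated stand-in for a balanced T x N panel of daily poll odds
# (rows = days, columns = polling firms); real data would replace this.
rng = np.random.default_rng(0)
T = 60
true_f = 1.0 + np.cumsum(rng.normal(scale=0.01, size=T))        # latent trend
firms = ["Gallup", "Ipsos", "Rand", "Rasmussen"]
polls = pd.DataFrame(true_f[:, None] + rng.normal(scale=0.02, size=(T, len(firms))),
                     columns=firms)

# Standardize each series before estimation, as in factor analysis.
z = (polls - polls.mean()) / polls.std()

# One common factor, AR(2) transition equation; parameters estimated by
# MLE with the likelihood evaluated via the Kalman filter.
model = DynamicFactor(z, k_factors=1, factor_order=2, error_order=0)
result = model.fit(disp=False)

latent = result.factors.smoothed[0]   # scale-free estimate of the common factor
print(result.summary())               # factor loadings and AR coefficients
```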

Pre-election polls provide an ideal application for dynamic factor modelling, especially when compared with applications in economics. In existing economic research, most economic and business measures are a combination of multiple factors with non-normally distributed random errors; idiosyncratic errors are often auto-correlated or cross-sectionally correlated (Boivin and Ng 2006). By contrast, pre-election polls are meant to capture only one factor (underlying true public opinion), and survey errors are meant to be normally distributed. These well-established features of polling series allow the DFM to produce an efficient estimate of the underlying factor.

Our strategy for measuring the size of survey error is straightforward. First, we estimate the latent trend via the DFM using data from Gallup, Ipsos, Rand, and Rasmussen polls, which were conducted almost daily. These are the only four polling organizations that conducted their polls on an almost daily basis, and we include in our analysis every day that all four firms released results. Since we require a balanced panel, we exclude polls conducted on a more irregular basis. Boivin and Ng (2006) show that using more data to extract a factor is not always desirable: the quality of the factor extracted from the DFM can deteriorate if noisier data are added. Four very highly correlated polling series should produce a highly reliable DFM estimate. Because the DFM estimate is scale-free, some additional assumptions are necessary to retain the data's original scale and to measure the absolute size of bias of each poll. Specifically, the last DFM estimate must be anchored to the actual election results. Next, we link the DFM estimate to all polls by averaging the factor loadings and using that quantity as a multiplier. After this re-scaling process, we compare poll estimates with the DFM estimate. The difference between a poll and the DFM estimate is our measure of survey error. The mean of the survey errors for one polling organization is the bias of its polls, and the variance of those survey errors reflects the random error of its polls.
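A minimal sketch of the re-scaling and error-measurement steps just described might look as follows (Python; the paper does not spell out the exact anchoring arithmetic, so the additive shift to the election-day value and all variable names here are our own assumptions):

```python
import numpy as np

def rescale_factor(factor, mean_loading, election_odds):
    """Return the scale-free DFM factor to the poll-odds scale: multiply by the
    average factor loading, then anchor the final estimate to the actual result.
    (The additive anchoring used here is one simple way to do this.)"""
    rescaled = np.asarray(factor) * mean_loading
    return rescaled + (election_odds - rescaled[-1])

def bias_variance(poll_odds, dfm_odds):
    """Survey error = poll minus the DFM estimate for the same day; its mean is
    the firm's bias, and its variance reflects the firm's random error."""
    errors = np.asarray(poll_odds) - np.asarray(dfm_odds)
    return errors.mean(), errors.var(ddof=1)

# Hypothetical illustration: the 2012 election odds were roughly 47.2 / 51.1.
true_trend = rescale_factor(factor=[0.1, 0.3, 0.2, 0.0], mean_loading=0.05,
                            election_odds=47.2 / 51.1)
print(bias_variance([0.99, 1.01, 0.98, 0.95], true_trend))
```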

Analysis: 2012 Pre-election Polls

Although more than ninety organizations conducted at least one state- or nation-wide pre-election poll in advance of the 2012 presidential election, we focus on nation-wide polls conducted by major polling organizations and restrict the sample further to those organizations that conducted at least seven polls in the last two months before the 2012 election.[11] During the last two months of presidential campaigns, several polling organizations conducted polls on a regular basis (Erikson, Panagopoulos, and Wlezien 2004).[12] In total, eleven polling firms meet our standards: ABC/WP, ARG, DailyKos/SEIU/PPP, Gallup, GWU/Politico Battleground, IBD/TIPP, Ipsos, Rand, Rasmussen, UPI/CVOTER, and YouGov/Economist. Using these data, we test the central argument of the TSE paradigm: that non-sampling errors predominate over sampling error. We find that bias, on average, accounts for two-thirds of survey error. In addition, we also show that the variance of survey errors does not decrease with larger sample sizes.

[11] To reliably measure the size of bias and variance of survey error, we need multiple polls from the same polling organization: the more polls in our sample, the more reliable our estimates. However, there is a tradeoff between the number of polls used as a cutoff for inclusion in the analysis and the number of polling organizations that can be included in the analysis. We chose seven polls as the cutoff value, but there are no significant differences in our results if a similar cutoff is adopted.
[12] Erikson, Panagopoulos, and Wlezien (2004) recommend using pre-election polls conducted only after the Labor Day holiday, since early polls are more volatile and include a high percentage of undecided voters. Gelman and King (1993) show that support for presidential candidates varies widely over the course of the presidential campaign. They also suggest that only later polls should match election results because voters become more enlightened about their preferences over the course of the campaign. We follow both recommendations here.

We use the measure A, proposed by Martin, Traugott, and Kennedy (2005), which involves comparing the polled odds with the odds of the observed results, to assess total survey error.[13] By dividing support for one candidate by support for the other, Martin et al. (2005) argue, the measure A avoids bias from changing numbers of undecided voters. To calculate A, we first record the poll odds: for example, Romney's support divided by Obama's support. In this example, a value of 1 indicates a tie between Romney and Obama, values greater than 1 suggest that Romney leads, and values less than 1 suggest that Obama leads. Next, we divide the poll odds by the actual odds, which provides an odds ratio. Finally, we take the logarithm of the odds ratio to produce the measure A. Positive values of A indicate polling bias toward the Republican candidate and negative values show bias in favor of the Democratic candidate.[14]

[13] In the appendix, we also estimate error using the difference in estimated support for the two leading candidates relative to each candidate's estimated true support. This approach derives from Mosteller et al.'s (1949) measure M5, except that it does not use the absolute value. Results are almost identical with either measure.
[14] Although the measure A is designed for predictive accuracy - the comparison of final polls and actual results - it can also be used for non-final polls when we have estimates of true underlying candidate support over the campaign period.
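The calculation of A just described can be written in a few lines; the sketch below (ours, with made-up poll numbers set against the roughly 51.1 to 47.2 actual 2012 split) follows those steps directly:

```python
import math

def measure_a(rep_poll, dem_poll, rep_actual, dem_actual):
    """Martin, Traugott, and Kennedy's A: the log of the ratio of polled odds to
    actual odds. Positive values indicate bias toward the Republican candidate,
    negative values bias toward the Democratic candidate."""
    poll_odds = rep_poll / dem_poll
    actual_odds = rep_actual / dem_actual
    return math.log(poll_odds / actual_odds)

# A poll showing Romney 48, Obama 47 versus the actual 47.2 / 51.1 outcome:
print(round(measure_a(48, 47, 47.2, 51.1), 3))   # positive -> pro-Romney bias
```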

Figure 1 shows the poll odds for each day in the sample. The top panel of the figure shows poll odds for Gallup, Ipsos, Rasmussen, Rand, IBD, and ABC, while the bottom panel shows the odds for ARG, PPP, Politico, UPI, and YouGov. Both panels show the DFM estimate of true opinion across the last two months of the campaign.

[Figure 1: Poll Odds (Vote Share: Romney/Obama) and DFM Estimated True Opinion, September 1 - November 4]

Romney's support appears to have decreased sharply in the aftermath of the Democratic National Convention, consistent with accounts of a convention bounce (Gelman and King 1993; Holbrook 1996; Zaller 2002; Silver 2012c). But Romney appears to have gained support between October 3rd

and October 11th, presumably as a result of his performance in the first presidential debate on October 3rd, which many political observers interpreted as a victory. Interestingly, the majority of polls, especially phone-based surveys, indicate that Romney held a lead over Obama until late October. However, the DFM estimate, which always stays below the ratio of 1, indicates that Obama never trailed Romney at any time in October. The considerable difference between the DFM estimate and the poll odds depicted here indicates that many non-final polls featured sizable survey errors, reflecting a substantive bias for Romney. This finding comports with the fact that final polls in 2012 were biased in favor of Romney (Panagopoulos 2013; Silver 2012b; Blumenthal 2013). Indeed, the non-final polls appear to be even more biased in favor of Romney than the final polls.

More importantly, we find that, on average, bias accounts for 65% of the mean squared error (MSE) of the pre-election polls.[15] The relative contribution of bias is less than the 80% suggested by early experimental studies (Groves and Magilavy 1984; Assael and Keon 1982). But although the effect of bias is somewhat smaller than expected, these results provide strong evidence in favor of the TSE paradigm. In addition, our analysis (results reported in Table 1) indicates that the relationship between MSE and bias is more nuanced than existing research suggests: for some polls, including IBD/TIPP and Rand, variance is larger than the squared bias, which indicates that the majority of survey error is due to random errors. We conjecture that this is partly because these firms' weighting schemes are effective at reducing the size of bias, though that advantage comes at the expense of increased variance. Interestingly, Figure 1 also suggests that the extent of bias in each poll noticeably decreased in the last week of the campaign.

[15] In decomposing the relative contribution of bias and random error to survey error, the TSE approach advocates the use of mean squared error. MSE is equal to squared bias plus the variance of survey error: MSE = Bias² + Variance (Groves et al. 2004).

Although it is possible that this drop in bias corresponds with more stable public opinion at the end of the campaign, it is also possible that some polling firms engaged in herding behavior, adjusting their estimates to conform with those reported by other firms. If herding behavior did occur, a measure of predictive accuracy based only on final polls would dramatically understate the bias of a poll. In that sense, predictive accuracy is an unreliable measure of polling accuracy. We return to a discussion of herding behavior in 2012 pre-election polls below.

[Table 1: The Performance of Major Polling Organizations in 2012. Columns: Polling Firm, Number of Polls, Average Sample Size, Bias, Std. Dev., Abs. A, MSE, Bias as % of MSE, Mode. Firms (with mode): ABC/Post (Phone), ARG (Phone), DailyKos/SEIU/PPP (D) (IVR), Gallup (Phone), IBD/TIPP (Phone), Ipsos/Reuters (Internet), Politico/GWU/Battleground (Phone), Rand (Internet), Rasmussen (IVR), UPI/CVOTER (Phone), YouGov/Economist (Internet); numeric entries omitted.]

The TSE paradigm also argues that the variance of survey error should be heavily influenced by non-sampling error sources. Thus even the remaining 35% of the observed MSE should not be attributed solely to sampling error. To evaluate this argument, we test whether the sample size of the pre-election polls is related to the size of those polls' variance. If sampling error largely determines the variance of survey error, sample size should be negatively correlated with poll variance. Figure 2 shows that variance does not decrease as the sample size increases. Rather, we observe a positive relationship between the two, a strong indication that the variance of survey error is substantively influenced by non-sampling error. Thus we caution poll analysts against using sample size as a prior for the variance term.
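To make the decomposition behind Table 1 and the check behind Figure 2 concrete, a small sketch (with invented numbers, not the estimates reported above) might proceed as follows:

```python
import numpy as np

# Hypothetical per-firm summaries: bias and variance of the survey errors (in
# measure-A units) plus average sample size. Values are illustrative only.
firms = {
    "Firm 1": {"bias": 0.060, "variance": 0.0012, "avg_n": 1000},
    "Firm 2": {"bias": 0.020, "variance": 0.0015, "avg_n": 3000},
    "Firm 3": {"bias": -0.010, "variance": 0.0009, "avg_n": 500},
}

for name, s in firms.items():
    mse = s["bias"] ** 2 + s["variance"]        # MSE = Bias^2 + Variance
    print(f"{name}: MSE = {mse:.4f}, bias share = {s['bias'] ** 2 / mse:.0%}")

# If sampling error drove the variance, larger samples should mean lower variance;
# a non-negative correlation instead points to non-sampling sources of variance.
sizes = [s["avg_n"] for s in firms.values()]
variances = [s["variance"] for s in firms.values()]
print("corr(sample size, variance) =", round(np.corrcoef(sizes, variances)[0, 1], 2))
```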

[Figure 2: Average Sample Size and Poll Variance]

Comparing the DFM Estimate with Poll Aggregators

Our DFM estimate is not the only aggregate estimate of the 2012 pre-election polls. Jackman (2012), RealClearPolitics (2012), HuffPollster (Jackman 2012), and Nate Silver's FiveThirtyEight (2012) separately provide alternative aggregate estimates of the true opinion trend. While RCP uses a simple (moving) average method in estimation, the others use a Kalman filter-based method.[16] Silver uses a complex (and proprietary) formula that takes into account polling firms' past performance, although it is unclear how that adjustment affects results.[17]

[16] HuffPollster's estimate is also calculated by Simon Jackman. While HuffPollster's estimate is an aggregate of polls, Jackman's is an estimate of each candidate's probability of winning. Not surprisingly, the two estimates are highly correlated (ρ = .99).
[17] There are other differences among those estimates. For example, FiveThirtyEight aims to allocate undecided voters, while HuffPollster and RealClearPolitics do not. The measure A should not be affected by this difference.

Since these three approaches are based on the same underlying technique, all three measures are highly correlated (ρ = 0.81).[18]

[Figure 3: Estimated True Opinion Across Poll Aggregators and Forecasters (Vote Share: Romney/Obama; Pr(Romney Victory))]

The DFM estimate is quite distinct from the Kalman filter-based estimates: it fluctuates noticeably more than the others, especially over the last month prior to the election (see Table 2 for correlations between estimates). While the Kalman filter-based estimates reflect increased stability at the end of the campaign, the DFM might fail to filter out random error from polls and therefore depict additional variance. But it is also possible that public opinion in the last month of the campaign was in fact more volatile than the Kalman filter-based estimates suggest.

[18] See the Online Appendix Table A1 for all correlation values between the estimates.

To determine which account is more likely, we leverage data from the Rand (continuous) poll. The Rand poll is a non-probability panel survey in which the same group of respondents was asked to choose a candidate continuously over the course of the campaign, so it features very low random sampling error (Gutsche et al. 2014). As a result, the Rand polling series should provide the best estimate of over-time trends in opinion, though the non-probability sample could engender bias. If random error is solely determined by sampling error (with potential design effects), as Kalman filter-based estimates assume, the Rand poll could be biased but should be free from random error. In that sense, it should be almost identical to the Kalman filter-based estimates, as long as those estimates accurately map true public opinion. However, latent opinion as estimated by Jackman, HuffPollster, and FiveThirtyEight is almost uncorrelated with the Rand poll (see Table 2). Additionally, the Rand poll is not as smooth as the Kalman filter estimates (compare Figure 1 and Figure 3). The TSE paradigm explains this discrepancy: Kalman filter estimates ignore non-sampling random error and filter out random error using sample size as a prior, which is very likely to result in an imprecise estimate. On the other hand, the Rand poll may be free from sampling error but still contains non-sampling random error. Accordingly, we should expect different estimates.

[Table 2: Correlations Between Aggregators / Forecasts. Rows: DFM Estimate, RCP, Jackman, Pollster.com; columns: DFM Estimate, Rand; numeric entries omitted.]

The DFM estimate and the Rand poll estimate are correlated at 0.637, a higher correlation than that observed between our estimate and the Kalman filter-based estimates (see Table 2). This suggests that the DFM estimate tracks an actual trend even if we follow the Kalman filter assumptions.

Another noticeable difference between the DFM estimate and the estimates of prominent aggregators is that, during the last weeks of the campaign, we find a slight decrease in Obama's popularity, while the other estimates show a sharp increase. This shift in the other estimates reflects the fact that many polls in the last few days before the election favored Obama much more than in previous weeks. For example, Obama reduced Romney's lead by 4 points in Gallup's last-week polls. We think this dramatic change in public sentiment is unlikely to have actually occurred: voters' preferences should be very stable late in the campaign. Since that week coincided with Hurricane Sandy, Gallup suggests the bump in Obama's support resulted from his swift and effective reaction to the hurricane, which was widely covered by the mass media (Newport et al. 2012). But prominent poll analysts reject this interpretation (Silver 2012d, Enten 2012). Indeed, state-wide polls show that Obama gained 2.5 points after the hurricane only in Connecticut, while Romney gained 1 to 1.5 points in three other states (Enten 2012). Thus, Hurricane Sandy seems to have had no significant positive effect on Obama's popularity. These state poll results align closely with studies of the relationship between natural disasters and elections (e.g. Achen and Bartels 2008), which suggest that natural disasters lead voters to penalize incumbents, though that negative effect can be overwhelmed by incumbent efforts to provide financial aid to affected areas (e.g. Gasper and Reeves 2011). We know that Hurricane Sandy reached land just before the election and that federal aid was not immediately disbursed. Accordingly, we expect that Hurricane Sandy should have decreased Obama's popularity, if it had any effect at all. As the Rand continuous poll - which should follow the trend most closely - reveals, Obama's lead fell from 5.1 points just prior to Sandy to 3.3 points in its final poll. Thus, we doubt that the sudden shift toward Obama in the final days of the 2012 campaign reflects a real change in mass opinion. To reinforce our suspicion, the sudden change in poll estimates also improved those polls' accuracy and reduced their bias. Even if we admit that public opinion

24 might have sharply changed by a last-minute event, it is unlikely that the dramatic change in public opinion would also be accompanied by a considerable improvement in polling accuracy. When public opinion changes rapidly, rather, poll accuracy is likely to decrease as a result of opinion instability. We suspect that Romney s decrease in popularity during the last few days of the campaign instead occurred due to herding behavior (Blumenthal 2014; Silver 2012b; Linzer 2012; Clinton and Rogers 2013). Interestingly, a majority of pollsters appear to have changed their estimates in the direction of outliers, rather than toward the mean. Since poll herding usually indicates a change toward the majority of other estimates, it might be more accurate to label this form of poll manipulation calibration. That is, the polling industry simply realized that they were overestimating Romney s support in light of new information; from election markets, poll aggregators, and state polls, and changed their estimates accordingly. 19 Opinion estimates from poll aggregators seem to capture calibration behavior among polling firms. Many firms conducted a large number of polls, sometimes with increasingly larger sample sizes, in the last few days of the election. Thus, the very last or near last poll 19 During October 2012, Obama never trailed Romney in the Iowa election markets: he maintained a substantial lead. The prediction market Intrade also suggested that Obama would win; in the final month of betting, the chance of Obama s reelection was always higher than 50%. State-level polls predicted a clear victory for Obama despite results from national polls that suggested a substantial lead for Romney. As Silver (2012) suggests, either state-wide polls overestimated Obama s popularity or national polls underestimated it. At the same time, several October polls asked respondents: Regardless of who you might support, who do you think is most likely to win the presidential election? The results from this question, a survey item that is consistently better at predicting election outcomes than personal vote preference questions (Graefe 2014; Rothschild and Wolfers 2012), indicated that Obama would win by a margin ranging from 18% (Pew) to 30% (AP-GfK). 24

25 estimates, many of which might be calibrated by polling organizations to show an increase in Obama s popularity, seem to have had large effects on the final Kalman filter estimates. 20 On the other hand, the DFM is less sensitive to calibration behavior. Since the DFM uses panel data with a fixed number of measures of the underlying factor instead of a pooled data set, many of these last minute polls are not included in our analysis. Furthermore, in the DFM, only concurrent changes in polls are modeled as real changes in public opinion. Since polling firms did not engage in calibration behavior on the same day and in the same direction, the DFM is more likely to treat unsynchronized movements in polls as random error. In addition, the DFM estimate is less influenced by final polls because we model public opinion as an AR(2) rather than AR(1) process. Finally, we report the performance of pre-election polls throughout the 2012 campaign, using two measures. The average absolute difference of the measure A between each poll and the estimated actual opinion values appears in the fifth column of Table 1. We find that Rand and YouGov/Economist fared relatively well and Rasmussen and Gallup fared very poorly in tracking public opinion. The MSE provides another way to evaluate accuracy. Gallup features the largest average MSE, followed closely by Rasmussen. On the other hand, Rand and YouGov/Economist are tied for the top spot. Across these measures, there is consistent evidence that the Rand and YouGov/Economist polls performed best in 2012, while Rasmussen and Gallup performed worst. Internet surveys fared very well in 2012 presidential election: the three best performing polls, Rand, YouGov/Economist, and Ipsos/Reuters each drew online samples. The average absolute difference of the three internet polls is 0.032, which is much smaller than 0.074, the average for the six interviewer-assisted phone surveys and for the two IVR polls. 20 For example, by our count, there were 27 final polls from different organizations conducted immediately before election day. 25


More information

Model of Voting. February 15, Abstract. This paper uses United States congressional district level data to identify how incumbency,

Model of Voting. February 15, Abstract. This paper uses United States congressional district level data to identify how incumbency, U.S. Congressional Vote Empirics: A Discrete Choice Model of Voting Kyle Kretschman The University of Texas Austin kyle.kretschman@mail.utexas.edu Nick Mastronardi United States Air Force Academy nickmastronardi@gmail.com

More information

Polling in the United States

Polling in the United States Polling in the United States D. SUNSHINE HILLYGUS and BRIAN GUAY POLLS are an integral part of political campaigns in the United States. News headlines highlight the latest polling results, pundits dramatize

More information

2016 Presidential Elections

2016 Presidential Elections 2016 Presidential Elections Using demographic and socio economic factors of the U.S. population, which candidate will prevail on a county by county basis for the states of Ohio and Florida? URP 4273 Juna

More information

The Job of President and the Jobs Model Forecast: Obama for '08?

The Job of President and the Jobs Model Forecast: Obama for '08? Department of Political Science Publications 10-1-2008 The Job of President and the Jobs Model Forecast: Obama for '08? Michael S. Lewis-Beck University of Iowa Charles Tien Copyright 2008 American Political

More information

A Critical Assessment of the Determinants of Presidential Election Outcomes

A Critical Assessment of the Determinants of Presidential Election Outcomes Trinity University Digital Commons @ Trinity Undergraduate Student Research Awards Information Literacy Committee 3-21-2013 A Critical Assessment of the Determinants of Presidential Election Outcomes Ryan

More information

Case Study: Get out the Vote

Case Study: Get out the Vote Case Study: Get out the Vote Do Phone Calls to Encourage Voting Work? Why Randomize? This case study is based on Comparing Experimental and Matching Methods Using a Large-Scale Field Experiment on Voter

More information

NH Statewide Horserace Poll

NH Statewide Horserace Poll NH Statewide Horserace Poll NH Survey of Likely Voters October 26-28, 2016 N=408 Trump Leads Clinton in Final Stretch; New Hampshire U.S. Senate Race - Ayotte 49.1, Hassan 47 With just over a week to go

More information

The RAND 2016 Presidential Election Panel Survey (PEPS) Michael Pollard, Joshua Mendelsohn, Alerk Amin

The RAND 2016 Presidential Election Panel Survey (PEPS) Michael Pollard, Joshua Mendelsohn, Alerk Amin The RAND 2016 Presidential Election Panel Survey (PEPS) Michael Pollard, Joshua Mendelsohn, Alerk Amin mpollard@rand.org May 14, 2016 Six surveys throughout election season Comprehensive baseline in December

More information

Google Consumer Surveys Presidential Poll Fielded 8/18-8/19

Google Consumer Surveys Presidential Poll Fielded 8/18-8/19 Google Consumer Surveys Presidential Poll Fielded 8/18-8/19 Results, Crosstabs, and Technical Appendix 1 This document contains the full crosstab results for Red Oak Strategic's Google Consumer Surveys

More information

Incumbency Advantages in the Canadian Parliament

Incumbency Advantages in the Canadian Parliament Incumbency Advantages in the Canadian Parliament Chad Kendall Department of Economics University of British Columbia Marie Rekkas* Department of Economics Simon Fraser University mrekkas@sfu.ca 778-782-6793

More information

Team 1 IBM UNH

Team 1 IBM UNH Team 1 IBM Hackathon @ UNH UNH Analytics Logan Mortenson Colin Cambo Shane Piesik The Current National Election Polls ü To start our analysis we examined the current status of the presidential race. ü

More information

Predicting Elections from the Most Important Issue: A Test of the Take-the-Best Heuristic

Predicting Elections from the Most Important Issue: A Test of the Take-the-Best Heuristic University of Pennsylvania ScholarlyCommons Marketing Papers Wharton School 7-20-2010 Predicting Elections from the Most Important Issue: A Test of the Take-the-Best Heuristic J. Scott Armstrong University

More information

A positive correlation between turnout and plurality does not refute the rational voter model

A positive correlation between turnout and plurality does not refute the rational voter model Quality & Quantity 26: 85-93, 1992. 85 O 1992 Kluwer Academic Publishers. Printed in the Netherlands. Note A positive correlation between turnout and plurality does not refute the rational voter model

More information

NEWS RELEASE. Poll Shows Tight Races Obama Leads Clinton. Democratic Primary Election Vote Intention for Obama & Clinton

NEWS RELEASE. Poll Shows Tight Races Obama Leads Clinton. Democratic Primary Election Vote Intention for Obama & Clinton NEWS RELEASE FOR IMMEDIATE RELEASE: April 18, 2008 Contact: Michael Wolf, Assistant Professor of Political Science, 260-481-6898 Andrew Downs, Assistant Professor of Political Science, 260-481-6691 Poll

More information

IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF PENNSYLVANIA

IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF PENNSYLVANIA IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF PENNSYLVANIA Mahari Bailey, et al., : Plaintiffs : C.A. No. 10-5952 : v. : : City of Philadelphia, et al., : Defendants : PLAINTIFFS EIGHTH

More information

Report for the Associated Press: Illinois and Georgia Election Studies in November 2014

Report for the Associated Press: Illinois and Georgia Election Studies in November 2014 Report for the Associated Press: Illinois and Georgia Election Studies in November 2014 Randall K. Thomas, Frances M. Barlas, Linda McPetrie, Annie Weber, Mansour Fahimi, & Robert Benford GfK Custom Research

More information

Methodology. 1 State benchmarks are from the American Community Survey Three Year averages

Methodology. 1 State benchmarks are from the American Community Survey Three Year averages The Choice is Yours Comparing Alternative Likely Voter Models within Probability and Non-Probability Samples By Robert Benford, Randall K Thomas, Jennifer Agiesta, Emily Swanson Likely voter models often

More information

Working Paper: The Effect of Electronic Voting Machines on Change in Support for Bush in the 2004 Florida Elections

Working Paper: The Effect of Electronic Voting Machines on Change in Support for Bush in the 2004 Florida Elections Working Paper: The Effect of Electronic Voting Machines on Change in Support for Bush in the 2004 Florida Elections Michael Hout, Laura Mangels, Jennifer Carlson, Rachel Best With the assistance of the

More information

Santorum loses ground. Romney has reclaimed Michigan by 7.91 points after the CNN debate.

Santorum loses ground. Romney has reclaimed Michigan by 7.91 points after the CNN debate. Santorum loses ground. Romney has reclaimed Michigan by 7.91 points after the CNN debate. February 25, 2012 Contact: Eric Foster, Foster McCollum White and Associates 313-333-7081 Cell Email: efoster@fostermccollumwhite.com

More information

Ipsos Poll Conducted for Reuters Daily Election Tracking:

Ipsos Poll Conducted for Reuters Daily Election Tracking: : 11.01.12 These are findings from an Ipsos poll conducted for Thomson Reuters from Oct. 28-Nov. 1, 2012. For the survey, a sample of 5,575 American registered voters and 4,556 Likely Voters (all age 18

More information

THE LOUISIANA SURVEY 2018

THE LOUISIANA SURVEY 2018 THE LOUISIANA SURVEY 2018 Criminal justice reforms and Medicaid expansion remain popular with Louisiana public Popular support for work requirements and copayments for Medicaid The fifth in a series of

More information

The result of the 2015 UK General Election came as a shock to most observers. During the months and

The result of the 2015 UK General Election came as a shock to most observers. During the months and 1. Introduction The result of the 2015 UK General Election came as a shock to most observers. During the months and weeks leading up to election day on the 7 th of May, the opinion polls consistently indicated

More information

Ohio State University

Ohio State University Fake News Did Have a Significant Impact on the Vote in the 2016 Election: Original Full-Length Version with Methodological Appendix By Richard Gunther, Paul A. Beck, and Erik C. Nisbet Ohio State University

More information

2012 Presidential Race Is its Own Perfect Storm

2012 Presidential Race Is its Own Perfect Storm ABC NEWS/WASHINGTON POST POLL: Election Tracking No. 7 EMBARGOED FOR RELEASE AFTER 12:01 a.m. Monday, Oct. 29, 2012 2012 Presidential Race Is its Own Perfect Storm As it enters its frenetic final week

More information

Combining national and constituency polling for forecasting

Combining national and constituency polling for forecasting Combining national and constituency polling for forecasting Chris Hanretty, Ben Lauderdale, Nick Vivyan Abstract We describe a method for forecasting British general elections by combining national and

More information

Following the Leader: The Impact of Presidential Campaign Visits on Legislative Support for the President's Policy Preferences

Following the Leader: The Impact of Presidential Campaign Visits on Legislative Support for the President's Policy Preferences University of Colorado, Boulder CU Scholar Undergraduate Honors Theses Honors Program Spring 2011 Following the Leader: The Impact of Presidential Campaign Visits on Legislative Support for the President's

More information

Erie County and the Trump Administration

Erie County and the Trump Administration Erie County and the Trump Administration A Survey of 409 Registered Voters in Erie County, Pennsylvania Prepared by: The Mercyhurst Center for Applied Politics at Mercyhurst University Joseph M. Morris,

More information

Issues vs. the Horse Race

Issues vs. the Horse Race The Final Hours: Issues vs. the Horse Race Presidential Campaign Watch November 3 rd, 2008 - Is the economy still the key issue of the campaign? - How are the different networks covering the candidates?

More information

Ipsos Poll Conducted for Reuters Daily Election Tracking:

Ipsos Poll Conducted for Reuters Daily Election Tracking: : 11.05.12 These are findings from an Ipsos poll conducted for Thomson Reuters from Nov. 1.-5, 2012. For the survey, a sample of 5,643 American registered voters and 4,725 Likely Voters (all age 18 and

More information

Voters Divided Over Who Will Win Second Debate

Voters Divided Over Who Will Win Second Debate OCTOBER 15, 2012 Neither Candidate Viewed as Too Personally Critical Voters Divided Over Who Will Win Second Debate FOR FURTHER INFORMATION CONTACT: Andrew Kohut President, Pew Research Center Carroll

More information

UC Davis UC Davis Previously Published Works

UC Davis UC Davis Previously Published Works UC Davis UC Davis Previously Published Works Title Constitutional design and 2014 senate election outcomes Permalink https://escholarship.org/uc/item/8kx5k8zk Journal Forum (Germany), 12(4) Authors Highton,

More information

Dynamics in Partisanship during American Presidential Campaigns

Dynamics in Partisanship during American Presidential Campaigns Public Opinion Quarterly, Vol. 78, Special Issue, 2014, pp. 303 329 Dynamics in Partisanship during American Presidential Campaigns Corwin D. Smidt* Abstract Despite their potential importance, little

More information

The Seventeenth Amendment, Senate Ideology, and the Growth of Government

The Seventeenth Amendment, Senate Ideology, and the Growth of Government The Seventeenth Amendment, Senate Ideology, and the Growth of Government Danko Tarabar College of Business and Economics 1601 University Ave, PO BOX 6025 West Virginia University Phone: 681-212-9983 datarabar@mix.wvu.edu

More information

Predicting Presidential Elections: An Evaluation of Forecasting

Predicting Presidential Elections: An Evaluation of Forecasting Predicting Presidential Elections: An Evaluation of Forecasting Megan Page Pratt Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the

More information

Exposing Media Election Myths

Exposing Media Election Myths Exposing Media Election Myths 1 There is no evidence of election fraud. 2 Bush 48% approval in 2004 does not indicate he stole the election. 3 Pre-election polls in 2004 did not match the exit polls. 4

More information

Polling and Politics. Josh Clinton Abby and Jon Winkelried Chair Vanderbilt University

Polling and Politics. Josh Clinton Abby and Jon Winkelried Chair Vanderbilt University Polling and Politics Josh Clinton Abby and Jon Winkelried Chair Vanderbilt University (Too much) Focus on the campaign News coverage much more focused on horserace than policy 3 4 5 Tell me again how you

More information

Three-way tie among Dems; Thompson still leads Republicans

Three-way tie among Dems; Thompson still leads Republicans FOR IMMEDIATE RELEASE CONTACT: DEAN DEBNAM July 5, 2007 888-621-6988 / 919-880-4888 Three-way tie among Dems; Thompson still leads Republicans Raleigh, N.C. According to the latest Public Policy Polling

More information

oductivity Estimates for Alien and Domestic Strawberry Workers and the Number of Farm Workers Required to Harvest the 1988 Strawberry Crop

oductivity Estimates for Alien and Domestic Strawberry Workers and the Number of Farm Workers Required to Harvest the 1988 Strawberry Crop oductivity Estimates for Alien and Domestic Strawberry Workers and the Number of Farm Workers Required to Harvest the 1988 Strawberry Crop Special Report 828 April 1988 UPI! Agricultural Experiment Station

More information

Obama s Support is Broadly Based; McCain Now -10 on the Economy

Obama s Support is Broadly Based; McCain Now -10 on the Economy ABC NEWS/WASHINGTON POST POLL: ELECTION TRACKING #8 EMBARGOED FOR RELEASE AFTER 5 p.m. Monday, Oct. 27, 2008 Obama s Support is Broadly Based; McCain Now -10 on the Economy With a final full week of campaigning

More information

Are policy makers out of step with their constituency when it comes to immigration?

Are policy makers out of step with their constituency when it comes to immigration? Are policy makers out of step with their constituency when it comes to immigration? Margaret E. Peters, Stanford University Alexander M. Tahk, University of Wisconsin-Madison November 13, 2010 Puzzle:

More information

Report for the Associated Press. November 2015 Election Studies in Kentucky and Mississippi. Randall K. Thomas, Frances M. Barlas, Linda McPetrie,

Report for the Associated Press. November 2015 Election Studies in Kentucky and Mississippi. Randall K. Thomas, Frances M. Barlas, Linda McPetrie, Report for the Associated Press November 2015 Election Studies in Kentucky and Mississippi Randall K. Thomas, Frances M. Barlas, Linda McPetrie, Annie Weber, Mansour Fahimi, & Robert Benford GfK Custom

More information

Chapter 6 Online Appendix. general these issues do not cause significant problems for our analysis in this chapter. One

Chapter 6 Online Appendix. general these issues do not cause significant problems for our analysis in this chapter. One Chapter 6 Online Appendix Potential shortcomings of SF-ratio analysis Using SF-ratios to understand strategic behavior is not without potential problems, but in general these issues do not cause significant

More information

Pennsylvania Republicans: Leadership and the Fiscal Cliff

Pennsylvania Republicans: Leadership and the Fiscal Cliff Pennsylvania Republicans: Leadership and the Fiscal Cliff A Survey of 430 Registered Republicans in Pennsylvania Prepared by: The Mercyhurst Center for Applied Politics at Mercyhurst University Joseph

More information

Daily Effects on Presidential Candidate Choice

Daily Effects on Presidential Candidate Choice Daily Effects on Presidential Candidate Choice Jonathan Day University of Iowa Introduction At 11:00pm Eastern time all three major cable networks, CNN, MSNBC, and FOX, projected Barack Obama to be the

More information

AP PHOTO/MATT VOLZ. Voter Trends in A Final Examination. By Rob Griffin, Ruy Teixeira, and John Halpin November 2017

AP PHOTO/MATT VOLZ. Voter Trends in A Final Examination. By Rob Griffin, Ruy Teixeira, and John Halpin November 2017 AP PHOTO/MATT VOLZ Voter Trends in 2016 A Final Examination By Rob Griffin, Ruy Teixeira, and John Halpin November 2017 WWW.AMERICANPROGRESS.ORG Voter Trends in 2016 A Final Examination By Rob Griffin,

More information

WHAT IS THE PROBABILITY YOUR VOTE WILL MAKE A DIFFERENCE?

WHAT IS THE PROBABILITY YOUR VOTE WILL MAKE A DIFFERENCE? WHAT IS THE PROBABILITY YOUR VOTE WILL MAKE A DIFFERENCE? ANDREW GELMAN, NATE SILVER and AARON EDLIN One of the motivations for voting is that one vote can make a difference. In a presidential election,

More information

Voter ID Pilot 2018 Public Opinion Survey Research. Prepared on behalf of: Bridget Williams, Alexandra Bogdan GfK Social and Strategic Research

Voter ID Pilot 2018 Public Opinion Survey Research. Prepared on behalf of: Bridget Williams, Alexandra Bogdan GfK Social and Strategic Research Voter ID Pilot 2018 Public Opinion Survey Research Prepared on behalf of: Prepared by: Issue: Bridget Williams, Alexandra Bogdan GfK Social and Strategic Research Final Date: 08 August 2018 Contents 1

More information

The Dynamics of Voter Preferences in the 2016 Presidential Election. Costas Panagopoulos Professor of Political Science Northeastern University

The Dynamics of Voter Preferences in the 2016 Presidential Election. Costas Panagopoulos Professor of Political Science Northeastern University The Dynamics of Voter Preferences in the 2016 Presidential Election Costas Panagopoulos Professor of Political Science Northeastern University Aaron Weinschenk Associate Professor of Political Science

More information

FOR RELEASE: TUESDAY, DECEMBER 19 AT 4 PM

FOR RELEASE: TUESDAY, DECEMBER 19 AT 4 PM P O L L Interviews with 1,019 adult Americans conducted by telephone by Opinion Research Corporation on December, 2006. The margin of sampling error for results based on the total sample is plus or minus

More information

THE DEMOCRATS IN NEW HAMPSHIRE January 5-6, 2008

THE DEMOCRATS IN NEW HAMPSHIRE January 5-6, 2008 FOR RELEASE: Monday, January 7, 2008 11:00am ET THE DEMOCRATS IN NEW HAMPSHIRE January 5-6, 2008 Only 27 of Democratic primary voters in New Hampshire say the results of the Iowa caucuses were important

More information

Get Your Research Right: An AmeriSpeak Breakfast Event. September 18, 2018 Washington, DC

Get Your Research Right: An AmeriSpeak Breakfast Event. September 18, 2018 Washington, DC Get Your Research Right: An AmeriSpeak Breakfast Event September 18, 2018 Washington, DC Get Your Research Right Today s Speakers Ipek Bilgen, Sr. Methodologist Trevor Tompson, Vice President NORC Experts

More information

On the Causes and Consequences of Ballot Order Effects

On the Causes and Consequences of Ballot Order Effects Polit Behav (2013) 35:175 197 DOI 10.1007/s11109-011-9189-2 ORIGINAL PAPER On the Causes and Consequences of Ballot Order Effects Marc Meredith Yuval Salant Published online: 6 January 2012 Ó Springer

More information

Federal Primary Election Runoffs and Voter Turnout Decline,

Federal Primary Election Runoffs and Voter Turnout Decline, Federal Primary Election Runoffs and Voter Turnout Decline, 1994-2010 July 2011 By: Katherine Sicienski, William Hix, and Rob Richie Summary of Facts and Findings Near-Universal Decline in Turnout: Of

More information

Obama Gains Among Former Clinton Supporters

Obama Gains Among Former Clinton Supporters September 2, 2008 Obama Gains Among Former Clinton Supporters Obama gains on other dimensions, including terrorism and leadership by Frank Newport PRINCETON, NJ -- The Democratic convention appears to

More information

BY Jeffrey Gottfried, Galen Stocking and Elizabeth Grieco

BY Jeffrey Gottfried, Galen Stocking and Elizabeth Grieco FOR RELEASE SEPTEMBER 25, 2018 BY Jeffrey Gottfried, Galen Stocking and Elizabeth Grieco FOR MEDIA OR OTHER INQUIRIES: Jeffrey Gottfried, Senior Researcher Amy Mitchell, Director, Journalism Research Rachel

More information

Immigrant Legalization

Immigrant Legalization Technical Appendices Immigrant Legalization Assessing the Labor Market Effects Laura Hill Magnus Lofstrom Joseph Hayes Contents Appendix A. Data from the 2003 New Immigrant Survey Appendix B. Measuring

More information

French Polls and the Aftermath of by Claire Durand, professor, Department of Sociology, Université de Montreal

French Polls and the Aftermath of by Claire Durand, professor, Department of Sociology, Université de Montreal French Polls and the Aftermath of 2002 by Claire Durand, professor, Department of Sociology, Université de Montreal In the recent presidential campaign of 2007, French pollsters were under close scrutiny.

More information

Journals in the Discipline: A Report on a New Survey of American Political Scientists

Journals in the Discipline: A Report on a New Survey of American Political Scientists THE PROFESSION Journals in the Discipline: A Report on a New Survey of American Political Scientists James C. Garand, Louisiana State University Micheal W. Giles, Emory University long with books, scholarly

More information

POLL: CLINTON MAINTAINS BIG LEAD OVER TRUMP IN BAY STATE. As early voting nears, Democrat holds 32-point advantage in presidential race

POLL: CLINTON MAINTAINS BIG LEAD OVER TRUMP IN BAY STATE. As early voting nears, Democrat holds 32-point advantage in presidential race DATE: Oct. 6, FOR FURTHER INFORMATION, CONTACT: Brian Zelasko at 413-796-2261 (office) or 413 297-8237 (cell) David Stawasz at 413-796-2026 (office) or 413-214-8001 (cell) POLL: CLINTON MAINTAINS BIG LEAD

More information

Monthly Census Bureau data show that the number of less-educated young Hispanic immigrants in the

Monthly Census Bureau data show that the number of less-educated young Hispanic immigrants in the Backgrounder Center for Immigration Studies July 2009 A Shifting Tide Recent Trends in the Illegal Immigrant Population By Steven A. Camarota and Karen Jensenius Monthly Census Bureau data show that the

More information

Gender preference and age at arrival among Asian immigrant women to the US

Gender preference and age at arrival among Asian immigrant women to the US Gender preference and age at arrival among Asian immigrant women to the US Ben Ost a and Eva Dziadula b a Department of Economics, University of Illinois at Chicago, 601 South Morgan UH718 M/C144 Chicago,

More information

Case 1:17-cv TCB-WSD-BBM Document 94-1 Filed 02/12/18 Page 1 of 37

Case 1:17-cv TCB-WSD-BBM Document 94-1 Filed 02/12/18 Page 1 of 37 Case 1:17-cv-01427-TCB-WSD-BBM Document 94-1 Filed 02/12/18 Page 1 of 37 REPLY REPORT OF JOWEI CHEN, Ph.D. In response to my December 22, 2017 expert report in this case, Defendants' counsel submitted

More information

Biases in Message Credibility and Voter Expectations EGAP Preregisration GATED until June 28, 2017 Summary.

Biases in Message Credibility and Voter Expectations EGAP Preregisration GATED until June 28, 2017 Summary. Biases in Message Credibility and Voter Expectations EGAP Preregisration GATED until June 28, 2017 Summary. Election polls in horserace coverage characterize a competitive information environment with

More information

Vote Compass Methodology

Vote Compass Methodology Vote Compass Methodology 1 Introduction Vote Compass is a civic engagement application developed by the team of social and data scientists from Vox Pop Labs. Its objective is to promote electoral literacy

More information

Model-Based Pre-Election Polling for National and Sub-National Outcomes in the US and UK

Model-Based Pre-Election Polling for National and Sub-National Outcomes in the US and UK Model-Based Pre-Election Polling for National and Sub-National Outcomes in the US and UK Benjamin E Lauderdale, London School of Economics * Delia Bailey, YouGov Jack Blumenau, University College London

More information

A Behavioral Measure of the Enthusiasm Gap in American Elections

A Behavioral Measure of the Enthusiasm Gap in American Elections A Behavioral Measure of the Enthusiasm Gap in American Elections Seth J. Hill April 22, 2014 Abstract What are the effects of a mobilized party base on elections? I present a new behavioral measure of

More information

Bayesian Combination of State Polls and Election Forecasts

Bayesian Combination of State Polls and Election Forecasts Bayesian Combination of State Polls and Election Forecasts Kari Lock and Andrew Gelman 2 Department of Statistics, Harvard University, lock@stat.harvard.edu 2 Department of Statistics and Department of

More information

Supporting Information Political Quid Pro Quo Agreements: An Experimental Study

Supporting Information Political Quid Pro Quo Agreements: An Experimental Study Supporting Information Political Quid Pro Quo Agreements: An Experimental Study Jens Großer Florida State University and IAS, Princeton Ernesto Reuben Columbia University and IZA Agnieszka Tymula New York

More information

U.S. Abortion Attitudes Closely Divided

U.S. Abortion Attitudes Closely Divided http://www.gallup.com/poll/122033/u.s.-abortion-attitudes-closely- Divided.aspx?version=print August 4, 2009 U.S. Abortion Attitudes Closely Divided Forty-seven percent of Americans identify as pro-life,

More information

Non-Voted Ballots and Discrimination in Florida

Non-Voted Ballots and Discrimination in Florida Non-Voted Ballots and Discrimination in Florida John R. Lott, Jr. School of Law Yale University 127 Wall Street New Haven, CT 06511 (203) 432-2366 john.lott@yale.edu revised July 15, 2001 * This paper

More information

Poverty Reduction and Economic Growth: The Asian Experience Peter Warr

Poverty Reduction and Economic Growth: The Asian Experience Peter Warr Poverty Reduction and Economic Growth: The Asian Experience Peter Warr Abstract. The Asian experience of poverty reduction has varied widely. Over recent decades the economies of East and Southeast Asia

More information