
Why Liberals Should be Terrified Right Now - How a Trump Surge Can Sneak Past the Prediction Models

Published on October 28, 2020

This article explains how a Trump surge can show up in the polling data, but be missed by election prediction models. Rather than treat polling data in aggregate, I split the pollsters into two groups:

  1. Pollsters who showed a 2-point swing towards Trump from Clinton in October 2016.
  2. Pollsters who didn't. 

I find that they have quite different things to say about what has happened this month in the Biden-Trump race, which might cast doubt on models indicating that a Biden victory is almost certain.

Election modeling

To get a feel for the baseline predictions, I begin by encouraging you to play around with some election models, such as the Economist's interactive model. Professor Andrew Gelman, who created the Economist's election model, is an authority in this area. He was one of the few to call Trump's win last time around, using advanced Bayesian techniques for teasing information out of noisy, unrepresentative polls. You might also like to explore the scenario generator created by Robert Fernholz and Ricardo Fernholz, which uses the Economist's model.

Nate Silver's election model is also well known, though more opaque. I cannot discern the generative model and therefore cannot form a judgement. The site says it uses 40,000 scenarios, but it is less clear why these cannot be provided.  

The models are quite different. Gelman's model suggests a 95% chance of a Biden victory, whereas Silver assigns 88%. Gelman's articles have suggested that some modelers may be hedging, shrinking their estimates towards one half because of the perceived "payout", in the form of reputation damage, should Trump win. But betting markets are shrunk even further towards the coin flip. You can still receive a 50% return on your investment should you back Biden, which, based on either Gelman's or Silver's model, looks like a steal - though this would be a good time to emphasize that nothing in this post should be construed as investment advice of any kind.

So, with Gelman at 95%, Silver at 88%, and betting exchanges at 66%, what could explain the huge differences? Nobody who considers these numbers can fail to be shocked by them. If you were at the racetrack and a horse (in this case Trump) was at odds of 2/1, yet at least one model said it should be 19/1, you would demand some explanation.
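To make the racetrack analogy concrete, here is a quick conversion between a win probability and the fractional odds it implies. This is throwaway arithmetic of my own, not something taken from any of the models:

```python
def fractional_odds_against(opponent_win_prob: float) -> float:
    """Fractional odds against a candidate whose opponent wins
    with probability opponent_win_prob."""
    win_prob = 1.0 - opponent_win_prob
    return opponent_win_prob / win_prob

# Gelman's 95% for Biden puts Trump at 19/1; the betting markets'
# ~66% for Biden puts Trump at roughly 2/1.
print(fractional_odds_against(0.95))   # 19.0
print(fractional_odds_against(2 / 3))  # ~2.0

# Backing Biden at an implied 2/3 probability pays about 1 / (2/3) = 1.5x
# the stake, i.e. the ~50% return mentioned above.
```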

A Trump bump in disguise?  

Here's the plot that should send shivers down the spine of any Trump-fearing liberal (or Trump-fearing conservative) and, to a lesser extent, anyone providing a Biden win estimate in the high nineties. It is a histogram showing different estimates of a Trump surge, by different polling organizations. Pretend for a moment that only the orange data is accurate.

[Figure: histogram of October 2020 changes in Trump's lead over Biden, by pollster; orange bars are "prescient" pollsters, blue bars are "deceived" pollsters.]

Orange bars represent "prescient" pollsters and blue bars represent "deceived" ones - again in reference to the 2016 race. The x-axis represents the change in Trump's lead over Biden during October 2020. So, for example, according to the one pollster at the far right of the histogram, Trump has gained 15 points (or Biden has lost them, or some combination of the two) in the last month. We'd be inclined to ignore outliers like this on both ends, naturally.

But what is harder to ignore is the predominance of orange pollsters to the right and, conversely, blue to the left. The prescient pollsters (orange) are exactly those pollsters who picked up on the swing from Clinton to Trump in 2016, whereas the deceived pollsters (blue) were those who did not. Now are you scared? 

In computing the change in relative popularity, data is collated for polls ending between September 1st, 2020 and the time of writing, which is October 27th - or, to be even more precise, the most recent update in Professor Gelman's repository. In any case, you can run the code with updated data at any time using this notebook. I personally will be very keen to see updates to the world's most important, if obscure, CSV file that holds the polling data, and the subsequent changes to the plots.

The definitions of prescient and deceived deserve more precision, of course, though they are straightforward. We start with 260 pollsters who were included in the database for the 2016 presidential election. We eliminate a poll record if its start date was before October 1st, 2016, or if its end date was after November 8th, 2016. We then throw out all pollsters conducting fewer than two polls meeting these criteria.

For those remaining (only 71 of the original 260), we take the earliest and the latest poll numbers, as judged by the poll end dates. We define the swing as the difference between the candidate differential (Trump minus Clinton) for the latest poll and for the earliest (i.e. a difference of differences). If a pollster showed a swing towards Trump of at least two points, it was deemed prescient.
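For the mechanically minded, the classification amounts to something like the following pandas sketch. The file name and column names (pollster, start_date, end_date, trump, clinton) are my assumptions for illustration; the real schema lives in the CSV mentioned below, and the real code is in the linked notebook:

```python
import pandas as pd

# Hypothetical file and schema, for illustration only.
polls = pd.read_csv("president_polls_2016.csv",
                    parse_dates=["start_date", "end_date"])

# Keep polls starting on or after October 1st and ending by election day.
window = polls[(polls["start_date"] >= "2016-10-01")
               & (polls["end_date"] <= "2016-11-08")]

# Throw out pollsters with fewer than two qualifying polls.
counts = window.groupby("pollster")["end_date"].count()
window = window[window["pollster"].isin(counts[counts >= 2].index)]

def october_swing(group: pd.DataFrame) -> float:
    """Change in the (Trump - Clinton) differential between a pollster's
    earliest and latest poll, as judged by end dates."""
    group = group.sort_values("end_date")
    diff = group["trump"] - group["clinton"]
    return diff.iloc[-1] - diff.iloc[0]

swings = window.groupby("pollster").apply(october_swing)
prescient = swings[swings >= 2].index  # a 2+ point swing towards Trump
deceived = swings[swings < 2].index
```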

Of the 71 pollsters, this was true of roughly half - or 33, to be exact. The other pollsters were labelled as "deceived". Examples of prescient pollsters included well-known names, such as the Washington Post and the NBC/Wall Street Journal/Marist poll, but also lesser-known polls (at least to your author) such as the Loras College poll and the Times-Picayune. Examples of deceived pollsters included the Fox 5 Atlanta poll and CBS.

How the scary picture roars to life

I deliberately looked at the accuracy of changes in polls, not their absolute accuracy, because I was looking for something that would probably not be picked up by the election forecasting models. Or not to the extent that it should be, anyway. 

Let's start with the assumption that all polls are unrepresentative. This is not a criticism. Indeed, an article by Wang, Rothschild, Goel and Gelman takes pains to make the point that one should not drastically limit the number of respondents in the name of a perfect design. The authors suggest that Xbox polls, though highly unrepresentative, were powerful predictors of the 2012 presidential election.

By accident of design, one type of polling methodology might happen upon a demographic that is particularly important - for instance a group of people more likely to change their view than another. It is not inconceivable that even a poorly performing poll (one considered biased or noisy) might nonetheless pick up on changes in voting patterns, even if the overall poll results are way off.

So then this happens:

  1. The Trump swing is in the data (orange bars).
  2. The models don't differentiate orange from blue.
  3. The models might even downplay the orange data, because some or all of those polls are less accurate in predicting the level of support for Trump, even though, probably mostly by accident, they are good at predicting changes in the support for Trump (or Biden). A toy simulation of this point follows below.
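Point 3 is the crux, so here is the promised toy simulation - entirely synthetic numbers, nothing to do with the real polls - of a pollster with a large constant house bias whose measured swing nonetheless tracks the true swing closely:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 8

# True support for Trump drifts upward by about 0.5 points per week.
true_support = 42.0 + np.cumsum(rng.normal(0.5, 0.2, size=weeks))

# A "bad" pollster: six points of constant house bias, but little noise,
# so its week-to-week changes still track the truth.
biased_poll = true_support - 6.0 + rng.normal(0.0, 0.3, size=weeks)

print(np.mean(np.abs(biased_poll - true_support)))  # level error: ~6 points
print(true_support[-1] - true_support[0])           # true swing over the period
print(biased_poll[-1] - biased_poll[0])             # measured swing: very close
```

A model that scores this pollster on level accuracy alone would down-weight it heavily, discarding the one signal it measures well.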

This is a hypothetical. The poll data is sparse; there isn't enough of it to conclude that this is actually happening. When you throw out pollsters who haven't survived from the 2016 election to the 2020 election, it gets even sparser.

But you can feel free to worry if you like, or hope, depending on your affiliation. 

Is this effect in the Economist model?  (No)

The Bayesian approach taken by Gelman can, in principle, accommodate this stylized fact, because the methods for inference are very general and can allow for pretty much any model of reality - a model partially revealed by polling. Whether the particular generative model (as they are called) used in the Economist model does so is another matter. This model includes non-polling factors, but the part leaning on polling data is described in Drew A. Linzer's paper Dynamic Bayesian Forecasting of Presidential Elections in the States.

In Linzer's model, voter preference is driven by national and state-level factors only. The probability of voting Democrat is modeled as a deterministic function of a sum of national and state factors. In turn, the state factors are assumed unknown but drawn from a normal distribution whose mean is inferred from polls. The voter preference trend is modeled, so there is a possibility of state-level effects (no doubt sampled differently by the various polls I have looked at and classified as prescient) carrying forward. However, it is harder for me to see how an inaccurate poll with accurate deltas would avoid being down-weighted significantly in the Bayesian posterior.
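As I read Linzer's paper - and this is my paraphrase, so treat the notation as an approximation and consult the paper for the exact specification - the polling component looks roughly like:

```latex
y_{ij} \sim \mathrm{Binomial}(n_{ij}, \pi_{ij}), \qquad
\pi_{ij} = \operatorname{logit}^{-1}(\beta_{ij} + \delta_j)
```

where π_ij is Democratic support in state i on day j, β_ij is a state-level effect, δ_j is a national effect, and both follow random-walk priors anchored to a structural forecast on election day. The point for our purposes is that the observed poll level y_ij feeds the likelihood directly, so a pollster with a persistent level bias mostly distorts the posterior for the level; nothing in the likelihood explicitly rewards a pollster for getting the deltas right.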

Is this effect in the 538 model?  (No)

Nate Silver's model is not described with the same precision. However, we read in the explanation provided that the degree of change in polls is estimated with the following considerations in mind:

  1. Economic uncertainty
  2. Volume of news
  3. Hidden uncertainty in election day results

The third item would benefit from a little more specificity, but I don't think we'll find anything there to make my plot any less scary. The 538 poll averaging, described here, is the more likely place to find the effect I am seeking. However, it states, as one might anticipate, that polls are combined based on their ability to predict the level of support for candidates (i.e. accuracy in the usual sense):

"The way we calibrate various settings in the polling averages — such as how aggressive they are in responding to new data — is mostly based on how well the polling average predicts future polls, not how well they predict the outcome of the race."

While reasonable, this does not come close to accounting for the scenario I have laid out. 
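To see why it falls short, here is a caricature - my own construction, not anything from 538's documentation - of what weighting purely on level accuracy does to a pollster like the one simulated earlier:

```python
# Weight each pollster by the inverse of its (hypothetical) historical
# level error, then normalize. Illustrative numbers only.
level_error = {"accurate_poll": 1.0, "biased_but_delta_accurate_poll": 6.0}

weights = {name: 1.0 / err for name, err in level_error.items()}
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

print(weights)
# {'accurate_poll': 0.857..., 'biased_but_delta_accurate_poll': 0.142...}
# The pollster that measures *changes* well is all but ignored.
```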

What's going to happen?

I surely don't know. Other factors, also not captured in the models, might augur in favor of Biden instead - such as higher turnout among important, left-leaning demographics.

However, this post, brief as it is, has explored one reason why the probability of a Biden win might be lower than the Economist model suggests, and quite possibly lower than the 538 model suggests too. It may not get us down to 66%, which is where the betting markets currently rest, but it has to be remembered how good markets are at accumulating information and ideas. If I can spend a few hours looking for something not accounted for by the Economist model, then so can others. I suspect there is plenty more I am missing.

Disclosure

Dr. Robert Fernholz, mentioned in this article, founded Intech Investments, my current employer. 
