The Silver Standard, Part 3: The Reckoning

One of the accusations most commonly levelled against Nate Silver and his enterprise is that probabilistic predictions are unfalsifiable: “He never said the Democrats would win the House. He only said there was an 85% chance. So if they don’t win, he has an out.” This is true only if we focus on the top-level prediction and ignore all the smaller predictions that went into it. (It remains true in the trivial sense that no single outcome can refute a probabilistic forecast, just as you can’t say it’s impossible that a fair coin happened to come up heads 20 times in a row.)

So, since Silver can be tested, I thought I should see how 538’s predictions stood up in the 2018 US House election. I took their predictions of the probability of victory for a Democratic candidate in all 435 congressional districts (I used their “Deluxe” prediction) from the morning of 6 November. (I should perhaps note here that one third of the districts had estimates of 0 (31 districts) or 1 (113 districts), so a victory for the wrong candidate in any one of these districts would have been a black mark for the model.) I ordered the districts by predicted probability and computed the cumulative predicted number of seats, starting from the smallest. I then plotted this against the cumulative actual number of seats won, taking the current leader as the winner in the 11 districts where there is no definite decision yet.

[Figure Silver_PredictedvsActual: cumulative predicted vs. actual Democratic seats, districts ordered from most Republican to most Democratic]

The predicted number of seats won by Democrats was 231.4, impressively close to the actual 231 won. But that’s not the standard we are judging them by, and in this plot (and the ones to follow) I have normalised the predicted and observed totals to be the same. I’m looking at the cumulative fractions of a seat contributed by each district. If the predicted probabilities are accurate, we would expect the plot (in green) to lie very close to the line with slope 1 (dashed red). It certainly does look close, but the scale doesn’t make it easy to see the differences. So here is the plot of the prediction error, the difference between the red dashed line and the green curve, against the cumulative prediction:
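Here is a minimal sketch of this computation in Python. The file name and column names (dem_prob, dem_won) are hypothetical stand-ins for however you export the 538 district-level forecasts:

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per district, with the 538 "Deluxe"
# probability of a Democratic win and the actual outcome (0 or 1).
# dem_won uses the current leader in the 11 undecided districts, as in the text.
df = pd.read_csv("house_2018_deluxe.csv")  # columns: district, dem_prob, dem_won

# Order districts from most Republican to most Democratic (by predicted probability).
df = df.sort_values("dem_prob").reset_index(drop=True)

pred = df["dem_prob"].to_numpy()
actual = df["dem_won"].to_numpy()

# Normalise so the predicted and observed totals agree, as described above,
# then accumulate district by district.
pred = pred * actual.sum() / pred.sum()
cum_pred = np.cumsum(pred)
cum_actual = np.cumsum(actual)

# The green curve plots cum_pred against cum_actual; the error curve is
# their difference, plotted against cum_pred.
error = cum_pred - cum_actual
print("max cumulative overprediction:", error.max(),
      "at district", int(error.argmax()) + 1)
```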

[Figure Silver_PredictedvsError: cumulative prediction error against cumulative predicted seats]

There certainly seems to have been some overestimation of Democratic chances at the low end, leading to a maximum cumulative overprediction of about 6 (which comes at district 155, that is, the 155th most Republican district). It’s not obvious whether these differences are worse than you would expect. So in the next plot we make two comparisons. The red curve replaces the true outcomes with simulated outcomes, where we assume the 538 probabilities are exactly right; this is the best-case scenario. (We only plot it out to 100 cumulative seats, because the action is all at the low end; the last 150 districts have essentially no randomness.) The red curve and the green curve look very similar, except for the sign: the direction of the error is random. The most extreme error in the simulated election result is a bit more than 5.
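A sketch of one such simulated election, reusing the sorted pred array from the sketch above, so that the “truth” is drawn from the model’s own probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# pred: the 435 sorted district probabilities from the earlier sketch.
# Each district is an independent Bernoulli trial with the model's own
# win probability.
sim_actual = rng.random(pred.shape) < pred

# Recompute the cumulative error curve for the simulated outcomes,
# with the same normalisation as before.
sim_pred = pred * sim_actual.sum() / pred.sum()
sim_error = np.cumsum(sim_pred) - np.cumsum(sim_actual)
print("max |error| in one simulated election:", np.abs(sim_error).max())
```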

What would the curve look like if Silver had cheated, by trying to make his predictions all look less certain, to give himself an out when they go wrong? We imagine an alternative psephologist, call him Nate Tarnished, who has access to the exact true probabilities for Democrats to win each district, but who hedges his bets by reporting a probability closer to 1/2. (As an example, we take the cumulative beta(1/2,1/2) distribution function. This leaves 0, 1/2, and 1 unchanged, but .001 would get pushed up to .02, .05 is pushed up to .14, and .2 becomes .3. Similarly, .999 becomes .98 and .8 drops to .7.) Not huge changes, but enough to create more wiggle room after the fact.
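The beta(1/2,1/2) distribution function is just the arcsine function, so the transform is one line; this sketch checks the values quoted above (scipy’s beta distribution would work equally well):

```python
import numpy as np

def hedge(p):
    """Cumulative distribution function of beta(1/2, 1/2), i.e. the
    arcsine distribution: F(p) = (2/pi) * arcsin(sqrt(p)).
    Fixes 0, 1/2, and 1, and pulls everything else toward 1/2."""
    return (2 / np.pi) * np.arcsin(np.sqrt(p))

for p in [0.001, 0.05, 0.2, 0.5, 0.8, 0.999]:
    print(f"{p} -> {hedge(p):.2f}")
# 0.001 -> 0.02, 0.05 -> 0.14, 0.2 -> 0.30, 0.5 -> 0.50, 0.8 -> 0.70, 0.999 -> 0.98
```

Running hedge(pred) through the same cumulative-error computation, against outcomes simulated from the unhedged probabilities, gives the blue curve below.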

In this case, we would expect excess cumulative predicted probability to pile up on the left side. And this is exactly what the blue curve illustrates: the error repeatedly rises to nearly 10 before slowly declining to 0.

[Figure SilverTornished: cumulative prediction error for the hedged (Nate Tarnished) predictions]

I’d say the performance of the 538 models in this election was impressive. A better test would be to look at the predicted vote shares in all 435 districts. This would require that I manually enter all of the results, since they don’t seem to be available to download. Perhaps I’ll do that some day.

The Silver Standard: Stochastics pedagogy

I have written a number of times in support of Nate Silver and his 538 project: here in general, and here in advance of the 2016 presidential elections. Now I want to comment on his salutary contribution to the public understanding of probability.

His first important contribution was to force determinism-minded journalists (and, one hopes, some of their readers) to grapple with the very notion of what a probabilistic prediction means. In the vernacular, “random” seems to mean only a fair coin flip. His background in sports analysis was helpful in this, because a lot of people spend a lot of time thinking about sports, and they are comfortable thinking about the outcomes of sporting contests as random, where the race is not always to the swift nor the battle to the strong, but that’s the way to bet. People understand intuitively that the “best team” will not win every match, and winning 3/4 of a large number of contests is evidence of overwhelming superiority. Analogies from sports and gaming have helped to support intuition, and have definitely improved the quality of discussion over the past decade, at least in the corners of the internet where I hang out.*
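For instance, a quick check of that last claim; the 100-game season here is an invented example:

```python
from scipy.stats import binom

# If two teams were actually evenly matched (p = 0.5), how surprising
# would it be for one of them to win 75 of 100 games?
print(binom.sf(74, 100, 0.5))  # P(X >= 75), about 3e-7: overwhelming evidence
```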

Frequently Silver is cited directly for obvious insights, such as the observation that an 85% chance of winning (like his website’s current predicted probability of the Democrats winning the House of Representatives) is roughly the chance of rolling 1 through 5 on a six-sided die, which is to say, not something you should take for granted. But he has also made a great effort to convey more subtle insights into the nature of probabilistic prediction. I particularly appreciated this article by Silver, from a few weeks ago.

As you see reports about Republicans or Democrats giving up on campaigning in certain races for the House, you should ask yourself whether they’re about to replicate Clinton’s mistake. The chance the decisive race in the House will come somewhere you’re not expecting is higher than you might think…

It greatly helps Democrats that they also have a long tail of 19 “lean R” seats and 48 “likely R” seats where they also have opportunities to make gains. (Conversely, there aren’t that many “lean D” or “likely D” seats that Democrats need to defend.) These races are long shots individually for Democrats — a “likely R” designation means that the Democratic candidate has only between a 5 percent and 25 percent chance of winning in that district, for instance. But they’re not so unlikely collectively: In fact, it’s all but inevitable that a few of those lottery tickets will come through. On an average election night, according to our simulations, Democrats will win about six of the 19 “lean R” seats, about seven of the 48 “likely R” seats — and, for good measure, about one of the 135 “solid R” seats. (That is, it’s likely that there will be at least one total and complete surprise on election night — a race that was on nobody’s radar, including ours.)

This is a more subtle version of the problem that all probabilities get rounded to 0, 1, or 1/2. Conventional political prognosticators evaluate districts as “safe”, “likely”, or “toss-up”. The likely and safe districts get written off as certain, which is reasonable from the point of view of individual decision-making; but cumulatively, a large number of districts that each give the Democrat a 10% chance behaves very differently from the same number of districts with a 0% chance. It’s a good bet that the Republican will win each one, but if you have 50 of them it’s a near certainty that the Democrats will win at least one, and a strong likelihood that they will win three or more. (The expected number of upsets is five.)
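A quick check of that arithmetic, using the 50-districts-at-10% example from the paragraph above:

```python
from scipy.stats import binom

n, p = 50, 0.10  # 50 "safe" districts, each a 10% chance for the Democrat
print(binom.sf(0, n, p))  # P(at least 1 upset), about 0.995
print(binom.sf(2, n, p))  # P(3 or more upsets), about 0.89
print(n * p)              # expected number of upsets: 5.0
```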

The analogy to lottery tickets isn’t perfect, though. The probabilities here don’t represent randomness so much as uncertainty. After 5 of these “safe” districts go the wrong way, you’re almost certainly going to be able to go back, investigate, and discover a reason why each was misclassified. If you’d known the truth, you wouldn’t have called it safe at all. This enhances the illusion that no one loses a safe seat; it’s just that toss-ups can be misidentified as safe.

* On the other hand, Dinesh D’Souza has proved himself the very model of a modern right-wing intellectual with this tweet: