## Nate Silver would be more of a guru if Romney wins Florida

Nate Silver is a prediction guru (or perhaps a witch). He compiles data from polling results, that he weights by sample size and measures of historical reliability, to predict the winner of the US presidential election. He calculates the probability that the candidates will win each of the 50 states (and Washington D.C.), thereby determining the probability that each candidate will win a majority of votes in the electoral college, and hence become President of the United States of America.*

Nate Silver received some heat when he predicted Barack Obama was likely (although not certain) to win the 2012 election. A week or so before the election, he calculated that the probability of Obama winning was ~75%. Some commentators claimed the race was tighter than that, perhaps confusing Silver’s predicted chance of winning with the predicted share of the popular vote.

Nevertheless, Silver’s predictions appear to be falling out as “expected”. The correspondence between the projected outcome based on counted ballots and his prediction is striking, as shown in the maps below.

The map on the right shows Nate Silver’s predicted probabilities for the two candidates on the morning of the 2012 US presidential election. The map on the left shows the states in which Barack Obama (blue) and Mitt Romney (red) are likely to win the electoral college delegates, as projected by the New York Times at approximately 4:30 a.m. eastern US time the day after the election. Florida was apparently the only state still in doubt at that time.

However, it should be borne in mind that Nate Silver’s predictions are probabilistic. For example, some states, such as Florida (predicted probability of Obama winning was 0.503) and North Carolina (predicted probability of Romney winning was 0.744), were not predicted to fall to the favoured candidate with certainty.

If the probabilities were calculated accurately, we would expect that the favoured candidate (i.e., the candidate with the highest probability of winning that state) might lose a state or two.

To illustrate this, think of dice. Let’s say I am rolling a six-sided die. The probability of rolling a 1 is 1/6, and the probability of rolling a number that is not 1 (the numbers 2-6) is 5/6. Think of it as a two-horse race between the number 1 and the other numbers. If I were to predict the most likely outcome in a single roll of the die, the favourite would be the numbers 2-6. The number 1 would be the underdog. However, over a large number of rolls of the die, I would start to think I had calculated the probabilities incorrectly if I never rolled a 1 – the underdog should win sometimes. For example, the probability of getting no 1s in 24 rolls is only:

$(\frac{5}{6})^{24} \approx 0.0126$

In this case, I would start getting suspicious that the calculated probability for rolling a 1 was wrong if the underdog continued to lose after that many rolls. In fact, we would expect the underdog to win about four times in 24 rolls (24/6 = 4).
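The arithmetic of the dice example is easy to check directly – a couple of lines of Python suffice:

```python
# Probability of never rolling a 1 in 24 rolls of a fair six-sided die
p_no_ones = (5 / 6) ** 24
print(round(p_no_ones, 4))  # ~0.0126

# Expected number of 1s (underdog wins) in 24 rolls
print(24 / 6)  # 4.0
```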

So, do Nate Silver’s predictions stack up? Are they somehow “too good to be true”? Is he a witch? Well, we can treat the outcome of the favourite losing each state as a random event (equivalent to rolling a particular number on the die), with the probability of that event given by Nate Silver’s predictions. Then we can determine the probability that the favourite loses 0, 1, 2, etc., of the states. That represents the distribution of the number of states in which the favourite loses. It can be calculated directly, but it is also quite easy to simulate**. For example, here is OpenBUGS code to do it:

```
model
{
    for (i in 1:51)  # for each of the 50 states and Washington D.C.
    {
        # determine if the favourite loses
        favloses[i] ~ dbern(p[i])
        # p[i] is the probability that the favourite will lose that state
    }

    N <- sum(favloses[])  # add up the number of losses
}
```

In the code, N is the number of states in which the favourite loses. The values of p[i] are the predicted probabilities that the favourite loses, which I took from Nate Silver’s website. Rounded to two decimal places, and ordered alphabetically by the name of the state, these are:

```
list(p=c(0, 0, 0.02, 0, 0, 0.2, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.16, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0.02, 0, 0.07, 0.15, 0, 0.01, 0, 0.26, 0, 0.09, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0.21, 0, 0, 0.03, 0))
```

All of the zeroes are states in which the favourite is predicted to win with (near) certainty – the underdog’s probability rounds to zero at two decimal places. Sometimes the favourite is Obama, sometimes it is Romney, depending on the state.

From that code and data, we can simulate the probability distribution for the number of states in which the favourite is expected to lose. Here it is:

Predicted probability distribution of the number of states that are lost by the favourite in the 2012 US presidential election, based on the predicted probabilities of victory from Nate Silver.

So, the probability that the favourite will win in all states is only ~13%. The most likely result is the favourite losing one state, and the loss of two states is also likely. In fact, Nate Silver’s predictions encompass the (unlikely) chance that the underdog would triumph in as many as five states.
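These figures can also be checked without simulation. Assuming independence between states (as the OpenBUGS model does), the number of states the favourite loses follows a Poisson-binomial distribution, which can be built exactly by convolving one Bernoulli outcome at a time. A minimal Python sketch using the probabilities listed above:

```python
# Probabilities that the favourite loses each state, as listed above
# (rounded to two decimal places, ordered alphabetically by state).
p = [0, 0, 0.02, 0, 0, 0.2, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.16, 0, 0, 0,
     0, 0, 0, 0.01, 0, 0, 0, 0.02, 0, 0.07, 0.15, 0, 0.01, 0, 0.26, 0,
     0.09, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0.21, 0, 0, 0.03, 0]

# Exact distribution of N, the number of states the favourite loses
# (a Poisson-binomial distribution), built by adding one state at a time.
dist = [1.0]  # P(N = 0) before any states are considered
for pi in p:
    new = [0.0] * (len(dist) + 1)
    for n, prob in enumerate(dist):
        new[n] += prob * (1 - pi)   # favourite wins this state
        new[n + 1] += prob * pi     # favourite loses this state
    dist = new

print(round(dist[0], 3))  # P(favourite sweeps all 51 contests) ~ 0.13
print(max(range(len(dist)), key=dist.__getitem__))  # most likely number of losses: 1
```

This exact calculation agrees with the simulated distribution: a clean sweep by the favourites has probability of only ~13%, and one loss is the single most likely outcome.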

Now I’m sure if Florida ends up falling Obama’s way, then the mystique surrounding Nate Silver will grow even larger. And if Florida falls to Romney, then some people will say “That Nate Silver is not so perfect after all!”. His crown might have appeared to slip even further if his predictions had failed in two or three states.

The irony, of course, is that the reverse should be true. Nate Silver’s predictions will be most strongly supported if the favourite loses one or two states. And it would be no surprise if the “favourite” in Florida (Obama) lost that state, where Nate Silver predicted the result was essentially a 50-50 contest. In fact, “correctly” predicting the winner in 100% of contests is only the fourth most likely outcome under Silver’s model. “Incorrect” predictions in three states were predicted to be more likely.

The potential fallibility of Silver’s predictions if he ended up predicting all states “correctly” was pointed out in a tweet by @Dr24hours: ‘Nate Silver calling 100% of the races correct means his p values are off. He said so himself in 2008. “Perfection” isn’t perfect.’ @Dr24hours’ tweet prompted me to do this analysis and write the post; I wondered if that were indeed true.

However, my analysis above suggests that while predicting all the states “correctly” might (weakly) suggest Nate Silver underestimates the probability that the favourite will win each state, we can’t really invalidate his predictions regardless of who ends up winning Florida. The result will be well within the bounds of the probabilities that Nate Silver calculated on the morning of the election.

So, no, he is not a witch, but he does seem to be quite good at calculating probabilities rationally. That would seem to qualify him for the title “guru”, and someone to watch if you want to give yourself a chance of beating the bookies in the next US presidential election.

Edit: I saw a tweet from @skepticscience via @EdYong209 about the website “natesilverwrong.com”, which was taken down after one day because, it seems, Nate Silver is right. An archived version of the site is here: http://www.webcitation.org/6Bz2BnE8V – interestingly, the removal of the site was an outcome predicted in one of the comments. The most insightful quote on the site is from Nate Silver himself: “I’m also sure I’ll get too much credit if the prediction is right and too much blame if it is wrong.”

* In US presidential elections, the winner of each state receives a specified number of delegates in the electoral college. The candidate with the most delegates is elected president. OK, Maine and Nebraska are split up into two or three groups of delegates; I’m treating those states in their entirety here for the sake of simplicity.

** Edit 2: @Wikisteff pointed out that I have assumed independence of the outcomes, when in fact correlations are likely. Positive correlations in the victory of the favourite would increase the probability of picking all states correctly. However, we would expect positive correlations in the victory of one candidate; because the favourite differed from state to state, correlations in the victory of the favourite might be smaller. And estimating the degree of correlation would be difficult, so I won’t attempt it (and I wonder if Nate Silver accounts for these correlations).
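The effect of correlation can be illustrated with a toy example (this is not Silver’s model – the two states and the loss probability of 0.2 are made up for illustration). With two states whose favourites each lose with probability 0.2, the favourites sweep both states 64% of the time under independence, but 80% of the time when a single shared shock drives both outcomes:

```python
import random

random.seed(1)
p = 0.2          # illustrative: each favourite loses with probability 0.2
trials = 100_000

def sweep_prob(correlated):
    """Estimate the probability that the favourite wins both states."""
    sweeps = 0
    for _ in range(trials):
        if correlated:
            u = random.random()  # one shared shock drives both states
            losses = (u < p) + (u < p)
        else:
            losses = (random.random() < p) + (random.random() < p)
        sweeps += (losses == 0)
    return sweeps / trials

print(sweep_prob(False))  # ~ (1 - p)**2 = 0.64 under independence
print(sweep_prob(True))   # ~ 1 - p = 0.80 under perfect correlation
```

So ignoring positive correlations, as my analysis does, understates the probability of the favourite winning everywhere.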

I conduct research on environmental decision making and quantitative ecology. My teaching is mainly at post-grad level at The University of Melbourne.
This entry was posted in Probability and Bayesian analysis.

### 4 Responses to Nate Silver would be more of a guru if Romney wins Florida

1. “And estimating the degree of correlation would be difficult, so I won’t attempt it (and I wonder if Nate Silver accounts for these correlations).”

Yes, he does. Actually, you can get a good idea of his estimates of correlations by looking at the graph titled “Electoral Vote Distribution” on his blog. The mode at 332 electoral votes (~20%) is his prediction of the actual result, in other words the probability that he gets 0 states wrong if his model is correct. The second highest peak at 303 (~16%) is the probability that he gets only FL wrong. The third highest peak at 347 (~13%) is the probability that he gets only NC wrong. So already we are at nearly a 30% chance of getting just NC or just FL wrong, comparable to your estimate of getting ANY single state wrong.

I don’t have the actual numbers, but just eyeballing the graph I would say that he gives himself about a 60% chance of getting any single state wrong, probably about 20% of getting any two states wrong, and ~0% of getting more than two states wrong. Sure, he is more likely to get exactly one state wrong than zero or two, but zero wrong is still not unusual.

2. One thing about his predictions is that I don’t think his electoral counts match up with his state-by-state counts, since by his map Obama should have more than the predicted 313 votes.

The other interesting thing is that I think Nate runs Bayesian models and that his estimates are the posterior distribution medians. What’s funny about all of this kerfuffle around him is that it exposes the different ways people think about probability. I heard one talking head say that Nate was crazy because it’s a 50/50 chance, and other people say that you can never say that Obama has a 90% chance of winning because the election only happens once. Clearly very frequentist notions of probability.

When I think about it, people probably use both ideas interchangeably without thinking about it much. One might say (in my American example) that the Yankees had a 25% chance of winning the World Series this year because there have been 106 World Series (trials) and 27 have been Yankees wins (outcomes): 27/106 ~ 0.25. Someone might also say that the Yankees have an 80% chance because of players XYZ; here someone is using specific knowledge to assign confidence to a hypothesis, a much more Bayesian notion of probability. I think that in part is why people get so riled up about Nate: they don’t really have any formalism in their thinking about probability. Interestingly, I’ve seen some good pieces that point out he’s only as good as the data he feeds his models, so really more credit should go to all the pollsters. But as Nate said himself, if Obama wins he’ll (Nate) get more credit than he deserves, and if Obama loses he’ll take more blame than he deserves.

• Thanks for the comment. Yes, I think you are right about people mixing up interpretations of probability. I like your Yankees example.