Some of these questions may never be answered. But it's clear in the wake of the results that The New York Times' Nate Silver (a former Daily Kos blogger) is being heralded as a modern-day oracle, possessed of superhuman knowledge and predictive skills. #NateSilverFacts has taken off on Twitter, generating a list of impressive feats attributed to the Chicago economics-bred statistician (my favorite? "Nate Silver can recite pi -- backwards.")
Does he deserve the credit? Absolutely! He's been doing this since the 2008 primaries, and while he's always been known in political blogging circles, it's great to see him get some mainstream recognition. That said, equating him to a wizard is sort of problematic to me, not because Silver isn't awesome (again, he is -- his book, The Signal and the Noise, was one of my favorite reads this year), but because it highlights the fact that the rest of us should be doing a lot better.
This whole concept is especially interesting to me, as the novel I'm working on finishing up for NaNoWriMo (uh ... right after this post, I swear) is about a guy who predicts the future with mathematics (sort of akin to Foundation, but more fantastic than science fictional). So ... yeah.
With that in mind, I'd like to present a few reasons why Nate Silver is not a wizard -- and most of these assertions actually come from Silver himself.
The Basic Idea is Simple

Nate Silver's model is, by all accounts, a complicated beast. It aggregates polls in a sophisticated manner, weighting them according to previous pollster performance. It also uses economic data and accounts for certain 'bumps' (naming VP candidates, conventions, etc.) to come to a conclusion. And as we saw Tuesday night, it's pretty damned accurate. At the presidential level, Silver called 51 out of 51 races correctly.
That's impressive. But how impressive, really? There's something called the Pareto Principle (also referred to in Silver's book as the Power Law Distribution, or 80-20 rule) that can be applied to a large number of endeavors -- the most basic formulation is that 80% of your sales will come from 20% of your customers, or in software, 80% of your bugs will come from 20% of your code.
In political predictions, I'd claim that you can become 80% as accurate as the big guys (Nate Silver, Sam Wang at the Princeton Election Consortium, who also had a fantastic night) with 20% of the work. In fact, I'd claim that the truth is probably something like 90-5 -- 90% as accurate, 5% of the effort.
Can I back that up? Sure. Let's take a look at RealClearPolitics. RCP is a right-leaning poll aggregator founded by John McIntyre and Tom Bevan. It's simple. Every single state poll* is averaged to get a final number. That's about as easy as it gets, folks. Assuming we don't count things like web design, all we're doing is averaging numbers. I can write that program in less than five minutes. So how did RCP do? Pretty damn well. At least 80% as well as Nate Silver.
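In fact, here's what that five-minute program looks like. The poll margins below are invented for illustration (not real 2012 numbers); the method is the whole point: average each state's polls and call the winner by the sign of the average.

```python
# RCP-style aggregation, boiled down: average each state's poll margins
# (Obama minus Romney, in percentage points) and call the winner by the sign.
# The margins below are made up for illustration, not actual 2012 polls.

def average_margin(margins):
    """Plain unweighted average of poll margins, in percentage points."""
    return sum(margins) / len(margins)

def call_state(margins):
    """Return the predicted winner and the size of the averaged margin."""
    avg = average_margin(margins)
    return ("Obama" if avg > 0 else "Romney"), abs(avg)

polls = {
    "Ohio": [3.0, 2.0, 5.0, 1.0],       # hypothetical Obama-minus-Romney margins
    "Florida": [-1.0, 0.5, -1.5, 1.0],
}

for state, margins in polls.items():
    winner, margin = call_state(margins)
    print(f"{state}: {winner} by {margin:.2f} points")
```

No pollster weights, no economic fundamentals, no convention bumps. That's the 5% of the effort that gets you most of the way there.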
As far as I can tell, they called 50 out of 51 races correctly. The one they missed was Florida, which even Nate Silver called a coin flip, and even then, RCP didn't miss it by all that much.
This is not to denigrate Mr. Silver, or claim that he's wasting his time. Instead, it's meant to admonish people who say "Well, sure, but he gets paid to blog and predict full-time. Come on, that's not fair." This stuff is not incredibly hard. It was easy to see that Obama would win if he won Ohio, and as Silver pointed out on Twitter, Obama had led in something like 98% of Ohio polls in the week before the election.
Predicting Tomorrow is Easier than Predicting Next Year

Nate's final prediction range for the electoral results -- the president winning re-election with somewhere in the neighborhood of 313 electoral votes -- was fairly accurate (at the time of this writing, it seems likely President Obama will win Florida, netting him a total of 332 EV). Not bad, right? But that's the day before the election. FiveThirtyEight went up in June of 2012, and since then, it's been something of a rollercoaster. While Obama always maintained a lead, the range went up and down dramatically, decreasing to a low of 285 EV after the first debate.**
Is that a problem? Perhaps not. We should always adjust our predictions to account for new data. But at the same time, that adjustment doesn't mean we get to discount problematic predictions. I might predict a sunny morning on Tuesday, but if I see black clouds coming in late Monday night, I'm obviously going to change that prediction and take an umbrella. Doesn't change my initial forecast, however.
We can make judgments about the usefulness of far-out forecasts, of course. To take the weather metaphor even further, predicting rain two weeks in advance is much more impressive than doing so a day in advance, but is it appreciably more useful? Maybe in some cases (planning a vacation?), but probably not most.
So give Nate Silver credit for his final forecast, but keep in mind that the model wasn't a magical prediction machine that foresaw events like the lopsided conventions, Romney's debate performance and Hurricane Sandy. That realization leads us to...
His Model Isn't Perfect

FiveThirtyEight called every state correctly at the presidential level, but it wasn't all perfection. Some margins were off fairly significantly. Silver predicted Obama would win Ohio by 3.6 percentage points; he actually won by less than 2. He projected Florida as a literal tie (though he did think it slightly likelier than not that Obama would take the state); Obama is expected to win by a full percentage point when the counting is finished.
On the Senate level, we see some misses. While most of the states were called correctly, Montana and North Dakota were predicted to be taken by the Republican candidates with a 67% and 93% likelihood, respectively. Democrats won both races.
In fairness, these are probabilistic predictions, not guarantees. If I roll a die and predict I'll roll a number between 1 and 5 with an 83% probability, that prediction isn't incorrect just because I roll a 6. Furthermore, Silver includes his uncertainty about his predictions, generally stated as a margin of error.*** But if someone had given Silver 9 to 1 odds on Heidi Heitkamp losing North Dakota based on his model, he could have lost quite a bit of money.
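The betting hypothetical is worth putting into numbers. Here's a small sketch (stake sizes are my own illustration, not anything Silver proposed): laying $9 to win $1 that the Republican takes North Dakota is a profitable bet under the model's 93% probability, and it still loses money in the one election we actually got to observe.

```python
# Expected profit of a bet that wins `win_amount` with probability p
# and loses `lose_amount` with probability 1 - p.

def expected_value(p, win_amount, lose_amount):
    return p * win_amount - (1 - p) * lose_amount

# Model: Republican wins North Dakota with 93% probability.
# Lay $9 to win $1 on that outcome -- positive expectation under the model:
ev = expected_value(0.93, 1, 9)
print(f"Expected profit per bet: ${ev:+.2f}")

# The die example works the same way: a 5/6 chance of rolling 1-5
# is a good forecast that still 'loses' one trial in six.
p_die = 5 / 6
print(f"P(roll 1-5) = {p_die:.0%}")
```

That's the uncomfortable thing about probabilistic forecasts: a single miss tells you almost nothing about whether the model was right. Only a long run of predictions can.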
I think Mr. Silver would be the first to admit his model is not perfect. He says as much in his book, predicting that once the media and campaigns start to catch on to his basic methodology, he will probably be outclassed. I'm sure his model will continue to improve in 2014 and 2016. But improvement is definitely possible.
The Bar is Low

In the land of the blind, the one-eyed man is god. Or wizard, or something. Silver's predictions are quite accurate, but at the same time, he doesn't really have substantial competition. Pundits suck. Everyone knows it. Nate Silver himself knows it -- in his book, he describes a study concluding that predictions made by political pundits (in this case, on The McLaughlin Group) are no more accurate than a coin toss. And while he doesn't make any strong claims as to why, I think it's pretty clear that it's not just laziness: there's no incentive for a pundit to be accurate, since the political parties pay them to toe the party line, and the media facilitates it in the name of being "fair and balanced" and "hearing both sides of the story."
But imagine we lived in a world where campaigns readily accepted polling data (while recognizing that no individual poll or polling organization is going to be perfect). Imagine we lived in a world where pundits like Dick Morris, who is renowned for poor predictions and forecast a Romney landslide, and Jennifer Rubin, who predicted a Romney win for ages and then, after the election, straight up admitted to lying about it all, were fired and never listened to again.
In that world, Nate Silver would be a pretty average fish in a big pond, I would think. As it stands now, he's a trout sitting at the top of a bucket of dead minnows.
In Conclusion: Nate Silver is awesome, but that's no excuse for others not to be.

Really, the whole point of this post is not to take anything away from, or even bolster, Mr. Silver's analysis. He has plenty of detractors, defenders, and, judging from his book sales post-election, money. What I do want to get across, however, is that the rest of us, and the media in particular, should be doing a lot better. Republicans who were utterly shocked by Romney's loss may have bigger problems than who is president -- they might be living in a bubble impervious to rational thought. Those Democrats who had the same reaction in 2010, or who in 2012 thought the House would gain a massive Democratic majority as the populace stood up and loudly rejected conservatism, are similarly in trouble. Even worse are certain segments of the punditocracy who, in the name of ratings, decide to ignore anything that doesn't fit the narrative they'd like to tell.
Nate Silver does solid work with honest numbers. We should be demanding the same of all our talking heads.
Finally, some advice for the Republicans

You've been hearing this from pretty much everyone, but allow me to reiterate. Your constant dismissal of Nate Silver (and Sam Wang, and many others) is yet another data point in a worrying trend, namely the willingness of certain higher-ups in your party (and lower-downs in your base) to reject facts. Being an underdog doesn't mean you're going to lose; it means you need to work harder, and be prepared if you fall short. We can argue about the extent of global climate change and the optimal decision for an individual government to make. We can argue about whether gay and lesbian Americans should have the right to marry, as abhorrent as I find even pretending that there's a moral counterargument to that.
But there is no arguing that Barack Obama was the huge favorite to win the 2012 election. There is no arguing that carbon emissions from fossil fuels have exacerbated a problematic greenhouse effect. There is no arguing that sexuality is not something that can be dismissed or changed by praying hard enough.
These are facts, and facts are immutable. Denying them and ignoring them will lead to failure. Always.
* RCP has a habit of excluding certain polls, sometimes with justification, sometimes not. I suspect it would be more accurate if it included everything -- let the right-biased polls be counteracted by the left-biased ones.
** FiveThirtyEight also included a daily "NowCast," a prediction of the results if the election were held that day. If Silver's model were 100% perfect, I'd expect the NowCast to change substantially from day to day, while the Election Day forecast would stay exactly the same. Obviously, no model is perfect.
*** One of the funny things about margins of error is that, though uncertainty is a sign of an honest prediction, they can be abused. I don't think that's the case with Nate Silver (though his +/-70 EV margin might be viewed as a large range), but one can easily see how it could be in general. It's not really fair for me to predict an earthquake next year centered in downtown Los Angeles with a 3,000-mile margin of error, and then claim I called it correctly when something rumbles up in Canada.