As UK election predictions falter, what good are polls?

Britain's Liberal Democrat Party Leader Nick Clegg gestures at a meeting with party activists at the Devonshire Arms pub in Sheffield, England, on Friday, April 24, 2015. (AP Photo/Jon Super)

FiveThirtyEight, which earned its data stripes when it was just Nate Silver nailing poll results, has issued a mea culpa in the wake of its big miss in yesterday's British election.

The organization is promising to figure out what went wrong and fix it:

It’s our job as forecasters to report predictions with accurate characterizations of uncertainty, and we failed to achieve that in this election. We are not trying to make excuses here; we are trying to understand what went wrong. If we can find a few clear methodological culprits, that enables us to do better next time.

Here’s what we take from all this:

We need to include even more uncertainty about the national vote on election day relative to the polls.

Constituency polls may work better when based on the standard generic voting-intention question.

We will surely learn more as we dig further into the results. The pollsters are also trying to figure out what went wrong, so we’re not the only ones with our work cut out for us before the next election! Whenever that election arrives, we’ll aim to be ready with better forecasts than we had in 2015.
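FiveThirtyEight hasn't published its revised model, but its first takeaway is easy to make concrete: a forecast should carry more error than the polls' stated margins imply, because polls can all miss in the same direction. Here is a minimal sketch under a simple normal error model; the poll average and error sizes are illustrative numbers, not FiveThirtyEight's.

    import numpy as np

    rng = np.random.default_rng(0)

    poll_average = 34.0     # hypothetical final poll average for one party, in percent
    sampling_error = 1.0    # rough error implied by poll sample sizes alone
    systematic_error = 3.0  # extra "all the polls missed the same way" error

    # Widening the forecast means simulating the vote with both error terms,
    # not just the sampling error a poll's margin of error reports.
    total_error = np.hypot(sampling_error, systematic_error)
    draws = poll_average + rng.normal(0.0, total_error, 100_000)
    low, high = np.percentile(draws, [5, 95])
    print(f"90% forecast interval: {low:.1f}% to {high:.1f}%")

With only the sampling error, the 90 percent interval here would span about three points; adding the systematic term widens it to roughly ten, which is the kind of humility FiveThirtyEight is describing.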

Leonid Bershidsky, a Bloomberg View columnist, suggests that people who bet on elections have a better track record than the pollsters.

In other words, when there’s money on the line, people take special care to get it right.

Besides, gamblers try to answer a different question than the one pollsters usually ask: not “Who will you vote for?” but “How do you think most people will vote?” The answer to the first question can be a spur-of-the-moment emotional response and therefore subject to change. The second one requires reflection and considered judgment.

It really works. In the 15 U.S. presidential elections between 1884 and 1940, before scientific polling arrived, the mid-October betting favorite won 11 times, and the underdog only once; the remaining three races (the oldest ones) were practically photo finishes.
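A quick back-of-the-envelope check (ours, not Bershidsky's) shows how unlikely that record is under pure chance: setting aside the three photo finishes leaves 12 races with a clear betting favorite, and the favorite won 11 of them.

    from math import comb

    # If the favorite in each of the 12 decided races were no better than a
    # coin flip, the chance of 11 or more favorite wins would be:
    p = sum(comb(12, k) for k in range(11, 13)) / 2**12
    print(f"P(favorite wins >= 11 of 12 by chance) = {p:.4f}")  # ~0.0032

That's about a one-in-300 event, which is why "it really works" is more than an anecdote.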

In modern history, there are well-documented cases of bookies triumphing over pollsters, which led Justin Wolfers of Stanford University and Andrew Leigh of Harvard to suggest that the press covering Australian federal elections “may have better served its readers by reporting betting odds than by conducting polls.”

Economics professor Leighton Vaughan Williams writes much the same thing today. “Basically, when the polls tell you one thing, and the betting markets tell you another, follow the money,” he says.
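"Follow the money" works because a bookmaker's odds can be read directly as probabilities, once the bookmaker's built-in margin (the "overround") is normalized out. A minimal sketch of that standard conversion follows; the outcomes and decimal odds below are made up for illustration, not quotes from any 2015 book.

    # Decimal odds -> implied probability, with the bookmaker's margin removed.
    decimal_odds = {
        "Conservative majority": 1.50,
        "Labour majority": 4.00,
        "No majority": 6.00,
    }

    raw = {outcome: 1.0 / odds for outcome, odds in decimal_odds.items()}
    overround = sum(raw.values())  # > 1.0; the excess is the bookmaker's margin
    implied = {outcome: p / overround for outcome, p in raw.items()}

    for outcome, p in implied.items():
        print(f"{outcome}: {p:.1%}")

On these made-up odds, the market is saying roughly 62 percent Conservative majority; a bettor comparing that to the polls' dead-heat story would have known which signal to trust.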

Stephan Shakespeare, the CEO of YouGov, which published daily polls, apologized for their inaccuracy today. At the same time, his analysis, reported by the Daily Beast, illuminated a more fundamental problem with polls: people vote based on them.

In an election that was so nakedly tactical from the start, the polls themselves would have been crucial in informing people about which party to lend their vote to in a given constituency. The Conservatives also focused on dire warnings about the fallout from a Labour government supported by the Scottish National Party. It is thought that this may have helped move undecided Liberal Democrats into the Conservative camp.

Shakespeare conceded that the polls, which may or may not have been accurate, could have affected late swings themselves. “It may be that the polls actually have an effect on how people vote. I don’t want to give the impression that we were sort of right. I’m not saying that at all. But you could easily think that maybe it was actually very close and people didn’t want the SNP and that swung people toward the Conservatives and especially those UKIPers,” he said.

YouGov’s president, Peter Kellner, took a different tack. He blamed politicians for relying on polls in the first place, and adjusting their platforms to whatever the polls suggested.