Election outcome – what?!

The news outlets have now spent the last 16 hours finding as many superlatives as they possibly could to describe the election we just witnessed. As Britain went to the polls, the opinion pollsters continually had Labour and the Tories neck and neck:

polls-pre-election

The picture doesn’t make it totally clear (it’s all polls with a smoothed line plotted through), but a visit to, say, UK Polling Report’s polling average makes it clearer.

The outcome, however, is that the Tories polled 36.9% of the vote nationally and Labour 30.5%, and consequently the Tories have managed to win a majority, with 330 seats as I write and one seat still to declare.

The discussion has at least in part centred on why the polls were so wrong. I want to add a minor observation to all of this. As part of my forecasting course I prepared a simple linear-regression-based forecast model that uses nationwide polls alone to predict seat outcomes. I presented it to my students back in March (slide 10), and then also referred to it briefly in a talk at Nottingham Business School on Wednesday (slide 6). Here are the forecasts:

model_forecast_2015_080515

Marked on are the outcomes, as they stand. The regression model took each opinion poll with its projected vote share, and also corrected for the number of days between the poll and the election, and for whether a party was the incumbent. It combined these variables using interaction terms, but remained nonetheless a simple linear regression model. Nothing special.

But it does get Labour’s seat total pretty much bang on. A couple of the poll-based forecasts were as optimistic for the Tories as the actual outcome, and the majority aren’t all that far short of what the Tories ended up with; certainly the model predicted a much wider gulf between the parties than the raw polls alone did.

What does this say? It probably says that if we correct polls for their historical performance in predicting seat outcomes, they’re not that far away from what actually happened. The method also bias-corrects polls, should they display any bias towards one party or another, and adds a control for an incumbent party – which raises the incumbent’s seat total, hence the Tory total being much bigger than Labour’s despite the two polling neck and neck.

I’m sure nonetheless that pollsters will get it in the neck, but I thought I’d just point this out…

So who really won the debate? Post-match analysis of public attitudes on Twitter

Immediately after the end of the leaders’ debate, media and political analysts rushed to identify the winners and losers of the event. Various exit polls were cited: whereas YouGov proclaimed Nicola Sturgeon and Nigel Farage the winners, ICM put Miliband first. And today every party leader seems to celebrate his or her debate victory … of course. While the focus on the party leaders is understandable in the run-up to the election, we should perhaps pause for a minute and reflect on the messages that were voiced yesterday; perhaps they could tell us a bit more about which ideas are likely to gain public support. Social media could be useful in this respect.

As we have already noticed (see the previous post on Democracy is Cyber-participation), the TV political debates seem to engage Twitter users. Using the Twitter streaming API to monitor ‘political’ tweets in real time yesterday, we recorded a massive rise in Twitter activity during the debate. The total count of ‘political’ tweets – that is, tweets including specific references to party terms – produced on Thursday 2nd April was 800,350, of which nearly 80% (614,800 tweets) were generated between 7pm and midnight. No doubt, Twitter users were engaging with the debate.
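
The post doesn’t name the collection tooling; as a rough sketch of the idea, here is how one might count ‘political’ tweets live from the streaming API using Python’s tweepy library. The credentials and track terms are illustrative placeholders, not the ones used for the study.

```python
# A minimal sketch (not the authors' actual pipeline): count tweets that
# mention party terms, live, via the Twitter streaming API with tweepy.
import tweepy

# Illustrative track terms; the study's own list of 'party terms' is not given.
PARTY_TERMS = ["labour", "tory", "conservative", "lib dem", "ukip", "snp",
               "miliband", "cameron", "clegg", "farage", "sturgeon"]

class PoliticalTweetCounter(tweepy.Stream):
    count = 0

    def on_status(self, status):
        # Every delivered status matched at least one track term.
        self.count += 1
        if self.count % 1000 == 0:
            print(f"{self.count} political tweets so far")

stream = PoliticalTweetCounter("CONSUMER_KEY", "CONSUMER_SECRET",
                               "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
stream.filter(track=PARTY_TERMS)  # blocks, delivering matching tweets live
```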

political tweets count

We were, however, interested in the ways in which Twitter users respond to the messages voiced by the individual party leaders, and in the extent to which what was said by the party leaders influenced public attitudes or sentiments. To do so, we created a ‘political’ sentiment index. The index is based on evaluative words (mainly adjectives) retrieved from the political tweets we have been collecting over the last two weeks. Each item was given a score: +1 for positive meanings, -1 for negative meanings and 0 for neutral. In doing so, we recognised that certain words may change their evaluative meanings when used in political contexts. Nevertheless, the massive amount of available data allows valuable information to be extracted even in the presence of semantic inaccuracies and noise. This is the beauty of data-driven knowledge discovery.
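
As a toy illustration of the +1/-1/0 scoring described above – the actual word list is not published here, so the lexicon entries below are invented examples:

```python
# Toy version of the 'political' sentiment index scoring: each evaluative
# word carries +1, -1 or 0, and a tweet's score is the sum over its words.
POLARITY = {"great": 1, "strong": 1, "brilliant": 1,
            "awful": -1, "weak": -1, "dreadful": -1,
            "debate": 0, "policy": 0}   # neutral terms score zero

def tweet_sentiment(text: str) -> int:
    """Sum the polarity of every known evaluative word in the tweet."""
    return sum(POLARITY.get(word, 0) for word in text.lower().split())

print(tweet_sentiment("strong answer and a brilliant close"))  # -> 2
print(tweet_sentiment("a weak and dreadful performance"))      # -> -2
```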

Subsequently, a sentiment score was assigned to all 600,000-odd political tweets generated during the debate. In this sense, our analysis is much more comprehensive than the one offered by Demos, who considered only tweets which included boos and cheers. The graphs below show the mood in relation to each political party as the debate evolved. Four major topics were discussed: the deficit, the NHS, immigration and the future for young people. The blue lines on the graphs mark the time slots dedicated to each theme.
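
The per-party mood lines in the graphs below amount to averaging these scores in time bins for each party. A sketch of that step, assuming a tidy table of scored tweets (the rows here are placeholders):

```python
# Sketch: mean sentiment per party in five-minute bins across the debate.
import pandas as pd

tweets = pd.DataFrame({
    "time": pd.to_datetime(["2015-04-02 20:00", "2015-04-02 20:01",
                            "2015-04-02 20:03", "2015-04-02 20:06"]),
    "party": ["Labour", "Labour", "SNP", "SNP"],
    "sentiment": [1, -1, 1, 0],
})

mood = (tweets.set_index("time")
              .groupby("party")["sentiment"]
              .resample("5min")
              .mean())
print(mood)   # one time series of average sentiment per party
```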


Sentiment_Major Parties

Twitter Sentiment Index

Sentiment_Other Parties

Twitter Sentiment Index

As can be seen, support for each party fluctuated depending on the theme. Which messages scored particularly positively in the eyes of the public? Labour’s and the Lib Dems’ NHS policies seem to have scored well: forty minutes into the debate, Ed Miliband outlined his plans for financing the NHS, and following this statement Labour reached its peak of positive evaluation. Conversely, UKIP should seriously re-think its NHS policy; stigmatising HIV patients is not going to win public support, though UKIP’s views on immigration seemed to do the trick. The SNP appears to be mostly positively evaluated. Having said that, certain messages seem to have been particularly endorsed: Nicola Sturgeon’s appeal for a rational debate on immigration (21:02) and her personal statement about the free education that enabled her to be where she is (21:32) won massive support, as did her final statement, in which she presented the SNP as an alternative to Westminster.

The following two word-clouds have been generated from the frequent words found in the tweets associated with the SNP and Nicola Sturgeon during the two main periods of Twitter popularity. These are the two periods with the highest Political Sentiment Index, and they appear to have been inspired by Nicola Sturgeon’s key statements on immigration and education, at 20:55 and 21:35 respectively. These are the messages that appear to be the real winners of the leaders’ debate.
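
The post doesn’t say how the clouds were produced; as an illustration, the open-source Python `wordcloud` package can generate them from the pooled tweet text (the tweets below are placeholders):

```python
# Sketch: build a word-cloud image from tweets collected in one time window.
from wordcloud import WordCloud

snp_tweets = [
    "a rational debate on immigration at last",
    "free education got her where she is today",
]  # placeholder tweets; in practice, all SNP tweets from the window

text = " ".join(snp_tweets)
WordCloud(width=800, height=400, background_color="white") \
    .generate(text).to_file("snp_cloud.png")
```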

Word Cloud1

Word-cloud for SNP tweets from 21:02 to 21:12

Word Cloud2

Word-cloud for SNP tweets from 21:40 to 22:00

How Accurate Are Constituency Polls?

An additional source of data for calibrating forecast models for the forthcoming general election this time around is the sudden abundance of constituency-level polls, almost exclusively thanks to Lord Ashcroft. This is undoubtedly an awesome resource, but there are at least two problems:

  1. Some of them must be inaccurate, writes Stephen Tall: on the basis that 1 in 20 statistical tests will produce an error at a 5% level of significance, 1 in 20 polls, statistically speaking, must be wrong. Hence with close on 200 constituency polls thus far, at least 9 must be wrong – which ones, though? (A quick check of this arithmetic follows this list.)
  2. How do we calibrate constituency polls into forecast models? In order to do so, we need some historical precedent – a previous election, for example.
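
A quick back-of-the-envelope check of that arithmetic, treating each poll as an independent test with a 5% error rate (a simplification, of course):

```python
# With ~200 polls and a 5% error rate, how many 'wrong' polls to expect?
from scipy.stats import binom

n, p = 200, 0.05
print(binom.mean(n, p))         # expected number wrong: 10.0
print(1 - binom.cdf(8, n, p))   # P(at least 9 wrong): roughly 0.7
```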

As with Stephen Tall’s article, I don’t wish to diminish the importance, or the welcome addition, of Ashcroft’s polls. However, I do wish to try and dig a little deeper into both of these questions.

The only historical precedent we have for Ashcroft’s polls is by-elections, where we know the outcome. Wikipedia’s page on constituency polling can, with a little bit of pain, be turned into a usable spreadsheet and marshalled for this purpose.

There have been six by-elections in this parliament for which constituency polling was carried out: Clacton, Eastleigh, Heywood and Middleton, Newark, Rochester and Strood, and Wythenshawe and Sale East. For these by-elections we can plot the opinion poll vote share against the actual vote share each party received.

By-Election Opinion Polls and Outcomes

The 45-degree line represents a polling ideal: opinion poll vote shares exactly equal outcomes. Clearly this is unrealistic for every poll, but pollsters must aim to be near this line, assuming voting intent does not change between the polling date and the election date. Points above the line show that a party got more votes on election day than the polls suggested, while points below show that it got fewer.
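
As a sketch of the kind of plot above (the points are illustrative placeholders, not the actual by-election data):

```python
# Poll share vs actual share, with the 45-degree 'polling ideal' line.
import matplotlib.pyplot as plt

poll_share   = [38.0, 30.0, 40.0, 31.0, 25.0]   # what the polls said
actual_share = [44.0, 37.0, 39.0, 28.0, 25.0]   # what happened on the day

plt.scatter(poll_share, actual_share)
plt.plot([0, 50], [0, 50], linestyle="--", color="grey")  # polling ideal
plt.xlabel("Opinion poll vote share (%)")
plt.ylabel("By-election vote share (%)")
plt.show()
```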

Plots are undoubtedly informative, but quantifying potential biases needs more serious statistical work; a linear regression of by-election vote shares on poll shares can reveal the extent to which polls may be biased towards or against particular parties.
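
One way to run that regression in Python, with party-specific intercept shifts standing in for party bias; the numbers below are illustrative placeholders, not the actual Wikipedia data:

```python
# Sketch of the bias regression: actual by-election share on poll share,
# with party dummies picking up systematic over- or under-polling.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "party":        ["Ukip", "Ukip", "Labour", "Labour", "Tory", "Tory"],
    "poll_share":   [38.0, 30.0, 40.0, 31.0, 25.0, 22.0],
    "actual_share": [44.0, 37.0, 39.0, 28.0, 25.0, 23.0],
})

model = smf.ols("actual_share ~ poll_share + C(party)", data=df).fit()
print(model.params)  # a positive Ukip shift would mean polls under-stated
                     # Ukip; a negative Labour shift, polls over-stated Labour
```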

The purple dots above the 45-degree line are indicative of a downward bias in polls for Ukip’s vote share; linear regression analysis shows that this bias is significant, and represents about six polling points: Ukip’s actual vote share in these by-elections was six points more than the polls suggested. Hence pollsters under-estimated Ukip support. Conversely, Labour’s red dots are generally below the line; pollsters over-estimated Labour’s vote share by three points in these by-elections.

Now, to some extent, it can be argued that by-elections are not representative of reality, since they often constitute protest votes by fed-up voters. And these two biases (the rest are insignificantly different from zero) certainly suggest a protest vote away from the major party (Labour) to the fringe party (Ukip). But if that were the case, shouldn’t pollsters pick up this sentiment when polling likely voters?

Nonetheless, this mini-analysis does suggest that, by and large, constituency polling is accurate – deviations from the 45-degree line are mostly marginal (except for Labour and Ukip)…

Have the bookies adjusted for Ashcroft?

Last Wednesday social media was ablaze with Lord Ashcroft’s latest set of Scottish polls, which suggest that Labour are still on course for a Scottish wipeout on May 7. Has this affected what the bookies have to say?

As before, we look at mean implied probabilities across bookmakers (a sketch of the calculation follows the list below), and this time consider the markets for banded ranges of Labour seats. The impact of worse-than-anticipated polling in Scotland ought to be reflected in a lower seat expectation than before. Betfair, Bet365, SkyBet, Ladbrokes and William Hill report markets on the bands of seats a party wins at the election; the quoted bands (which overlap in places, as some are cumulative) are:

  • less than 200 seats
  • 201-225 seats
  • 225 seats and under
  • 226-250 seats
  • 251-275 seats
  • 276-300 seats
  • 301-325 seats
  • 326-350 seats
  • 326 seats or over
  • 351-375 seats
  • 351 seats or over
  • 376-400 seats
  • 401 or more seats
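
The implied-probability calculation mentioned above works roughly as follows: decimal odds are inverted, each bookmaker’s book is normalised to strip out the overround, and the resulting probabilities are averaged across bookmakers. A sketch with made-up odds:

```python
# Mean implied probabilities from decimal odds at several bookmakers.
# The odds below are invented for illustration, not actual quotes.
odds = {                     # band -> decimal odds at two bookmakers
    "251-275": [3.5, 3.6],
    "276-300": [2.9, 3.0],
    "301-325": [4.5, 4.4],
}

raw = {band: [1 / o for o in quotes] for band, quotes in odds.items()}
book_totals = [sum(ps) for ps in zip(*raw.values())]  # overround per book
implied = {band: sum(p / t for p, t in zip(ps, book_totals)) / len(ps)
           for band, ps in raw.items()}
print(implied)  # normalised probability of each band, averaged over books
```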

Clearly the options towards the bottom of that list are hugely unlikely (bookies rate anything above 375 seats as less than 5% likely to happen), but it’s the upper half of the range where the action has been:

lab_bands

The black vertical line is March 4, when the Ashcroft polls were released. Prices have moved since the announcement, but with the range 276-300 falling in likelihood only from 35% to 34%, and 301-325 from 23% to 21%, the impact doesn’t appear to have been dramatic. Lower seat totals such as 251-275 increased from 28% to 32%, and less likely events saw bigger moves, with 326-350 seats falling from 12.5% to 7% today.

Overall, the numbers would appear to suggest that the Ashcroft polls are reinforcing the current trends, at least in terms of bookmaker prices; an update to the plot of bookmaker implied probabilities for most seats from two weeks ago emphasises this:

most_seats_2015-03-09

More election forecasts

Today the phenomenon that is FiveThirtyEight joined the UK General Election fray, announcing it’ll be running a forecast. FiveThirtyEight, or perhaps more so its head Nate Silver, is well known for forecasting prowess, particularly in US elections, but as detailed in the linked article, his forecast of the 2010 UK General Election went somewhat awry. It sounds like he’s taking things a lot more seriously this time around, which will be very interesting to see.

In the interview style of the linked post, Silver talks about the issue of going from polls to seats, and how well it works – as in, not particularly well. Which is why I’m still surprised that my simple linear regression model of polls since 1970 did as well as it did (it accounts for 90% of the variation in the historical data). Unlike the other forecasts of the outcome that Silver refers to, that model actually points towards a Tory majority on May 7.

It’s a really basic model, however, and has none of the extra ingredients we would want to include in a proper election forecast model. But it certainly provides an interesting alternative forecast…

Can Polls Predict Seats?

Opinion polls are increasingly common: UK Polling Report lists 125 polls between 1970 and 1974 for the two 1974 elections combined, while the same website already lists 1,868 for the 2015 election, with 67 days still to go. However, opinion polls only report the vote share implied by the pollster’s survey; while vote share undoubtedly influences election outcomes, anomalies are still possible. For example, in February 1974 the Conservatives won more votes yet fewer seats than Labour – an outcome it is reported the Tories are preparing for this time around.

Seats matter more, and trends vary across the country for the different parties; many Tory heartlands are supposedly under threat from Ukip, while Labour’s Scottish seats appear lost to the SNP. Constituency polling, led it seems by Lord Ashcroft since 2010, is viewed as the way forward. Despite this, traditional nationwide polls continue to attract attention – not least given the current neck-and-neck nature of Labour and the Tories. Can we glean anything from such polling?

Is there any kind of relationship between how much a party is polling and how many seats it can expect to win? There naturally is, but the more pertinent question is how strong and robust that relationship is over the years. We focus on polls since 1970, hence 10 elections and 8,253 polls. We include information on the time horizon until the election (number of days) and the political party (in case of any biases), we consider any kind of incumbency effect, and we interact all these variables together in a linear regression model to see whether the resulting model has any explanatory power. Surprisingly enough, it manages to account for almost 90% of the variation in seats won, and I’m happy to provide any interested party with the regression output (I’m working on tidying up a more general set of code for this).
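
In Python, the model amounts to something like the following; `polls.csv` is a placeholder for the assembled 1970-2010 polling data, and the exact variable coding is my reading of the description above:

```python
# Seats regressed on poll share, days to the election, party and incumbency,
# with all interactions, as described above (a sketch, not the exact code).
import pandas as pd
import statsmodels.formula.api as smf

polls = pd.read_csv("polls.csv")   # placeholder file with columns:
                                   # seats, poll_share, days_out, party, incumbent

model = smf.ols("seats ~ poll_share * days_out * C(party) * incumbent",
                data=polls).fit()
print(model.rsquared)  # the post reports this as almost 0.9
```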

A few plots to help:

Poll Shares and Seats

Firstly, this is a cross-plot between actual opinion poll shares and election outcomes in terms of seats – the black circles signify those points. They show the Lib Dem cluster below 100 seats, and the Labour/Tory cluster spread between 200 and 400. It suggests something of a non-linear relationship in the Labour/Tory cluster, since polling shares in the 50s are consistent with seat totals of both 300 and 400. The red apparent scribbles are the fitted, or predicted, values from our linear regression model. They show that, within the sample, the two clusters are to some extent captured. Clearly improvements are possible, but it’s a reasonable model to begin with.

How did the model fare in 2010? Here are the implied seat forecasts from each poll against the actual seat totals:

Model forecasts in 2010

Note this is the same model, estimated over all polls prior to the 2005 election – every poll for the 2010 election is excluded, so that this is actually a forecasting exercise rather than an in-sample fitting exercise. The resulting seat forecasts are quite surprising in that Labour were forecast to win more seats as the election neared. This appears to be the incumbency effect, which is strongly significant in the regression model. On average, forecast errors were about 11 seats for this election, but clearly biased upwards for Labour and downwards for the Tories and Lib Dems. The same was true, on a smaller scale, in 2005.

What does all of this mean for 2015? It’s harder to tell whether there’s an incumbency effect, since there is no comparable full coalition term; the model automatically attributes incumbency to the Conservatives. The forecasts look like this:

Seats forecast 2015

Hence, unlike much of the current media narrative (e.g. here, here), this very simple model appears to point towards the Tories not just being the largest party in a hung parliament, but potentially winning a majority; based on historical data, the most recent polls indicate that the Tories will win somewhere between 280 and 300 seats, but Labour only 220-240. Could the incumbency effect be too strong here, as it appears to have been in the 2010 forecasts?

There are no Ukip forecasts, since there is no recent historical precedent for Ukip, and hence no data upon which to base a forecast. We could apply the model, estimated over Lib Dem historical performance, to Ukip, but there’s only so far one should take a very basic statistical model such as this. On the most important question – most seats between Labour and the Tories – it has already provided a thought-provoking forecast.