
Bi_Focal #17: Why are all the election MRPs so different?

June 14, 2024

A polling explosion is causing signal chaos

Our most recent blogs looked at pollsters’ methodological approach to undecided voters and what Labour’s ‘true’ poll lead was. Today, we are taking another look at poll methodologies and at the signal chaos between the different forms of polling, with a particular focus on recent MRPs.

In today’s volatile environment, with numerous political earthquakes in short succession (the SNP’s landslide in 2015, Brexit, Labour’s 2017 election surge, the 2019 election and potentially a Labour landslide next month), we have seen an explosion in polling innovation, with MRP models becoming more common and large-scale seat forecasts employing multiple different methodologies. We theorise that the explosion of new, often highly divergent polling results is causing signal chaos, overwhelming the average politics watcher. Debates around methodology have even woven their way into political discourse in a way we have not seen before, with Reform UK leader Nigel Farage claiming that some pollsters ‘want to suppress Reform’ by not initially prompting the party in voting intention questions (for what it's worth, we prompt for Reform and think this is the right approach).

Gone are the days of traditional one-dimensional polls; today polling takes place in three dimensions. Here’s the family tree:

  • Traditional voting intention – the grandparent, or elder statesman of polling concepts. Tried and tested, but may increasingly struggle to understand the modern world.
  • Multilevel regression with post-stratification polls (MRPs) – the new kid on the block. An exciting concept with lofty ambitions, making use of technological advances, but like an excited toddler, comes with teething problems (we’ll cover one of these later).
  • Seat projections – a step-sibling of MRPs, but bearing an increasingly-tenuous relationship to traditional VI polls.
Triangle diagram illustrating the relationship between traditional voting intention, MRP, and seat projections with implicit and explicit connections.

Even within the same company, traditional voting intention and MRP polls are throwing up very different voting intention results, which can be confusing for readers. YouGov’s MRP poll in March showed a headline lead of 17 points for Labour, considerably smaller than their regular voting intention polls (the polls conducted either side of the MRP had leads of 19, 23, 24 and 25 points). As a result, YouGov have now modified their traditional voting intention polls to follow the same logic as their MRPs.

Making sense of the MRPs

For the reasons we outlined in our previous two posts, we think the headline voting intention figures from reliable MRP models are likely to be closer to the ‘true’ picture than many traditional voting intention polls. Both pieces of analysis suggested Labour’s real lead was lower than 20 points at the start of the campaign.

However, this does not mean MRPs are a foolproof method for forecasting election results, either – a poll is only ever as good as its underlying data and the assumptions it makes (or does not make). For example, MRPs have a tendency to ‘flatten out’ constituency vote distributions, which often results in highly proportional swings being modelled when parties lose votes; this does not necessarily match how votes are actually lost at general elections. Some companies seek to adjust for this quirk of MRP through various algorithms; YouGov have coined the term ‘unwinding’ for this process.

In 1997, despite the Conservatives losing more than 10 points from their 1992 vote, the swings in each seat were not wholly proportional. Generally speaking, swings looked like an average of a proportional swing and uniform national swing (UNS), with UNS the clear baseline in bellwether seats.

This pattern is borne out across multiple elections since the 1970s in which one of the major parties lost votes. The trendline comparing the proportionality of a party’s vote share swing shows that uniform national swing is usually a fair estimate when a party loses only a small share of its vote, but that the more votes a party loses, the more proportional the swing tends to become.
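
To make the distinction concrete, here is a minimal sketch in Python, using made-up national and constituency shares, of how the two swing models treat the same 20-point national loss:

```python
# Illustrative only: national and constituency shares below are made up.
# UNS subtracts the same number of points in every seat; a proportional swing
# multiplies every seat by the same retention factor.

national_2019 = 0.45   # hypothetical national share at the last election
national_2024 = 0.25   # hypothetical national share now (down 20 points)

seats_2019 = {"Safe seat": 0.60, "Marginal": 0.45, "Weak seat": 0.15}

points_lost = national_2019 - national_2024   # 0.20
retention = national_2024 / national_2019     # ~0.56

for seat, share in seats_2019.items():
    uns = max(share - points_lost, 0.0)       # same absolute drop in every seat
    proportional = share * retention          # same relative drop in every seat
    print(f"{seat}: UNS {uns:.1%}, proportional {proportional:.1%}")

# Safe seat: UNS 40.0%, proportional 33.3%
# Marginal: UNS 25.0%, proportional 25.0%
# Weak seat: UNS 0.0%, proportional 8.3%
```

On these illustrative numbers, UNS wipes out the weakest seat entirely while barely threatening the safest, whereas a proportional swing compresses the gap between strong and weak seats – the ‘flattening’ pattern described above.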

Scatter plot comparing elections from 1974 to 2019 with historical trends, showing uniform national swing versus proportional swing.

Conducting the analysis regionally, we get a similar distribution, but regional data give us a more robust trendline and open our eyes to a larger range of possibilities. Though the trendline remains very similar, becoming more proportional as more votes are lost, we can see that a fully proportional swing is not impossible when a party loses a large share of its vote, even if it is a rare occurrence for one of the big three parties (only the Greens and UKIP have seen regional swings below this line).

Scatter plot comparing regional election results from 1974 to 2019, showing uniform national swing and proportional swing trends.

That small red dot you can see on the proportional swing line at around 60% retention is Labour’s result in Scotland in 2015, when the party was all but wiped out north of the border, collapsing from 42 seats to just 1 on a fully-proportional swing. Furthermore, as Peter Kellner notes, the 1945 election also saw a more-or-less proportional swing in Conservative vote share losses.

The four public MRPs published so far in the election campaign disagree on the slope of Conservative vote losses, with both Survation and More in Common going beyond 100% proportionality.

This is where the idea of the intercept comes in. In its simplest form, the intercept is the point at which a line crosses an axis. In the comparison of MRP models, Survation and More in Common show a positive intercept for the Conservative vote share, meaning that these two companies expect the governing party to actually add votes in some of its weakest seats, despite being down 20 points at the national level. This is not to say that these two models are wrong, but they both point towards an inverse relationship between the Conservative 2019 and 2024 vote shares, suggesting that the party’s demographic patterns are not just ‘flattening out’, but reversing.
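
As an illustration, fitting a straight line to seat-level 2019 and 2024 shares makes the interpretation of slope and intercept explicit. Here is a minimal sketch using invented numbers chosen to mimic the pattern described above; nothing in it is taken from any pollster’s model:

```python
# Illustrative sketch with invented seat-level shares, not figures from any MRP.
import numpy as np

share_2019 = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # hypothetical 2019 Con shares
share_2024 = np.array([0.18, 0.16, 0.14, 0.12, 0.10])   # hypothetical modelled 2024 shares

slope, intercept = np.polyfit(share_2019, share_2024, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
# slope = -0.13, intercept = 0.19 on these invented numbers:
# a positive intercept means the fitted line predicts a non-trivial Conservative
# share (here, gains) even where the 2019 share was close to zero, i.e. in the
# party's weakest seats, while a negative slope means the 2024 share falls as
# the 2019 share rises, which is the inverse relationship described above.
```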

Four scatter plots comparing Conservative vote share changes in public MRPs from 2019 to 2024 across various pollsters, showing UNS and proportional swing trends.

Adding the slopes of the four MRPs into our historic regional graphic gives the following:

Scatter plot showing Conservative MRP slopes in comparison to regional general election results from 1974-2019, highlighting UNS and proportional trends.

The proportionality of YouGov’s and Electoral Calculus’s models is fairly close to the historical trend. Survation, on the other hand, is outputting results outside the historical range for one of the big three parties, more in line with the shape of regional Green Party losses in 2017. Such a degree of proportionality for a large party at the national level would be unprecedented in recent political history.

Again, this is not to say that Survation’s model is wrong. However, given the tendency for MRPs to flatten out vote shares between constituencies, we think we ought to be cautious about the result.

Interestingly, the MRPs from More in Common and Electoral Calculus showed a greater than 100-seat difference in Conservative seats despite having similar slopes, with the seat difference driven almost entirely by the Conservative national vote share in each model.

How proportional will the Conservatives' vote losses be?

As we suggested in our previous Bi_Focal, the local election results provided evidence for the idea of proportional swings in the upcoming election. The local election results line up very well with two of the public MRPs and point towards a general election result more proportional than the historical national trendline suggests, but almost exactly in line with the regional trendline shown above.

Scatter plot comparing local election results with MRPs and historical trends in elections from 1974-2019, showing UNS and proportional swing lines.

As we have shown above, a uniform national swing is not a particularly reliable metric for assessing the shape of vote distributions in change elections. Our semi-infamous ‘cat graph’ shows how much the ‘winning line’ has changed for each party on UNS over time, whereas UNS assumes this line remains static between elections.

Line chart showing the change in popular vote lead required for a Conservative or Labour majority under FPTP from 1979 to 2024.

If we were to adjust the four public MRPs to the local election and proportional reference lines, while maintaining each model’s Conservative national vote share, we would roughly see the Conservative seat counts in the table below, which generate huge differences in results. Even transferring all models to the local elections slope, we still get a 129-seat gap between Electoral Calculus (84 Conservative seats) and More in Common (213 Conservative seats).

| Model | Initial (Con seats) | Proportional swing | Locals slope | Spread |
| --- | --- | --- | --- | --- |
| Survation | 51* | 115 (+61) | 145 (+91) | 91 |
| More in Common | 180 | 196 (+16) | 213 (+33) | 33 |
| YouGov | 140 | 96 (-44) | 130 (-10) | 44 |
| Electoral Calculus | 72 | 44 (-28) | 84 (+12) | 40 |
| Average | 111.5 | 112.75 | 143 | 31.5 |
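
To make the adjustment concrete, here is a minimal sketch of this kind of re-projection, assuming we have a model’s constituency-level 2019 shares and its implied national shares. Blending UNS with a fully proportional swing is used here as a stand-in for matching the proportional or locals reference slopes; the actual adjustment behind the table above will differ in detail.

```python
# Illustrative sketch only: all constituency shares below are invented.
# Both UNS and proportional swing preserve the national vote share, so a
# weighted blend of the two does as well (approximately, ignoring turnout
# differences and the floor at zero).
from typing import Dict

def reproject(shares_2019: Dict[str, float],
              national_2019: float,
              national_2024: float,
              weight_proportional: float) -> Dict[str, float]:
    """Blend UNS and proportional swing: weight 0 = pure UNS, 1 = pure proportional."""
    points_lost = national_2019 - national_2024
    retention = national_2024 / national_2019
    projected = {}
    for seat, share in shares_2019.items():
        uns = max(share - points_lost, 0.0)    # same absolute drop everywhere
        prop = share * retention               # same relative drop everywhere
        projected[seat] = (1 - weight_proportional) * uns + weight_proportional * prop
    return projected

# Hypothetical three seats, Conservatives on 45% nationally in 2019 and 25% now:
shares = {"Seat A": 0.62, "Seat B": 0.41, "Seat C": 0.22}
print(reproject(shares, 0.45, 0.25, weight_proportional=0.7))
```

Comparing each seat’s re-projected Conservative share with the other parties’ projected shares would then give the adjusted seat count.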

We can also assess the different MRPs by looking at the order of seats, arranging each seat from the lowest to the highest Conservative vote share. Normalising the Conservative vote share across the different MRPs and comparing it to the local election results, we can see that the local elections slope is much more extreme at the top end of the Conservative vote share, which may offer the governing party at least some hope of avoiding a complete wipeout.
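
For readers who want to reproduce this kind of comparison, here is a minimal sketch of the ordering-and-normalising step, using invented seat-level shares; the normalisation used for the chart below may differ from the simple mean-scaling assumed here:

```python
# Illustrative sketch: order each model's seats by Conservative share, then
# normalise so that curves from models with different national shares can be
# compared on one chart. Dividing by the model's mean share is an assumption
# made here for illustration only.
import numpy as np

def ordered_normalised(shares):
    shares = np.sort(np.asarray(shares, dtype=float))   # lowest to highest share
    return shares / shares.mean()

# Invented toy inputs standing in for two models' seat-level outputs:
model_a = [0.05, 0.12, 0.20, 0.26, 0.31, 0.37]
model_b = [0.10, 0.15, 0.21, 0.24, 0.27, 0.29]

for name, shares in {"Model A": model_a, "Model B": model_b}.items():
    print(name, np.round(ordered_normalised(shares), 2))
# A flatter curve (a smaller gap between the weakest and strongest seats) is the
# 'flattening' pattern discussed earlier; a curve that climbs steeply at the top
# end looks more like the ordered local election results.
```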

Survation’s and More in Common’s distributions look almost identical, and both are relatively flat. Survation’s top normalised Conservative vote share is 39.8%, which would only rank around the 8th percentile (~54th of 631 seats) on the equivalent local election results.

Line graph comparing Conservative vote share across ordered seats in different MRPs vs. local election results.

On the lower end, YouGov and Electoral Calculus track the locals quite closely, but give higher Conservative vote shares in the middle-of-the-pack seats.

While we have explored the possible problems with traditional voting intention polls (Labour’s lead potentially being too high) and MRP polls (the seat-by-seat variance potentially being too low), simply applying a Uniform National Swing to a very large Labour polling lead could end up producing a seat count rather close to the eventual result – perhaps even beating some of the best MRPs.

Focaldata’s first public MRP poll of the campaign will be released in the coming weeks.

