July 2024

FOI: Which complaints are upheld by the ICO?

Freedom of information requests can be rejected for a range of reasons, but some are much more likely to be overturned by the Information Commissioner’s Office than others.

The details of this are made clear by my analysis of a dataset recently released by the ICO covering nearly 22,000 decisions issued by the information rights regulator since FOI came into force.

For example, the ICO has upheld nearly half the complaints received from information requesters against FOI refusals linked to protecting commercial interests. But it has upheld only one in six objections to refusals based on international relations.

This table shows, for each of the legal grounds for dismissing FOI requests, the number of complaints about that reason which the ICO has ruled on and the percentage which it has upheld (ie backing the requester and overriding the public authority).

Subject matter (section of FOI Act) | Number of complaints | Percentage upheld
The economy (29) | 27 | 56
Relations within UK (28) | 17 | 53
Commercial interests (43) | 1,010 | 47
Future publication or research (22/22A) | 213 | 44
Health and safety (38) | 119 | 42
Policy formation (35) | 622 | 38
Already accessible (21) | 332 | 36
Effective conduct of public affairs (36) | 967 | 35
Audits (33) | 38 | 34
Confidential information (41) | 605 | 34
Law enforcement (31) | 860 | 30
Vexatious or repeated (14) | 1,498 | 23
Investigations (30) | 318 | 21
Personal data (40) | 3,097 | 18
Monarchy and honours (37) | 181 | 18
Defence (26) | 41 | 17
National security (24) | 299 | 17
International relations (27) | 292 | 16
Legal privilege (42) | 507 | 16
Otherwise prohibited (44) | 406 | 14
Cost (12) | 1,491 | 12
Court records (32) | 108 | 8
Security bodies (23) | 304 | 7
Parliamentary privilege (34) | 12 | 0
Source: Martin Rosenbaum, based on ICO data


So during FOI’s two decades of operation, the ICO has been much happier to overrule public authorities on matters like commercial interests and policy formation than on topics like defence, security and international affairs.

My analysis uses three spreadsheets with details of ICO rulings which were recently disclosed via the What Do They Know website, in response to a request from Alison Benson. The spreadsheets list the ICO’s formal decision notices from the first one in 2005 until last month.

The ICO maintains that it provided this material voluntarily ‘on a discretionary basis’, arguing that the information was already available through its routine publication of decision notices.

However, the supply of these three files makes statistical analysis of ICO rulings much more practical than trying to process all the individually published decisions. The ICO’s release of this dataset is therefore a positive and welcome step in terms of its own transparency.
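For anyone wanting to reproduce this kind of breakdown, a minimal sketch of the calculation in Python/pandas might look like the following. The column names ('exemption', 'outcome') and file name are hypothetical, not the ICO's own; the real spreadsheets would need mapping onto this shape first.

```python
import pandas as pd

# Hypothetical sketch: one row per decision notice, with the exemption cited
# and whether the ICO upheld the requester's complaint.
decisions = pd.read_excel("ico_decision_notices.xlsx")

by_exemption = (
    decisions
    .groupby("exemption")
    .agg(complaints=("outcome", "size"),
         upheld=("outcome", lambda s: (s == "Upheld").sum()))
)
# Percentage of complaints about each exemption which the ICO upheld.
by_exemption["pct_upheld"] = (
    100 * by_exemption["upheld"] / by_exemption["complaints"]
).round()

print(by_exemption.sort_values("pct_upheld", ascending=False))
```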

Environmental information

Note that my analysis excludes environmental information, which falls under a different law, the Environmental Information Regulations. The EIR exceptions do not exactly correspond to the FOI exemptions, so the data cannot be combined.

There are fewer EIR cases than FOI ones, but a similar pattern emerges. The ICO has more frequently overruled public authorities when they base an EIR refusal on commercial confidentiality or the internal nature of communications than when they rely on, say, protecting the course of justice.

Delay

It is also possible to analyse aspects of the dataset in more detailed ways. Here is one example.

This table shows the 15 public authorities against whom the ICO has most often upheld complaints about delay in processing FOI requests (under section 10 of the FOI Act), and how many times this has happened since 2005.

Public authority | Upheld complaints about FOI delay
Home Office | 303
Ministry of Justice | 173
NHS England | 162
Cabinet Office | 161
Dept of Health and Social Care | 84
Metropolitan Police | 82
Dept for Work and Pensions | 79
Foreign Office | 74
Sussex Police | 74
BBC | 60
Ministry of Defence | 58
Dept for Education | 54
Wirral Council | 43
Croydon Council | 39
Information Commissioner’s Office | 35
Source: Martin Rosenbaum, based on ICO data

On this measure the public authorities with the biggest record of delay since FOI was implemented are the Home Office, the Ministry of Justice, NHS England and the Cabinet Office.

Ironically the authority which comes 15th on this list of shame is the ICO itself! This is clearly a very bad record for an organisation which should be setting a good example of prompt compliance with the law, but at least as a regulator it has been willing to point out its own failings.

Notes: 1) My analysis amalgamates bodies which at some point since 2005 had a change of name or scope but remained essentially the same organisation (eg NHS England with NHS Commissioning Board; Department of Health and Social Care with Department of Health). 2) The ICO is thoroughly and annoyingly inconsistent when naming authorities (eg sometimes using ‘Metropolitan Police Service’ and sometimes ‘Commissioner of the Metropolitan Police Service’). I hope I have spotted all such instances and combined the figures accordingly, but it is possible I have missed some.
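To illustrate the kind of calculation involved (counting upheld section 10 complaints per authority while amalgamating renamed bodies, as described in the notes above), here is a rough sketch in Python/pandas. Again the file and column names are hypothetical stand-ins rather than the ICO's own.

```python
import pandas as pd

# Hypothetical sketch: one row per decision notice.
decisions = pd.read_csv("ico_decision_notices.csv")

# Amalgamate bodies that changed name but remained the same organisation.
aliases = {
    "NHS Commissioning Board": "NHS England",
    "Department of Health": "Dept of Health and Social Care",
    "Metropolitan Police Service": "Metropolitan Police",
    "Commissioner of the Metropolitan Police Service": "Metropolitan Police",
}
decisions["authority"] = decisions["authority"].replace(aliases)

# Count upheld complaints about delay (section 10) for each authority.
delay_upheld = decisions[
    (decisions["section"] == 10) & (decisions["outcome"] == "Upheld")
]
print(delay_upheld["authority"].value_counts().head(15))
```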


Election prediction models: how they fared

Which predictive model for the results of the election was best – or the least bad?

I say ‘least bad’ because, in what may seem like a familiar tradition of the British polling industry, they all overstated how well Labour would do.

However there was also a huge gap between the least bad and the much worse. In a close election, discrepancies of this size would have pointed during the campaign to very different political situations, making the forecasting models look like contradictory chaos. This level of variation is somewhat disguised by the universal prediction of what could be called a ‘Labour landslide’, now confirmed as fact (even if it isn’t as big as they all said it would be).

Labour seats

Let’s look at the forecasts for the total number of Labour seats. This determines the size of Labour’s majority and is the most politically significant single measure of how the electorate voted.

Actual result for Labour seats: 412

Model | Predicted Labour seats
Britain Predicts | 418
More In Common | 430
YouGov | 431
Election Maps | 432
Economist* | 433
JL Partners | 442
Focal Data | 444
Financial Times | 447
Electoral Calculus | 453
Ipsos | 453
We Think | 465
Survation** | 470
Savanta | 516

I have listed the models which predicted votes for each constituency in Great Britain and were included in the excellent aggregation site produced by Peter Inglesby. (If that means any model is missing which should have been added, my apologies.)

Note that what I am comparing here are the statistical models which aimed to forecast the voting pattern in each seat, not normal opinion polls which only provide national figures for vote share. These competing models are all based on different methodologies, the full details of which are not made public.

The large number of such models was a new feature of this election, linked to the growing adoption of MRP polling along with developments in the techniques and capacity of data science.

On this basis the winner would be the Britain Predicts model devised by Ben Walker and the New Statesman. Well done to them.

This model is not based on a single poll itself, but takes published polling data and mixes it into its analysis. This is also true of some of the others around the middle of the table, such as the Economist and the Financial Times.

On the other hand, polling companies like YouGov and Survation base their constituency-level forecasts on their own MRP polls (multilevel regression and post-stratification), which combine large samples with statistical modelling to produce a forecast for each seat.
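For readers unfamiliar with the mechanics, here is a deliberately simplified sketch of the post-stratification half of MRP, using made-up support figures and demographic counts. Real MRP models fit a multilevel regression across far more cells and covariates; this only shows how cell-level estimates get weighted into a seat forecast.

```python
import numpy as np

# Toy illustration of post-stratification (all figures invented).
# Stage 1 (not shown): a multilevel regression fitted to a large national
# sample estimates Labour support for each demographic "cell"
# (e.g. age band x education x past vote).
cell_support = np.array([0.55, 0.40, 0.30])   # modelled Labour support per cell

# Stage 2: weight those estimates by how many voters in each constituency
# fall into each cell, using census-style data.
constituency_cells = {
    "Seat A": np.array([20_000, 15_000, 10_000]),
    "Seat B": np.array([5_000, 10_000, 30_000]),
}
for seat, counts in constituency_cells.items():
    share = np.average(cell_support, weights=counts)
    print(f"{seat}: estimated Labour share {share:.1%}")
```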

The closest MRP here is the More in Common one, with YouGov narrowly behind. However the models at the bottom of the table are also MRP polls rather than mixed models – We Think, Survation and Savanta. (It should be noted that the Savanta poll was conducted in the middle of the campaign and so was more vulnerable to late swing.)

Constituency predictions

However a different winner emerges from a more detailed examination of the constituency level results. This is based on my analysis using the data aggregated on Peter Inglesby’s website.

Although Britain Predicts was closest on the overall picture, it got 80 individual seats wrong in terms of the winning party. These errors were often in opposite directions, so at the net level they largely cancelled each other out: it predicted Labour would win 33 seats that the party lost, while also predicting Labour would lose 26 seats which it actually won.

In contrast YouGov got the fewest seats with the wrong party winning: just 58. So well done to them. I’m actually being a bit harsh to YouGov here, as this counts the 10 seats they predicted as a ‘tie’ as all wrong – on the basis that (a) the outcome wasn’t a tie (haha), and (b) companies shouldn’t gain a better ranking via ambiguous forecasts which their competitors avoid. If you do not agree with that – which might be the more measured approach – you can score them at 53.

The two models that did next best at the constituency level were Election Maps (62 wrong) and the Economist (76 wrong). The worst-scoring models were We Think and Savanta, which both got 134 seats wrong.

This table shows the number of constituencies where the model wrongly predicted the winning party.

Model | Errors at seat level
YouGov | 53
Election Maps | 62
Economist | 76
Britain Predicts | 80
Focal Data | 80
More in Common | 83
JL Partners | 91
Electoral Calculus | 93
Financial Times | 93
Ipsos | 93
Survation | 100
Savanta | 134
We Think | 134
Source: Analysis by Martin Rosenbaum, using data from Peter Inglesby’s aggregation site.

(I’m here adopting the slightly kinder option for YouGov in the table).
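As an illustration of how such error counts could be tallied, here is a rough sketch assuming a tidy table of forecasts with hypothetical columns ('model', 'forecast_winner', 'actual_winner'), which is not necessarily the format of Peter Inglesby's data.

```python
import pandas as pd

# Hypothetical sketch: one row per constituency per model.
forecasts = pd.read_csv("constituency_forecasts.csv")

# A seat counts as an error when the forecast winner differs from the actual
# winner. A forecast of "Tie" is automatically wrong under this stricter
# scoring; the kinder option would instead check whether the actual winner
# was one of the tied parties.
wrong = forecasts["forecast_winner"] != forecasts["actual_winner"]
errors_by_model = wrong.groupby(forecasts["model"]).sum().sort_values()
print(errors_by_model)
```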

This constituency-level analysis also sheds light on the nature of the forecasting mistakes.

There were some common issues. Generally the models failed to predict the success of the independent candidates who appealed largely to Muslim voters and either won or significantly affected the result. On the one hand it is difficult for nationally structured models to pick up on anomalous constituencies. On the other hand, it is possible that the models typically do not give enough weight to religion (as opposed to ethnicity).

On this point there’s increasing evidence of growing differences in voting patterns between Muslim and Hindu communities. It’s striking that 12 of the 13 models (all except YouGov) wrongly forecast that the Tories would lose Harrow East, a seat with a large Hindu population where the party bucked the trend and actually increased its majority.

The models also failed almost universally to predict quite how badly the SNP would do – ironically with the exception of Savanta, the least accurate model overall.

On the other hand there were also wide variations between the models in terms of where they made mistakes. In all there were 245 seats – 39% of the total – where at least one model forecast the wrong winning party.

The seats that most confused the modellers are as follows.

Seats where all 13 modellers predicted the wrong winning party: Birmingham Perry Barr, Blackburn, Chingford and Woodford Green, Dewsbury and Batley, Fylde, Harwich and North Essex, Keighley and Ilkley, Leicester East, Leicester South, Staffordshire Moorlands, Stockton West, plus the final seat to declare: Inverness, Skye and West Ross-shire***.

Seats where 12 of the 13 modellers predicted the wrong winning party: Beverley and Holderness, Godalming and Ash, Harrow East, Isle of Wight East, Mid Bedfordshire, North East Hampshire, South Basildon and East Thurrock, The Wrekin.

Overall seats v individual constituency forecasts

So which is more important – to get closest to the overall national picture, or to get most individual seats right?

The statistical modelling processes involved are inherently probabilistic, and it’s assumed they will make some errors on individual seats that will cancel each other out. That’s the case for saying Britain Predicts is the winner.

But if you want confidence that the modelling process is working comparatively accurately, that would point towards getting the most individual seats right – and YouGov.

Note that this analysis is based just on the identity of the winning party in each seat. Comparing the actual against forecast vote shares in each constituency could give a different picture. I haven’t had the time to do that more detailed work yet.

Traditional polling v predictive models

The traditional (non-MRP) polls also substantially overstated the Labour vote share, as the MRP ones did, raising further awkward questions for the polling industry. However, there’s an interesting difference between the potential impact of the traditional polls and that of the predictive models which proliferated at this election.

Without these models, the normal general assumption for translating vote shares into seats would have been uniform national swing. (This would have been in line with the historical norm that turned out to be inapplicable to this election, in which Labour and the LibDems benefitted greatly from differential swing patterns across the country.) And seat forecasts reliant on that old standard assumption would have suggested nothing like the massive Labour majorities indicated by the models.
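For illustration, uniform national swing simply adds the change in each party's national vote share to that party's previous share in every seat, then picks the seat winner from the adjusted shares. A toy sketch with invented figures:

```python
# Toy illustration of uniform national swing (all figures invented).
previous_national = {"Lab": 0.32, "Con": 0.44}
current_national = {"Lab": 0.35, "Con": 0.24}

# National change in vote share for each party.
swing = {p: current_national[p] - previous_national[p] for p in previous_national}

# Apply the same change to one seat's previous result.
seat_previous = {"Lab": 0.38, "Con": 0.45}
seat_projected = {p: seat_previous[p] + swing[p] for p in seat_previous}

winner = max(seat_projected, key=seat_projected.get)
print(seat_projected, "projected winner:", winner)
```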

Although the predictive modelling in 2024 universally overstated Labour’s position, it did locate us in broadly the correct political terrain – ‘Labour landslide’. We wouldn’t have been expecting that kind of outcome if we’d only had the traditional polling (even with the way it exaggerated the Labour share).

To that extent the result was some kind of vindication for predictive modelling and its seat-based approach in general, despite the substantial errors. The MRP polls and the models that reflected them succeeded in detecting some crucial differential swings in social/geographic/political segments of the population (while also exaggerating their implications).

However, it’s also possible that the models/polls could in a way have been self-negating predictions. By forecasting such a large Labour victory and huge disaster for the Tories, they could have depressed turnout amongst less committed Labour supporters who then decided not to bother going to the polling station, and/or they could have nudged people over into voting LibDem, Green or independent (or indeed Reform) who were until the end of the campaign intending to back Labour.

Notes

*Note on Economist prediction: Their website gives 427 as a median prediction for Labour seats, but their median predictions for all parties sum to well short of the total number of GB seats. In my view that would not make a fair comparison. Instead I have used the figure in Peter Inglesby’s summary table, which I assume derives from adding up the individual constituency predictions.

**UPDATE 1: Note on Survation prediction: After initially publishing this piece I was informed that Survation released a very late update to their forecast which cut their prediction for Labour seats from 484 to 470. The initial version of my table used the 484 figure, which I have now replaced with 470. However, despite reducing the extent of their error, this does not affect their position in the table as second last.

Other notes: (1) I haven’t been able to personally check the accuracy of Peter Inglesby’s data, for reasons of time, but I have no reason to doubt it. I should add that I am very grateful to him for his work in bringing all the modelling forecasts together in one place. (2) This article doesn’t take account of the outcome in Inverness, Skye and West Ross-shire, which at the time of writing was yet to declare.

***UPDATE 2: The eventual LibDem victory in Inverness, Skye and West Ross-shire was not predicted by any model; all forecast that the SNP would win. This means it has to be added to my initial list of seats which all the models got wrong, which therefore now totals 12 constituencies.
