FOI: Which complaints are upheld by the ICO?

Freedom of information requests can be rejected for a range of reasons, but some are much more likely to be overturned by the Information Commissioner’s Office than others.

The details of this are made clear by my analysis of a dataset recently released by the ICO covering nearly 22,000 decisions issued by the information rights regulator since FOI came into force.

For example, the ICO has upheld nearly half the complaints received from information requesters against FOI refusals linked to protecting commercial interests. But it has upheld only one in six objections to refusals based on international relations.

This table shows, for each of the legal grounds for dismissing FOI requests, the number of complaints about that reason which the ICO has ruled on and the percentage which it has upheld (ie backing the requester and overriding the public authority).

Subject matter (section of FOI Act) | Number of complaints | Percentage upheld
The economy (29) | 27 | 56
Relations within UK (28) | 17 | 53
Commercial interests (43) | 1,010 | 47
Future publication or research (22/22A) | 213 | 44
Health and safety (38) | 119 | 42
Policy formation (35) | 622 | 38
Already accessible (21) | 332 | 36
Effective conduct of public affairs (36) | 967 | 35
Audits (33) | 38 | 34
Confidential information (41) | 605 | 34
Law enforcement (31) | 860 | 30
Vexatious or repeated (14) | 1,498 | 23
Investigations (30) | 318 | 21
Personal data (40) | 3,097 | 18
Monarchy and honours (37) | 181 | 18
Defence (26) | 41 | 17
National security (24) | 299 | 17
International relations (27) | 292 | 16
Legal privilege (42) | 507 | 16
Otherwise prohibited (44) | 406 | 14
Cost (12) | 1,491 | 12
Court records (32) | 108 | 8
Security bodies (23) | 304 | 7
Parliamentary privilege (34) | 12 | 0
Source: Martin Rosenbaum, based on ICO data
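The percentages in the table reduce to a simple tally per exemption. Here is a minimal Python sketch, assuming each ICO decision record carries the exemption cited and whether the complaint was upheld (the records shown are invented for illustration; the real dataset's column layout may differ):

```python
from collections import defaultdict

# Hypothetical decision records: (exemption cited, complaint upheld?)
decisions = [
    ("s.43 Commercial interests", True),
    ("s.43 Commercial interests", False),
    ("s.27 International relations", False),
    ("s.27 International relations", False),
    ("s.27 International relations", True),
]

counts = defaultdict(lambda: [0, 0])  # section -> [total complaints, upheld]
for section, upheld in decisions:
    counts[section][0] += 1
    counts[section][1] += upheld

for section, (total, upheld) in sorted(counts.items()):
    print(f"{section}: {total} complaints, {round(100 * upheld / total)}% upheld")
```

Run over the full spreadsheet of nearly 22,000 decisions, the same loop yields the figures above.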

Or in chart form:

So during FOI’s two decades of operation, the ICO has been much happier to overrule public authorities on matters like commercial interests and policy formation than on topics like defence, security and international affairs.

My analysis uses three spreadsheets with details of ICO rulings which were recently disclosed via the What Do They Know website, in response to a request from Alison Benson. The spreadsheets list the ICO’s formal decision notices from the first one in 2005 until last month.

The ICO maintains that it provided this material voluntarily ‘on a discretionary basis’, arguing that the information was already available through its routine publication of decision notices.

However the supply of these three files makes statistical analysis of ICO rulings much more practical than trying to process all the individually published decisions. The ICO’s release of this dataset is therefore a positive and welcome step in terms of its own transparency.

Environmental information

Note that my analysis excludes environmental information, which falls under a different law, the Environmental Information Regulations. The EIR exceptions do not exactly correspond to the FOI exemptions, so the data cannot be combined.

There are fewer EIR cases than FOI ones, but a similar pattern emerges. Thus the ICO has more frequently overruled public authorities when they base an EIR refusal on commercial confidentiality or the internal nature of communications than when they rely, say, on protecting the course of justice.

Delay

It is also possible to analyse aspects of the dataset in more detailed ways. Here is one example.

This table shows the 15 public authorities against whom the ICO has most often upheld complaints about delay in processing FOI requests (under section 10 of the FOI Act), and how many times this has happened since 2005.

Public authority | Upheld complaints about FOI delay
Home Office | 303
Ministry of Justice | 173
NHS England | 162
Cabinet Office | 161
Dept of Health and Social Care | 84
Metropolitan Police | 82
Dept for Work and Pensions | 79
Foreign Office | 74
Sussex Police | 74
BBC | 60
Ministry of Defence | 58
Dept for Education | 54
Wirral Council | 43
Croydon Council | 39
Information Commissioner’s Office | 35
Source: Martin Rosenbaum, based on ICO data
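A league table like this is a filter-and-count over the same dataset. A sketch, assuming each ruling records the authority, the section complained about and the outcome (all names and values here are invented placeholders, not the ICO's actual field names):

```python
from collections import Counter

# Hypothetical rulings: (public authority, section complained about, outcome)
rulings = [
    ("Home Office", "s.10", "upheld"),
    ("Home Office", "s.10", "not upheld"),
    ("Cabinet Office", "s.10", "upheld"),
    ("Home Office", "s.40", "upheld"),  # not a delay case, so excluded below
]

# Count only upheld complaints about delay (section 10)
delay_upheld = Counter(
    authority
    for authority, section, outcome in rulings
    if section == "s.10" and outcome == "upheld"
)

for authority, n in delay_upheld.most_common(15):
    print(authority, n)
```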

On this measure the public authorities with the biggest record of delay since FOI was implemented are the Home Office, the Ministry of Justice, NHS England and the Cabinet Office.

Ironically the authority which comes 15th on this list of shame is the ICO itself! This is clearly a very bad record for an organisation which should be setting a good example of prompt compliance with the law, but at least as a regulator it has been willing to point out its own failings.

Notes: 1) My analysis amalgamates bodies which at some point since 2005 had some change of name or scope but remained essentially the same organisation (eg NHS England with NHS Commissioning Board; Department of Health and Social Care with Department of Health). 2) The ICO is thoroughly and annoyingly inconsistent when naming authorities (eg sometimes using ‘Metropolitan Police Service’ and sometimes using ‘Commissioner of the Metropolitan Police Service’). I hope I have spotted all such instances and combined the figures accordingly, but it is possible I have missed some.
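The amalgamation described in the notes amounts to mapping each name variant to a canonical form before counting. A minimal sketch (the alias table below is illustrative, not exhaustive, and the helper name is mine):

```python
# Illustrative aliases for combining renamed or inconsistently named bodies
ALIASES = {
    "NHS Commissioning Board": "NHS England",
    "Department of Health": "Dept of Health and Social Care",
    "Metropolitan Police Service": "Metropolitan Police",
    "Commissioner of the Metropolitan Police Service": "Metropolitan Police",
}

def canonical(name: str) -> str:
    """Map an authority name to its canonical form before tallying."""
    return ALIASES.get(name.strip(), name.strip())

print(canonical("Metropolitan Police Service"))  # Metropolitan Police
```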


Election prediction models: how they fared

Which predictive model for the results of the election was best – or the least bad?

I say ‘least bad’ because, in what seems like a frequent tradition of the British polling industry, they all overstated how well Labour would do.

However there was also a huge gap between the least bad and the much worse. In a close election, discrepancies of this size would have pointed to very different political situations during the campaign, making the forecasting models look like contradictory chaos. This level of variation is somewhat disguised by the universal prediction of what could be called a ‘Labour landslide’, now confirmed as fact (even if it isn’t as big as they all said it would be).

Labour seats

Let’s look at the forecasts for the total number of Labour seats. This determines the size of Labour’s majority and is the most politically significant single measure of how the electorate voted.

Actual result for Labour seats | 412
Britain Predicts | 418
More in Common | 430
YouGov | 431
Election Maps | 432
Economist* | 433
JL Partners | 442
Focal Data | 444
Financial Times | 447
Electoral Calculus | 453
Ipsos | 453
We Think | 465
Survation** | 470
Savanta | 516

I have listed the models which predicted votes for each constituency in Great Britain and were included in the excellent aggregation site produced by Peter Inglesby. (If that means any model is missing which should have been added, my apologies.)

Note that what I am comparing here are the statistical models which aimed to forecast the voting pattern in each seat, not normal opinion polls which only provide national figures for vote share. These competing models are all based on different methodologies, the full details of which are not made public.

The large number of such models was a new feature of this election, linked to the growing adoption of MRP polling along with developments in the techniques and capacity of data science.

On this basis the winner would be the Britain Predicts model devised by Ben Walker and the New Statesman. Well done to them.

This model is not based on a single poll itself, but takes published polling data and mixes it into its analysis. This is also true of some of the others around the middle of the table, such as the Economist and the Financial Times.

On the other hand polling companies like YouGov and Survation base their constituency-level forecasts on their own MRP polls (Multilevel Regression and Post-stratification), combining large samples and statistical modelling to produce forecasts for each seat.

The closest MRP here is the More in Common one, with YouGov narrowly next. However the models at the bottom of the table are also MRP polls rather than mixed models – We Think, Survation and Savanta. (It should be noted that the Savanta poll was conducted in the middle of the campaign and so was more vulnerable to late swing.)

Constituency predictions

However a different winner emerges from a more detailed examination of the constituency level results. This is based on my analysis using the data aggregated on Peter Inglesby’s website.

Although Britain Predicts was closest on the overall picture, it got 80 individual seats wrong in terms of the winning party. These errors were often in opposite directions, so at the net level they largely cancelled each other out: it predicted Labour would win 33 seats that they lost, while also predicting they would lose 26 seats which the party actually won.

In contrast YouGov got the fewest seats with the wrong party winning, just 58. So well done to them. And I’m actually being a bit harsh to YouGov here, as this is counting the 10 seats they predicted as a ‘tie’ as all wrong – on the basis that (a) the outcome wasn’t a tie (haha), and (b) companies shouldn’t get ranked with a better performance via ambiguous forecasts which their competitors avoid. If you do not agree with that, which might be the more measured approach, you can score them at 53.

The two models that did next best at the constituency level were Election Maps (62 wrong) and the Economist (76 wrong). The worst-scoring models were We Think and Savanta, which both got 134 seats wrong.

This table shows the number of constituencies where the model wrongly predicted the winning party.

Model | Errors at seat level
YouGov | 53
Election Maps | 62
Economist | 76
Britain Predicts | 80
Focal Data | 80
More in Common | 83
JL Partners | 91
Electoral Calculus | 93
Financial Times | 93
Ipsos | 93
Survation | 100
Savanta | 134
We Think | 134
Source: Analysis by Martin Rosenbaum, using data from Peter Inglesby’s aggregation site.

(I’m here adopting the slightly kinder option for YouGov in the table).
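Scoring a model this way reduces to comparing its predicted winner against the actual winner in every constituency. A sketch with invented seats and parties, treating a ‘tie’ prediction as wrong (the harsher convention discussed above):

```python
# Hypothetical per-seat data: actual winner and one model's predicted winner
actual = {"Seat A": "Lab", "Seat B": "Con", "Seat C": "LD", "Seat D": "Lab"}
predicted = {"Seat A": "Lab", "Seat B": "Lab", "Seat C": "SNP", "Seat D": "Tie"}

# Any mismatch counts as an error, including 'Tie' predictions
errors = sum(1 for seat, winner in actual.items() if predicted[seat] != winner)
print(errors)  # 3 of the 4 seats are scored as wrong
```

The kinder convention for ties would simply add a clause excluding (or partially crediting) seats where the prediction was a tie.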

This constituency-level analysis also sheds light on the nature of the forecasting mistakes.

There were some common issues. Generally the models failed to predict the success of the independent candidates who appealed largely to Muslim voters and either won or significantly affected the result. On the one hand it is difficult for nationally structured models to pick up on anomalous constituencies. On the other it is possible that the models typically do not give enough weight to religion (as opposed to ethnicity).

On this point there’s increasing evidence of growing differences in voting patterns between Muslim and Hindu communities. It’s striking that 12 of the 13 models (all except YouGov) wrongly forecast that the Tories would lose Harrow East, a seat with a large Hindu population where the party bucked the trend and actually increased its majority.

The models also failed almost universally to predict quite how badly the SNP would do – ironically with the exception of Savanta, the least accurate model overall.

On the other hand there were also wide variations between the models in terms of where they made mistakes. In all there were 245 seats – 39% of the total – where at least one model forecast the wrong winning party.

The seats that most confused the modellers are as follows.

Seats where all the 13 modellers predicted the wrong winning party: Birmingham Perry Barr, Blackburn, Chingford and Woodford Green, Dewsbury and Batley, Fylde, Harwich and North Essex, Keighley and Ilkley, Leicester East, Leicester South, Staffordshire Moorlands, Stockton West, plus the final seat to declare: Inverness, Skye and West Ross-shire***.

Seats where 12 of the 13 modellers predicted the wrong winning party: Beverley and Holderness, Godalming and Ash, Harrow East, Isle of Wight East, Mid Bedfordshire, North East Hampshire, South Basildon and East Thurrock, The Wrekin.

Overall seats v individual constituency forecasts

So which is more important – to get closest to the overall national picture, or to get most individual seats right?

The statistical modelling processes involved are inherently probabilistic, and it’s assumed they will make some errors on individual seats that will cancel each other out. That’s the case for saying Britain Predicts is the winner.

But if you want confidence that the modelling process is working comparatively accurately, that would point towards getting the most individual seats right – and YouGov.

Note that this analysis is based just on the identity of the winning party in each seat. Comparing the actual against forecast vote shares in each constituency could give a different picture. I haven’t had the time to do that more detailed work yet.

Traditional polling v predictive models

The traditional (non-MRP) polls also substantially overstated the Labour vote share, as the MRP ones did, raising further awkward questions for the polling industry. However, there’s an interesting difference between the potential impact of the traditional polls compared to the predictive models which proliferated at this election.

Without these models, the normal general assumption for translating vote shares into seats would have been uniform national swing. (This would have been in line with the historical norm that turned out to be inapplicable to this election, where Labour and the LibDems benefitted greatly from differential swing patterns across the country.) And seat forecasts reliant on that old standard assumption would have implied nothing like the massive Labour majorities suggested by the models.
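Uniform national swing can be sketched as applying the same national vote-share change to every seat’s previous result; the parties, swings and shares below are invented purely for illustration:

```python
# Uniform national swing: the same percentage-point change everywhere
national_swing = {"Lab": +10.0, "Con": -20.0}

# Previous vote shares (%) in two hypothetical seats
previous = {
    "Seat A": {"Lab": 35.0, "Con": 45.0},
    "Seat B": {"Lab": 30.0, "Con": 55.0},
}

def uns_winner(shares: dict) -> str:
    """Predicted winner after applying the uniform national swing."""
    adjusted = {p: s + national_swing.get(p, 0.0) for p, s in shares.items()}
    return max(adjusted, key=adjusted.get)

print({seat: uns_winner(shares) for seat, shares in previous.items()})
```

MRP-style models differ precisely in that the swing applied to each seat depends on its demographic and political make-up, rather than being constant.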

Although the predictive modelling in 2024 universally overstated Labour’s position, it did locate us in broadly the correct political terrain – ‘Labour landslide’. We wouldn’t have been expecting that kind of outcome if we’d only had the traditional polling (even with the way it exaggerated the Labour share).

To that extent the result was some kind of vindication for predictive modelling and its seat-based approach in general, despite the substantial errors. The MRP polls and the models that reflected them succeeded in detecting some crucial differential swings in social/geographic/political segments of the population (while also exaggerating their implications).

However, it’s also possible that the models/polls could in a way have been self-negating predictions. By forecasting such a large Labour victory and huge disaster for the Tories, they could have depressed turnout amongst less committed Labour supporters who then decided not to bother going to the polling station, and/or they could have nudged people over into voting LibDem, Green or independent (or indeed Reform) who were until the end of the campaign intending to back Labour.

Notes

*Note on Economist prediction: Their website gives 427 as a median prediction for Labour seats, but their median predictions for all parties sum up to well short of the total number of GB seats. In my view that would not make a fair comparison. Instead I have used the figure in Peter Inglesby’s summary table, which I assume derives from adding up the individual constituency predictions.

**UPDATE 1: Note on Survation prediction: After initially publishing this piece I was informed that Survation released a very late update to their forecast which cut their prediction for Labour seats from 484 to 470. The initial version of my table used the 484 figure, which I have now replaced with 470. However, despite reducing the extent of their error, this does not affect their position in the table as second last.

Other notes: (1) I haven’t been able to personally check the accuracy of Peter Inglesby’s data, for reasons of time, but I have no reason to doubt it. I should add that I am very grateful to him for his work in bringing all the modelling forecasts together in one place. (2) This article doesn’t take account of the outcome in Inverness, Skye and West Ross-shire, which at the time of writing was yet to declare.

***UPDATE 2: The eventual LibDem victory in Inverness, Skye and West Ross-shire was not predicted by any model; all of them forecast an SNP win. This seat therefore has to be added to my initial list of constituencies which all the models got wrong, which now totals 12.


The art not seen

Suppose you are the lucky owner of a very valuable object which is ‘pre-eminent’ for its historic or artistic interest.

When you die, that might result in a substantial inheritance tax payment. Except that this can be completely avoided, if HMRC agrees that the item constitutes a national heritage asset, and the inheritor is willing to let the British public come and look at it.

And if you are not already the owner of such an artefact, but you can afford it, you could buy one – as a handy method of reducing the tax liability of your estate. Naturally there are legal and financial advisors who will help you do this.

The objects exempted from tax under this law range from a Rembrandt self-portrait to a ‘pair of Chelsea Derby candlestick figures, each of a scantily draped winged cupid kneeling with arm around a floral encrusted bough, rococo scroll base with gilt enrichment, 6 3/4in. high (both with sconces missing, some damages)’.

A full list is published by the government, in a database currently containing over 36,000 entries. Some are on public display – HMRC has told me that about 8,000 are on loan to museums or galleries. But for the others, which make up the large majority, how often does anybody actually make use of their legal right to go and see them?

No overall statistics are available to answer this question. However, according to information I have just obtained from HMRC under FOI, there were just 5,521 searches of the database in the last financial year (2023/24).

The number of actual visits will doubtless be much smaller than the number of database searches, many of which will not lead to any further action. Even allowing that visitors might see more than one item at a time, it seems very likely that most of these ‘national assets’ – saved for the nation at public expense – are never appreciated by any member of the general public, and certainly not by significant numbers of them.

You can find out what is available, and when and how it can be viewed, by searching the database. In many cases public access must be allowed without a prior appointment for at least a few days each year. Outside these open days, an appointment may be required.

That’s the theory. The practice might be somewhat trickier. As the tax consultancy Ross Martin states: “It seems that few people try and see some of the objects. On a practical level, it is very difficult to gain access to some of the assets. Access in most cases is handled by private client law firms and the links given to open days can be uninformative. Be prepared to be ruthlessly persistent if you wish to see an object or a collection.”

HMRC also informed me that ‘we do from time to time get contacted by members of the public directly to make us aware of any access issues that they have experienced’. But it does not have a central record of receiving any formal complaints about this.

In the 1990s the campaigning comedian Mark Thomas organised coachloads of visitors to attempt to see various artworks involved for a television programme. The law has since been changed, and access should have become easier.

HMRC estimates that this inheritance tax exemption/loophole (according to your personal preference), together with a similar rule for land and buildings, reduces government tax receipts by about £60 million annually.


Anomalies detected, but every little helps …

Like many laws the Freedom of Information Act has apparent anomalies, which may or may not have been intentional.

It seems very odd, for example, that the FOI process doesn’t let you find out about complaints and other issues which council trading standards departments are pursuing with businesses. I’d expect even people who don’t much like FOI to think that kind of consumer protection information should be publicly available.

But it isn’t, because the Enterprise Act 2002 stops councils from releasing it. After some early legal disputation it was ruled that this legislation trumps the disclosure requirements of the FOI Act. To illustrate, here’s an ICO decision notice about a case relating to a window installation company.

Another anomaly is that obtaining environmental information is not covered by the FOI law, but by a separate set of rules, the Environmental Information Regulations. These are similar to the FOI regime, but not identical, and in my opinion both public authorities and people requesting information are not sufficiently alert to the differences.

These two anomalies are connected, in that I have recently successfully argued that while the Enterprise Act can block the disclosure of material under FOI, it can’t be used to prevent the release of environmental information. The EIR do not provide a legal basis for that kind of refusal.

So Hertfordshire Council have now been forced to send me copies of trading standards emails sent to Tesco about price displays under the planned Scottish deposit return scheme for single-use drinks containers.

(Businesses which operate across multiple locations can deal with just one council as the ‘primary authority’ for trading standards purposes. Tesco’s primary authority is Hertfordshire, where its corporate head office is based. This arrangement extends to Scotland, as – unlike the deposit scheme itself – consumer protection is not a devolved policy area.)

Over several months Hertfordshire Council went through a number of different and implausible arguments while it tried to resist giving me this documentation. It first proclaimed that due to the Enterprise Act disclosure would prejudice the administration of justice; it then moved to saying it would damage the interests of the information provider (ie Tesco); it finally decided to assert that a deposit return scheme for bottles and cans was nothing to do with the environmental issue of recycling – an argument dismissed by the Information Commissioner’s Office, which ruled in my favour.

The emails I have received show that in 2022 the council was telling Tesco that shop price labels would have to state the full price for the relevant bottled and canned products including the deposit, not a price separately without the deposit.

However the implementation of the Scottish scheme (which was beset by controversies) has since been postponed, so this is no longer a pressing concern. As matters now stand, the UK, Scottish and Welsh governments are pledged to introduce a UK-wide deposit return scheme in October 2025. If this goes ahead then the issue of how prices are displayed in order to be fair to consumers will doubtless be widely raised.

Further reading: I give a detailed account of the numerous significant differences between FOI and EIR, and how they affect the process of obtaining information, in my book.


Charlotte Owen, Ross Kempsell and the secrecy of HOLAC

My attempt to find out what the House of Lords Appointments Commission had to say (if anything) about the award of peerages to Charlotte Owen and Ross Kempsell by Boris Johnson has just been rejected by the Information Commissioner’s Office.

I will now be appealing this to the First-tier Tribunal, on the grounds that in my opinion it is in the public interest for this material to be revealed, despite the view of the ICO.

Last July I made a freedom of information request to HOLAC for the material it held about the two individuals we now know as Lady Owen of Alderley Edge and Lord Kempsell, after their somewhat unexpected appointment to the House of Lords in Johnson’s resignation honours list.

After HOLAC declined to send me anything, I complained to the ICO. My arguments can be summarised as follows:

  • The appointment of members of a law-making assembly, people with substantial political influence and decision-making powers to make laws governing the rest of the population, requires a great degree of legitimacy, and that in turn demands maximum transparency.
  • This is especially true for these two individuals, given (a) their comparative youthfulness means they are likely to hold politically powerful roles for several decades and indeed in due course may well be amongst the longest-serving legislators in the UK’s history; and (b) the widespread public puzzlement and concern as to what they have achieved or what qualities they possess.
  • Issues of propriety (HOLAC’s responsibility here) are an important aspect of assessing suitability for membership of the House of Lords.
  • Disclosure is necessary for the legitimate interests of the general public to understand fully the processes for appointing people who take decisions on behalf of the nation, and for the public to be able to see for themselves whether the processes are adequate.

HOLAC argued:

  • Their process requires confidentiality to ensure that decisions are taken on the basis of full and honest information and that potentially sensitive vetting information can be candidly assessed.
  • The information it already places in the public domain about its working practices provides the public with reassurance that its processes are sufficiently rigorous.
  • In the case of a resignation honours list, its role is limited to an advisory one, notifying the prime minister of whether it has concerns about the propriety of peerage nominations, and does not extend to assessing the overall merits of nominees.

The ICO has upheld HOLAC’s stance. We will now find out what the First-tier Tribunal makes of the rival arguments. It is likely to take several months before the Tribunal decides the case.

I was interested to see last month that the UK Governance Project, a high-powered independent commission with a distinguished membership, drew attention to the problem of lack of transparency at HOLAC. It recommended that HOLAC should always have to publish a citation setting out the basis on which it has approved an individual for appointment.


Absent on Fridays

Pupils are over 20 per cent more likely to be absent from school on Fridays compared to Wednesdays.

The average rate of absence last term in England’s state-funded schools was 7.5% on Fridays. This compares to 6.7% on Mondays, the next most common day for school absence, and the lower figures for the middle of the week: 6.3% for Tuesdays, 6.2% for Wednesdays and 6.4% for Thursdays.
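The ‘over 20 per cent’ headline follows directly from those two rates: 7.5% absence on Fridays against 6.2% on Wednesdays is a relative increase of roughly 21%:

```python
# Average absence rates (%) on the highest and lowest days of the week
friday, wednesday = 7.5, 6.2
relative_increase = (friday / wednesday - 1) * 100
print(f"{relative_increase:.0f}%")  # prints 21%
```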

I have derived these figures by analysing the detailed school attendance data collected and published by the Department for Education.

The issue of school attendance is moving up the political agenda, as levels of absence are now much higher than before the covid pandemic.

The government has today announced what it calls ‘a major national drive to improve school attendance’, with measures targeted at tackling persistent absence. The Labour party is also focusing on the issue this week.

This weekly pattern of absence being highest on Fridays, and second-highest on Mondays, with better attendance mid-week, is a widespread feature of the current school system.

From my analysis of the DfE’s data, it applies in both primary and secondary schools, and also in all regions of England.

It is seen when looking at both authorised and unauthorised absences from school, including absence due to illness, which is the most common reason recorded for pupils not attending.

It was also evident throughout the autumn term, as can be seen in this chart (with a particular peak on the Friday before half-term).

The DfE’s data on school attendance can be downloaded here.

In a previous post I examined how school attendance can be affected by when in the year pupils are born.


Absence from school and month of birth

For school pupils, does when in the year they are born affect how often they are absent from school?

My analysis of government data suggests that secondary school pupils born in September to December have a somewhat higher absence rate than those born in May to August – which is actually the opposite of what I expected.

Absence from school is now significantly higher than before the covid-19 pandemic, and tackling it has been made a target of government education policy.

Since 2022 the Department for Education (DfE) has been collecting centrally some remarkably detailed and up-to-date data on attendance records for individual pupils from many schools in England, and publishing regular summaries.

The data collated by the department makes it possible to quickly analyse a wide range of factors and potential connections with absences.

Since month of birth is definitely related to other aspects of school life, such as how well pupils do in exams and in sport – the so-called ‘relative age effect‘ – I decided to explore any link with school attendance. Through a freedom of information request I obtained pupil attendance data from the DfE for the school year 2022/23, broken down by type of school, school year and month of birth.

This table shows the percentage of school sessions missed by pupils in selected year groups. It shows that for pupils in years 1 and 2 (aged 5/6 and 6/7), it was the summer-born pupils who had higher rates of absence. This was what I expected, given the well-documented school problems often faced by summer-born children.

But for pupils in years 8 to 11 (aged 12/13 to 15/16), it was those born in September to December who were more likely to be absent.

However the differences within the year groups are not massive, so this pattern (while clear) shouldn’t be overstated. For the intervening ages the data showed very little variation within each year group, so I haven’t presented the figures here. I haven’t obtained data for the reception year.

All this data relates to pupils at about 85% of state-funded schools in England, those which take part in the DfE scheme for automatically submitting daily attendance information.

The following graph shows the same data presented in the form of a line chart.

Persistent absence is a particular problem. This is defined as when pupils are absent for over 10% of school sessions. Analysing the data on persistent absence discloses a similar pattern.
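That threshold is simple to express in code; a sketch with invented session counts (the function name and figures are mine, for illustration):

```python
def is_persistently_absent(sessions_missed: int, sessions_possible: int) -> bool:
    """Persistent absence: more than 10% of possible school sessions missed."""
    return sessions_missed > 0.10 * sessions_possible

# A typical school year has around 380 half-day sessions
print(is_persistently_absent(40, 380))  # True  (about 10.5% missed)
print(is_persistently_absent(30, 380))  # False (about 7.9% missed)
```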

This is indicated in the table below (which involves data from primary and secondary schools, but not special schools).

Generally rates of absence increase as pupils get older and move into higher year groups. Perhaps this trend could help to explain the fact that in secondary schools it’s the older pupils within the year group who tend to be absent more often.

But this can’t be a complete explanation – for example, the frequency of persistent absence is higher for year 10 pupils born in September (32.4%) than for the older year 11 pupils born in August (30.7%), and similarly for various other data points.

So it looks like there may be some kind of relative age effect involved here, albeit probably quite a mild one.

Bear in mind that this is just one year’s data, the period in the wake of the pandemic could be atypical, and there is also the possibility of random variation.

As another potential factor, some illnesses have been associated with when people are born within the year. However, this would not explain the jumps in this data between August and September births.

The DfE data distinguishes authorised and unauthorised absences, but this does not help much in explaining the pattern identified here.

It’s important to note that there are other characteristics which clearly have a bigger impact on school attendance, including levels of disadvantage (poorer pupils are more likely to be absent) and ethnicity (Caribbean and White ethnic groups have higher absence rates than Indian, African and Chinese groups).

The data spreadsheet supplied to me under FOI by the Department for Education is here.

For background on the government’s impressive automated collection of real-time school attendance data, you can watch a recent talk by Caroline Kempner, the DfE’s head of data transformation, given at one of the regular Institute for Government ‘Data Bites’ events (from 37’25” in the video).

It was hearing this presentation which prompted me to do this analysis.

Absence from school and month of birth

The ICO’s tougher FOI enforcement policy 

This article was originally published on the website of Act Now Training, which provides training and consultancy on information law and governance.

Last month the Information Commissioner’s Office announced it was issuing another two Enforcement Notices against public authorities with extreme backlogs of FOI and EIR requests: the Ministry of Defence and the Environment Agency. From the published notices it is clear that both authorities had consistently failed to tackle their excessive delays, despite extensive discussions over many months with the ICO.

The ICO also issued Practice Recommendations, a lower level of sanction, to three authorities with a poor track record on FOI: Liverpool Council, Tower Hamlets Council and the Medicines and Healthcare Products Regulatory Agency. This brings the total of Enforcement Notices in the past year or so to six, and the number of Practice Recommendations to 12.

As Warren Seddon, the ICO’s Director of FOI, proclaimed in his blog on the subject, both these figures exceed the numbers previously issued by the ICO in the entire 17 years since the FOI Act came into force.

From my point of view, as a frequent requestor, this is good news. For requestors, the ICO’s current activity represents a welcome tougher stance on FOI regulation adopted by Seddon and also the Commissioner, John Edwards, since the latter took over at the start of last year.

Under the previous Commissioner Elizabeth Denham, any strategic enforcement regarding FOI and failing authorities had dwindled to nothing. The experience of requestors was that the FOI system was beset by persistent lengthy delays, both from many authorities and also at the level of ICO complaints.

The ICO’s Decision Notices would frequently comment on obstruction and incompetence from certain public bodies, as I reported when I was a BBC journalist, but without the regulator then making any serious systematic attempt to change the culture and operations of these authorities.

Under Denham the ICO had also ceased its previous policy of regularly and publicly revealing a list of authorities it was ‘monitoring’ due to their inadequate processing of FOI requests. Although this was in any case a weaker step than issuing formal enforcement notices and practice recommendations, in some cases it did have a positive effect.

Working at the BBC at the time I saw how, when the BBC was put into monitoring by the ICO, it greatly annoyed the information rights section, who brought in extra resources and made sure the BBC was released from it at the first opportunity.

On the other hand, other public authorities with long-lasting deficiencies, such as the Home Office and the Metropolitan Police, were kept in ICO monitoring repeatedly, without improving significantly and without further, more effective action being taken against them.

The ICO’s FOI team has also made important progress in the past year in rectifying its own defects in processing complaints, speeding things up and tackling its backlog. This has led to a surge of decision notices.

One result is that delay has been shifted further up the system, as the First-tier Tribunal has been struggling to cope with a concomitant increase in the number of decisions appealed. I understand that the proportion of decisions appealed did not change, although I don’t know if the balance between requestor appeals and authority appeals has altered.

Another consequence has been that decision notices now tend to be shorter than they used to be, especially those which support the stance of the public authority and thus require less interventionist argument from the ICO. Requestors may need to be reassured that the pressure on ICO staff for speedier decisions does not mean that finely balanced cases end up predominantly being decided on the side of the authority.

More generally, I gather there is some concern within the ICO about its decisions under sections 35 and 36 of FOI (which cover policy formulation and free and frank advice): that some staff have got into a pattern of dismissing requestors’ arguments without properly considering the specific circumstances which may favour disclosure.

As part of its internal operational changes, a few months ago the ICO introduced a procedure for prioritisation amongst appeals and expediting selected ones. I have seen the evidence of this myself. A complaint I made in April was prioritised and allocated to a case worker within six weeks and then a decision notice served within another six weeks (although sadly my case was rejected). All done within three months.

On the other hand a much older appeal that I submitted to the ICO in May 2022 has extraordinarily still not even been allocated to a case worker 15 months later, from what I have been told. This is partly because it relates to the Cabinet Office, which accounts for a large proportion of the ICO’s oldest casework and has been allowed a longer period of time to work through old cases.

It is interesting to note that the ICO does not proactively tell complainants that their case has been prioritised, even when, at the time of submitting their complaint, they specifically argued that it should be.

The ICO wants to avoid its staff getting sucked into disputes about which appeals merit prioritisation. If you want to know whether your case has been prioritised, you have to ask explicitly, and then you will be told.

The ICO has not yet officially released any statistics about the impact of its new prioritisation policy. However I understand that in the first three months about 60 cases were prioritised and allocated to a case officer to investigate within a month or so. This is a smaller number than might have been expected.

Around 80 percent of these were prioritised under the criterion of the importance of the public interest involved in the issue. And about 60 percent of decisions to prioritise reflected the fact that the requestor was well placed to disseminate further any information received, for example as a journalist or campaigner.

In most of the early decision notices for prioritised complaints the ICO has backed the authority and ruled against disclosure. So if you are a requestor, the fact that the ICO has decided to prioritise your appeal certainly does not mean that it has reached a preliminary decision that you are right.


BBC bosses, my part in their downfall – part 3

Some headlines are meant to shock you, but they don’t always have the desired effect.

On the publication day in September 2002 for the Blair government’s key dossier on ‘Iraq’s Weapons of Mass Destruction’ (as it was titled), I saw this on the front page of the London Evening Standard: ’45 MINUTES FROM ATTACK’.

And I remember thinking to myself, well, launching those weapons is actually rather slower than I expected.

At the time I clearly knew nothing about WMD systems. But the government’s ‘intelligence’ on the matter, which should have been rather better informed than I was, also turned out to be hopelessly misinformed.

The so-called ‘45 minute claim’ was at the heart of the David Kelly controversy and the Hutton Inquiry. It came to symbolise the extent to which the published dossier had, or had not, been ‘sexed up’.

And in one of his broadcasts, the 6.07 two-way, Andrew Gilligan had said he’d been told by a senior official that it was included although ‘the government probably knew [it] was wrong’.

Broadly speaking, the claim was along the lines that Iraq could deploy some chemical or biological weapons within 45 minutes of a decision to do so. But in the Hutton Inquiry and surrounding discussion it was always referred to by the shorthand term of ‘the 45 minute claim’. And in fact it is hard to state exactly what the claim was, not least because it was phrased in four different ways in the dossier. Which might have been a clue to how bogus it was.

The dossier would have been more accurate if it had stated: ‘Some guy in Iraq apparently says that maybe Iraqi military units can fire a chemical weapon a few hundred yards or so on a battlefield within 45 minutes of an order, we don’t know whether he knows what he’s talking about’. But then that probably would not have produced the headlines convenient for the government, such as the Sun proclaiming ‘BRITS 45mins FROM DOOM’.

+ + +

So in retrospect how accurate and justified was the BBC’s reporting of David Kelly’s unease about the WMD dossier?   

My view in summary:

1. The reporting by Andrew Gilligan and the BBC generally contained a substantial element of truth, about the process of compiling the WMD dossier leading to overstatement of what appeared to be the evidence.

2. But in parts the reporting was confused and inaccurate.

3. The key exaggerated statement which the BBC headlined repeatedly (and which was much more significant than anything Andrew half-mumbled at 6.07am, which went unnoticed for weeks) was that the BBC had been told that Downing Street included the 45 minute claim in the dossier against the wishes of the intelligence services.

4. The accurate alternative to this that we should have attributed to the source was: ‘The 45 minute claim was included in the dossier against the wishes of some experts from defence intelligence, who thought Downing St was responsible for doing so’. This would have been true, important and definitely worth reporting.

This is my personal opinion, arising from the months I spent working through and analysing every angle on the whole story to help inform the BBC management’s case. It was not a position adopted by the BBC.

It is also simply an overview, leaving aside all sorts of detailed arguments about the contents of the government dossier and the process for compiling it, the phraseology of individual reports, some less important errors made by some BBC programmes, etc. But this piece is easily long enough anyway without me going into more of that.

+ + +

So does that mean the Blair government ‘lied’?

There’s an occasional but continuing ritual, an exchange of allegation and denial over this, in which Blair, Campbell and others are accused of lying, and they respond in a rather pained manner to insist that it’s fine, of course, to condemn the war, but please don’t say they acted in bad faith, as they really did think Iraq had the WMD.

In my view this starkly dichotomous dispute completely misses the main point. I’m sure Tony Blair genuinely believed that the Iraqi dictator Saddam Hussein had access to WMD, and also that he was genuinely very worried about it. Most people thought Iraq was likely to have an active WMD programme. Saddam had deployed such weapons in the past, and continued to act much as if he still had use of them – not unlike a household which keeps prominently displaying a ‘beware of the dog’ sign, because it’s useful to intimidate the neighbours, even after the dog is no longer around.

In holding these views Blair was a victim of groupthink and a failure to question assumptions, but in terms of how he presented things to the public, his real fault was the unjustified way in which he exaggerated the strength of the intelligence.

He told the House of Commons that the intelligence picture was ‘extensive, detailed and authoritative’. In reality, it was very limited, patchy, superficial and poorly sourced. Given this reality, it’s not so surprising that it also turned out to be completely wrong.

+ + +

There were about 20 people on the BBC side who were given access to the Hutton Report 24 hours before publication. I was one of them.

We were divided into small groups in rooms in Broadcasting House, with numbered copies. Given it contained 740 A4 pages it might have seemed a bit daunting to go through it, but over 90% of it consisted simply of lengthy extracts copied-and-pasted from evidence and documents, and one of the BBC lawyers directed us immediately to the few important pages.

We’d prepared all sorts of lines to take to defend the BBC, but it rapidly became apparent that they weren’t really up to the scale of criticism the corporation received from Lord Hutton.

From the government point of view the Hutton Report was so good, it was bad. Hutton’s verdict was so one-sided that it lacked authority.

From the BBC point of view I vaguely hoped that the report was so bad, it might somehow be good. But that didn’t transpire. It was just irredeemably bad.

In my view it was also a shoddy, poorly argued, inadequate exercise. There were valid criticisms to be made of the BBC, as I’ve indicated here, but Hutton didn’t grasp the issues properly and made completely the wrong criticisms.

The Hutton Report was full of flaws. There’s no point now in going through them all. Many have been thoroughly detailed, for example, in Greg Dyke’s memoirs and in the book by Kevin Marsh, who edited the Today programme at the time of Andrew Gilligan’s broadcast.

To take just one, which struck me immediately as blatant, Hutton said that the allegation the dossier was ‘sexed up’ was ‘unfounded’, on the grounds that the audience would take this to mean that it was ’embellished with false or unreliable items’. He had no basis for choosing only this interpretation. He was acting like a judge who has to decide on a specific meaning for a word in legislation. In fact, ‘sexed up’ is a very ambiguous term which certainly can cover much weaker assertions. For that reason it’s probably not a phrase conducive to clarity of analysis, but it’s entirely defensible as accurate in this context.

Hutton also made ill-founded criticisms of the BBC’s editorial structures and processes, which he clearly didn’t understand. To be honest these can seem obscure at times, even to those of us working there. But Lord Hutton was the one person who’s ever been given the opportunity to call witnesses and ask them questions until he had worked it out, which he failed to do.

+ + +

That might seem like a BBC perspective on matters, but a few years after the Hutton Report came out, I sat down at dinner opposite a leading member of the government’s legal team at the inquiry. And his view wasn’t really all that different from mine.

He told me that the Hutton Report was an ‘intellectually weak’ piece of work, an assessment with which I entirely agree. He also said the following:

  • Hutton was a ‘second-rate judge’ and ‘not up to the same standard as other Law Lords’
  • His report ‘summarises both sides and says “I come down on this side” without reason or argument’
  • He was ‘the kind of judge who sees everything in black-and-white, either all for you or all against you’
  • He ‘ignored criticisms of the government, but his report would have carried more credibility if he had included them’
  • Alastair Campbell ‘went over the top and should have been criticised too’

As a verdict on the Hutton Report, I think all this is true.

+ + +

POSTSCRIPT

At that time one of the boxes you had to fill out on your annual appraisal form in the BBC was to state if there were important things that could have gone better with your work during the year. So on my form I jokingly put ‘Yes, avoiding the resignation of the Director General and the Chair of the Governors’.

I have to confess that we journalists did not always take to HR initiatives like the ever-changing appraisal system with the level of seriousness and commitment that the HR people felt they merited.

A year or so later I bumped into Greg Dyke in the crowd at a Brentford game at Griffin Park. He seemed in a cheerful mood (maybe Brentford had won, I don’t remember) and we chatted amiably. However I then told him what I’d written on my appraisal, and despite his surface good humour, I don’t think he found it at all funny. I regretted it. So if you’re reading this, Greg, I apologise.
