Data analysis

The VAT cliff edge: How the threshold impedes small businesses

As Rachel Reeves ponders her forthcoming budget and how to balance raising money against economic growth, one of her self-imposed constraints is her pledge not to raise the rate of VAT. However, the impact of a tax also depends greatly on the threshold at which it starts to apply, even though thresholds tend to get much less attention in public debate than rates (as is certainly the case for income tax).

So what about the annual turnover level at which businesses have to register for VAT?

Data I have recently obtained from HMRC under the freedom of information law shows the dramatic impact of the VAT threshold in restricting the growth of some of the UK’s small businesses.

In 2021/22 the UK had 21,752 businesses with annual turnover in the range £84,000-£85,000, just below the then threshold. But there were only 10,096 businesses just over the limit, in the range £85,000-£86,000.

In other words, the number of businesses clustered just under the VAT threshold was more than double the number just above it, as businesses curtail their activities to stay outside the VAT registration system.
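
For anyone who wants to reproduce this comparison from the HMRC spreadsheets linked below, here is a minimal sketch of the calculation. The filename and the column names (‘band_lower’, ‘business_count’) are hypothetical, since the exact layout of the HMRC files may differ:

```python
# Minimal sketch: compute the bunching ratio either side of the VAT
# threshold from counts of businesses in £1,000 turnover bands.
import pandas as pd

df = pd.read_excel("hmrc_turnover_bands.xlsx")  # hypothetical filename

threshold = 85_000  # the 2021/22 registration threshold
just_under = df.loc[df["band_lower"] == threshold - 1_000, "business_count"].iloc[0]
just_over = df.loc[df["band_lower"] == threshold, "business_count"].iloc[0]

print(f"Just under: {just_under:,}  Just over: {just_over:,}")
print(f"Bunching ratio: {just_under / just_over:.2f}")  # about 2.15 in 2021/22
```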

The graph above clearly shows the cliff edge in the data.

Many small businesses are desperate to keep their annual turnover under the VAT level, so that they avoid the bureaucracy and costs of registration and do not have to charge VAT to customers, which would make them less competitive. The consequence, however, is that they then do not grow into larger, more successful operations.

For some businesses the VAT threshold functions as a ceiling constraining their growth.

Research by Warwick University in 2022 concluded that earlier data of this kind reflected genuine curtailment of business activity rather than false reporting to HMRC.

This is the latest data available from HMRC, which says that more recent information is still being processed. The VAT threshold is now £90,000, after the figure was increased by the Conservative government before the general election.

The UK’s VAT threshold is high compared to those of other European countries, which tend to impose VAT registration on businesses at a much lower turnover. While the UK policy saves many small businesspeople from the compliance burden of VAT, the significantly lower thresholds elsewhere make it less likely that enterprises will be found bunched and held back just under the relevant level of turnover.

I also wanted to get a breakdown of the data by sector of the economy, to see which kinds of businesses were most affected. HMRC said it could provide this for 2019/20, as it had previously extracted the information involved, but that more recent breakdowns would probably exceed the FOI cost limit.

According to these 2019/20 figures, the most dramatic effect is in the construction sector.

This data shows 4,445 construction businesses with an annual turnover of £84,000-£85,000, but only 1,425 in the range £85,000-£86,000. So the number of construction businesses appearing to have kept themselves just below the limit is over three times the number who grew a little more and just exceeded it.

The chart shows the impact for construction and some other economic sectors with large numbers of small enterprises.

These FOI releases from HMRC constitute the latest and most thorough official evidence of what the tax expert Dan Neidle of Tax Policy Associates has called ‘the VAT growth brake’.

The full HMRC spreadsheets can be downloaded here:

1) Summary data for 2019/20, 2020/21, 2021/22

2) 2019/20 sectoral breakdown


FOI: Which complaints are upheld by the ICO?

Freedom of information requests can be rejected for a range of reasons, but some are much more likely to be overturned by the Information Commissioner’s Office than others.

The details of this are made clear by my analysis of a dataset recently released by the ICO covering nearly 22,000 decisions issued by the information rights regulator since FOI came into force.

For example, the ICO has upheld nearly half the complaints received from information requesters against FOI refusals linked to protecting commercial interests. But it has upheld only one in six objections to refusals based on international relations.

This table shows, for each of the legal grounds for dismissing FOI requests, the number of complaints about that reason which the ICO has ruled on and the percentage which it has upheld (ie backing the requester and overriding the public authority).

Subject matter (section of FOI Act) | Complaints ruled on | % upheld
The economy (29) | 27 | 56
Relations within UK (28) | 17 | 53
Commercial interests (43) | 1,010 | 47
Future publication or research (22/22A) | 213 | 44
Health and safety (38) | 119 | 42
Policy formation (35) | 622 | 38
Already accessible (21) | 332 | 36
Effective conduct of public affairs (36) | 967 | 35
Audits (33) | 38 | 34
Confidential information (41) | 605 | 34
Law enforcement (31) | 860 | 30
Vexatious or repeated (14) | 1,498 | 23
Investigations (30) | 318 | 21
Personal data (40) | 3,097 | 18
Monarchy and honours (37) | 181 | 18
Defence (26) | 41 | 17
National security (24) | 299 | 17
International relations (27) | 292 | 16
Legal privilege (42) | 507 | 16
Otherwise prohibited (44) | 406 | 14
Cost (12) | 1,491 | 12
Court records (32) | 108 | 8
Security bodies (23) | 304 | 7
Parliamentary privilege (34) | 12 | 0
Source: Martin Rosenbaum, based on ICO data

Or in chart form:

So during FOI’s two decades of operation, the ICO has been much happier to overrule public authorities on matters like commercial interests and policy formation than on topics like defence, security and international affairs.

My analysis uses three spreadsheets with details of ICO rulings which were recently disclosed via the What Do They Know website, in response to a request from Alison Benson. The spreadsheets list the ICO’s formal decision notices from the first one in 2005 until last month.

The ICO maintains that it provided this material voluntarily ‘on a discretionary basis’, arguing that the information was already available through its routine publication of decision notices.

However, the supply of these three files makes statistical analysis of ICO rulings much more practical than trying to process all the individually published decisions. The ICO’s release of this dataset is therefore a positive and welcome step for its own transparency.
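
As an illustration of the kind of tabulation involved, here is a rough sketch. It assumes the spreadsheets record one exemption and one outcome per row, with hypothetical columns ‘exemption_section’ and ‘outcome’; in practice a single decision notice can cite several exemptions, so some reshaping might be needed first:

```python
# Rough sketch: percentage of complaints upheld for each FOI exemption,
# built from the three ICO decision-notice spreadsheets.
import pandas as pd

files = ["ico_decisions_1.xlsx", "ico_decisions_2.xlsx",
         "ico_decisions_3.xlsx"]  # hypothetical filenames
df = pd.concat([pd.read_excel(f) for f in files], ignore_index=True)

summary = (
    df.groupby("exemption_section")
      .agg(complaints=("outcome", "size"),
           pct_upheld=("outcome", lambda s: round(100 * (s == "Upheld").mean())))
      .sort_values("pct_upheld", ascending=False)
)
print(summary)
```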

Environmental information

Note that my analysis excludes environmental information, which falls under a different law, the Environmental Information Regulations. The EIR exceptions do not exactly correspond to the FOI exemptions, so the data cannot be combined.

There are fewer EIR cases than FOI ones, but a similar pattern emerges. The ICO has more frequently overruled public authorities when they base an EIR refusal on commercial confidentiality or the internal nature of communications than when they rely on, say, protecting the course of justice.

Delay

It is also possible to analyse aspects of the dataset in more detailed ways. Here is one example.

This table shows the 15 public authorities against whom the ICO has most often upheld complaints about delay in processing FOI requests (under section 10 of the FOI Act), and how many times this has happened since 2005.

Public authority | Upheld complaints about FOI delay
Home Office | 303
Ministry of Justice | 173
NHS England | 162
Cabinet Office | 161
Dept of Health and Social Care | 84
Metropolitan Police | 82
Dept for Work and Pensions | 79
Foreign Office | 74
Sussex Police | 74
BBC | 60
Ministry of Defence | 58
Dept for Education | 54
Wirral Council | 43
Croydon Council | 39
Information Commissioner’s Office | 35
Source: Martin Rosenbaum, based on ICO data

On this measure the public authorities with the biggest record of delay since FOI was implemented are the Home Office, the Ministry of Justice, NHS England and the Cabinet Office.

Ironically the authority which comes 15th on this list of shame is the ICO itself! This is clearly a very bad record for an organisation which should be setting a good example of prompt compliance with the law, but at least as a regulator it has been willing to point out its own failings.

Notes: 1) My analysis amalgamates bodies which at some point since 2005 had a change of name or scope but remained essentially the same organisation (eg NHS England with NHS Commissioning Board; Department of Health and Social Care with Department of Health). 2) The ICO is thoroughly and annoyingly inconsistent when naming authorities (eg sometimes using ‘Metropolitan Police Service’ and sometimes ‘Commissioner of the Metropolitan Police Service’). I hope I have spotted all such instances and combined the figures accordingly, but it is possible I have missed some.
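
For what it’s worth, the amalgamation step can be handled with a simple alias table before counting. This is a sketch only: the filename, the columns (‘public_authority’, ‘sections’, ‘outcome’) and the alias list are all hypothetical and far from exhaustive:

```python
# Sketch: count upheld section 10 (delay) complaints per authority,
# after folding known name variants into one canonical name.
import pandas as pd

ALIASES = {
    "Metropolitan Police Service": "Metropolitan Police",
    "Commissioner of the Metropolitan Police Service": "Metropolitan Police",
    "NHS Commissioning Board": "NHS England",
    "Department of Health": "Dept of Health and Social Care",
}

df = pd.read_excel("ico_decisions.xlsx")  # hypothetical filename
df["authority"] = df["public_authority"].replace(ALIASES)

delay = df[df["sections"].str.contains(r"\b10\b", na=False)
           & (df["outcome"] == "Upheld")]
print(delay["authority"].value_counts().head(15))
```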


Election prediction models: how they fared

Which predictive model for the results of the election was best – or the least bad?

I say ‘least bad’ because, in what may seem like a familiar tradition of the British polling industry, they all overstated how well Labour would do.

However, there was also a huge gap between the least bad and the much worse. In a close election, discrepancies of this size would have pointed to very different political situations during the campaign, creating the impression that the forecasting models were contradictory chaos. This level of variation is somewhat disguised by the universal prediction of what could be called a ‘Labour landslide’, now confirmed as fact (even if it isn’t as big as they all said it was going to be).

Labour seats

Let’s look at the forecasts for the total number of Labour seats. This determines the size of Labour’s majority and is the most politically significant single measure of how the electorate voted.

Model | Labour seats
Actual result | 412
Britain Predicts | 418
More in Common | 430
YouGov | 431
Election Maps | 432
Economist* | 433
JL Partners | 442
Focal Data | 444
Financial Times | 447
Electoral Calculus | 453
Ipsos | 453
We Think | 465
Survation** | 470
Savanta | 516

I have listed the models which predicted votes for each constituency in Great Britain and were included in the excellent aggregation site produced by Peter Inglesby. (If that means any model is missing which should have been added, my apologies.)

Note that what I am comparing here are the statistical models which aimed to forecast the voting pattern in each seat, not normal opinion polls which only provide national figures for vote share. These competing models are all based on different methodologies, the full details of which are not made public.

The large number of such models was a new feature of this election, linked to the growing adoption of MRP polling along with developments in the techniques and capacity of data science.

On this basis the winner would be the Britain Predicts model devised by Ben Walker and the New Statesman. Well done to them.

This model is not based on a single poll itself, but takes published polling data and mixes it into its analysis. This is also true of some of the others around the middle of the table, such as the Economist and the Financial Times.

On the other hand, polling companies like YouGov and Survation base their constituency-level forecasts on their own MRP polls (multilevel regression and post-stratification), combining large samples and statistical modelling to produce forecasts for each seat.

The closest MRP here is the More in Common one, with YouGov narrowly next. However, the bottom of the table is also occupied by MRP polls rather than mixed models – We Think, Survation and Savanta. (It should be noted that the Savanta one was conducted in the middle of the campaign and so was more vulnerable to late swing.)

Constituency predictions

However a different winner emerges from a more detailed examination of the constituency level results. This is based on my analysis using the data aggregated on Peter Inglesby’s website.

Although Britain Predicts was closest for the overall picture, it got 80 individual seats wrong in terms of the winning party. These errors were often in opposite directions, so at the net level they largely cancelled each other out: it predicted Labour would win 33 seats that the party lost, while also predicting Labour would lose 26 seats it actually won.

In contrast, YouGov got the fewest seats with the wrong party winning: just 58. So well done to them. And I’m actually being a bit harsh to YouGov here, as this counts the 10 seats they predicted as a ‘tie’ as all wrong – on the basis that (a) the outcome wasn’t a tie (haha), and (b) companies shouldn’t get ranked as performing better via ambiguous forecasts which their competitors avoid. If you do not agree with that, which might be the more measured approach, you can score them at 53.
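
The underlying arithmetic is straightforward. Here is a sketch using a hypothetical DataFrame with one row per constituency and columns ‘actual’ and ‘predicted’ (holding the winning party, or ‘Tie’ where a model declared one); this implements the ‘harsh’ scoring that counts every tie as wrong, and a kinder variant would need a rule for crediting ties, which I leave aside:

```python
# Sketch: count constituencies where a model's predicted winner was
# wrong, treating any 'Tie' prediction as an error ('harsh' scoring).
import pandas as pd

def seat_errors(results: pd.DataFrame) -> int:
    # A 'Tie' prediction never equals an actual party name, so tied
    # forecasts are automatically counted as wrong here.
    return int((results["predicted"] != results["actual"]).sum())

# Hypothetical usage, one row per seat:
results = pd.DataFrame({
    "actual":    ["Lab", "Con", "LD"],
    "predicted": ["Lab", "Tie", "Con"],
})
print(seat_errors(results))  # -> 2
```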

The two models that did next best at the constituency level were Election Maps (62 wrong) and the Economist (76 wrong). The worst-scoring models were We Think and Savanta, which both got 134 seats wrong.

This table shows the number of constituencies where the model wrongly predicted the winning party.

Model | Errors at seat level
YouGov | 53
Election Maps | 62
Economist | 76
Britain Predicts | 80
Focal Data | 80
More in Common | 83
JL Partners | 91
Electoral Calculus | 93
Financial Times | 93
Ipsos | 93
Survation | 100
Savanta | 134
We Think | 134
Source: Analysis by Martin Rosenbaum, using data from Peter Inglesby’s aggregation site.

(I’m here adopting the slightly kinder option for YouGov in the table).

This constituency-level analysis also sheds light on the nature of the forecasting mistakes.

There were some common issues. Generally the models failed to predict the success of the independent candidates who appealed largely to Muslim voters and either won or significantly affected the result. On the one hand it is difficult for nationally structured models to pick up on anomalous constituencies. On the other it is possible that the models typically do not give enough weight to religion (as opposed to ethnicity).

On this point there’s increasing evidence of growing differences in voting patterns between Muslim and Hindu communities. It’s striking that 12 of the 13 models (all except YouGov) wrongly forecast that the Tories would lose Harrow East, a seat with a large Hindu population where the party bucked the trend and actually increased its majority.

The models also failed almost universally to predict quite how badly the SNP would do – ironically with the exception of Savanta, the least accurate model overall.

On the other hand there were also wide variations between the models in terms of where they made mistakes. In all there were 245 seats – 39% of the total – where at least one model forecast the wrong winning party.

The seats that most confused the modellers are as follows.

Seats where all the 13 modellers predicted the wrong winning party: Birmingham Perry Barr, Blackburn, Chingford and Woodford Green, Dewsbury and Batley, Fylde, Harwich and North Essex, Keighley and Ilkley, Leicester East, Leicester South, Staffordshire Moorlands, Stockton West, plus the final seat to declare: Inverness, Skye and West Ross-shire***.

Seats where 12 of the 13 modellers predicted the wrong winning party: Beverley and Holderness, Godalming and Ash, Harrow East, Isle of Wight East, Mid Bedfordshire, North East Hampshire, South Basildon and East Thurrock, The Wrekin.

Overall seats v individual constituency forecasts

So which is more important – to get closest to the overall national picture, or to get most individual seats right?

The statistical modelling processes involved are inherently probabilistic, and it’s assumed they will make some errors on individual seats that will cancel each other out. That’s the case for saying Britain Predicts is the winner.

But if you want confidence that the modelling process is working comparatively accurately, that would point towards getting the most individual seats right – and YouGov.

Note that this analysis is based just on the identity of the winning party in each seat. Comparing the actual against forecast vote shares in each constituency could give a different picture. I haven’t had the time to do that more detailed work yet.

Traditional polling v predictive models

The traditional (non-MRP) polls also substantially overstated the Labour vote share, as the MRP ones did, raising further awkward questions for the polling industry. However, there’s an interesting difference between the potential impact of the traditional polls compared to the predictive models which proliferated at this election.

Without these models, the normal general assumption for translating vote shares into seats would have been uniform national swing. (This would have been in line with the historical norm that turned out to be inapplicable to this election, where Labour and the LibDems benefitted greatly from differential swing patterns across the country.) And seat forecasts reliant on that old standard assumption would then have involved nothing like the massive Labour majorities suggested by the models.
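
For illustration, here is a toy sketch of the uniform national swing calculation that used to be the default: add the same national change in vote share to every seat’s previous result and declare the new leader the winner. The figures in the example are invented:

```python
# Toy sketch of uniform national swing (UNS): every seat's previous
# vote shares are shifted by the same national change.
def uns_winner(prev_shares: dict[str, float], swing: dict[str, float]) -> str:
    parties = set(prev_shares) | set(swing)
    projected = {p: prev_shares.get(p, 0.0) + swing.get(p, 0.0) for p in parties}
    return max(projected, key=projected.get)

# Invented example: a seat the Conservatives held 50-30, with an
# illustrative national swing of Con -20 points, Lab +10 points.
print(uns_winner({"Con": 50.0, "Lab": 30.0},
                 {"Con": -20.0, "Lab": 10.0}))  # -> Lab (40 v 30)
```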

Although the predictive modelling in 2024 universally overstated Labour’s position, it did locate us in broadly the correct political terrain – ‘Labour landslide’. We wouldn’t have been expecting that kind of outcome if we’d only had the traditional polling (even with the way it exaggerated the Labour share).

To that extent the result was some kind of vindication for predictive modelling and its seat-based approach in general, despite the substantial errors. The MRP polls and the models that reflected them succeeded in detecting some crucial differential swings in social/geographic/political segments of the population (while also exaggerating their implications).

However, it’s also possible that the models/polls could in a way have been self-negating predictions. By forecasting such a large Labour victory and huge disaster for the Tories, they could have depressed turnout amongst less committed Labour supporters who then decided not to bother going to the polling station, and/or they could have nudged people over into voting LibDem, Green or independent (or indeed Reform) who were until the end of the campaign intending to back Labour.

Notes

*Note on Economist prediction: Their website gives 427 as a median prediction for Labour seats, but their median predictions for all parties sum up to well short of the total number of GB seats. In my view that would not make a fair comparison. Instead I have used the figure in Peter Inglesby’s summary table, which I assume derives from adding up the individual constituency predictions.

**UPDATE 1: Note on Survation prediction: After initially publishing this piece I was informed that Survation released a very late update to their forecast which cut their prediction for Labour seats from 484 to 470. The initial version of my table used the 484 figure, which I have now replaced with 470. However, despite reducing the extent of their error, this does not affect their position in the table as second last.

Other notes: (1) I haven’t been able to personally check the accuracy of Peter Inglesby’s data, for reasons of time, but I have no reason to doubt it. I should add that I am very grateful to him for his work in bringing all the modelling forecasts together in one place. (2) This article doesn’t take account of the outcome in Inverness, Skye and West Ross-shire, which at the time of writing was yet to declare.

***UPDATE 2: The eventual LibDem victory in Inverness, Skye and West Ross-shire was not predicted by any model; all of them forecast the SNP would win. This means it has to be added to my initial list of seats which every model got wrong, which therefore now totals 12 constituencies.


Absent on Fridays

Pupils are over 20 per cent more likely to be absent from school on Fridays than on Wednesdays.

The average rate of absence last term in England’s state-funded schools was 7.5% on Fridays. This compares to 6.7% on Mondays, the next most common day for school absence, and the lower figures for the middle of the week: 6.3% for Tuesdays, 6.2% for Wednesdays and 6.4% for Thursdays.

I have derived these figures by analysing the detailed school attendance data collected and published by the Department for Education.
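
The calculation itself is simple. Here is a sketch, assuming the DfE file has one row per school per day with hypothetical columns ‘date’, ‘sessions_possible’ and ‘sessions_missed’:

```python
# Sketch: average absence rate by day of the week from daily
# attendance data.
import pandas as pd

df = pd.read_csv("dfe_daily_attendance.csv", parse_dates=["date"])  # hypothetical

rates = (
    df.assign(weekday=df["date"].dt.day_name())
      .groupby("weekday")
      .apply(lambda g: 100 * g["sessions_missed"].sum() / g["sessions_possible"].sum())
      .reindex(["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"])
)
print(rates.round(1))
```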

The issue of school attendance is moving up the political agenda, as levels of absence are now much higher than before the covid pandemic.

The government has today announced what it calls ‘a major national drive to improve school attendance’, with measures targeted at tackling persistent absence. The Labour party is also focusing on the issue this week.

This weekly pattern of absence being highest on Fridays, and second-highest on Mondays, with better attendance mid-week, is a widespread feature of the current school system.

From my analysis of the DfE’s data, it applies in both primary and secondary schools, and also in all regions of England.

It is seen in both authorised and unauthorised absences from school, including absence due to illness, which is the most common reason recorded for pupils not attending.

It was also evident throughout the autumn term, as can be seen in this chart (with a particular peak on the Friday before half-term).

The DfE’s data on school attendance can be downloaded here.

In a previous post I examined how school attendance can be affected by when in the year pupils are born.


Absence from school and month of birth

For school pupils, does when in the year they are born affect how often they are absent from school?

My analysis of government data suggests that secondary school pupils born in September to December have a somewhat higher absence rate than those born in May to August – which is actually the opposite of what I expected.

Absence from school is now significantly higher than before the covid-19 pandemic, and tackling this has been made a target of government educational policy.

Since 2022 the Department for Education (DfE) has been collecting centrally some remarkably detailed and up-to-date data on attendance records for individual pupils from many schools in England, and publishing regular summaries.

The data collated by the department makes it possible to quickly analyse a wide range of factors and potential connections with absences.

Since month of birth is definitely related to other aspects of school life, such as how well pupils do in exams and in sport – the so-called ‘relative age effect‘ – I decided to explore any link with school attendance. Through a freedom of information request I obtained pupil attendance data from the DfE for the school year 2022/23, broken down by type of school, school year and month of birth.
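
Here is a sketch of the kind of pivot involved, assuming the FOI spreadsheet has hypothetical columns ‘year_group’, ‘birth_month’, ‘sessions_possible’ and ‘sessions_missed’:

```python
# Sketch: absence rate by year group and month of birth, with columns
# ordered by age within the cohort (September-born pupils are the
# oldest in an English school year).
import pandas as pd

df = pd.read_excel("dfe_absence_by_birth_month.xlsx")  # hypothetical filename
df["absence_rate"] = 100 * df["sessions_missed"] / df["sessions_possible"]

months = ["September", "October", "November", "December", "January", "February",
          "March", "April", "May", "June", "July", "August"]
pivot = (df.pivot_table(index="year_group", columns="birth_month",
                        values="absence_rate")
           .reindex(columns=months))
print(pivot.round(1))
```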

This table shows the percentage of school sessions missed by pupils in selected year groups. It shows that for pupils in years 1 and 2 (aged 5/6 and 6/7), it was the summer-born pupils who had higher rates of absence. This was what I expected, given the well-documented school problems often faced by summer-born children.

But for pupils in years 8 to 11 (aged 12/13 to 15/16), it was those born in September to December who were more likely to be absent.

However, the differences within the year groups are not massive, so this pattern (while clear) shouldn’t be overstated. For the intervening ages the data showed very little variation within each year group, so I haven’t presented those figures here. I haven’t obtained data for the reception year.

All this data relates to pupils at about 85% of state-funded schools in England, those which take part in the DfE scheme for automatically submitting daily attendance information.

The following graph shows the same data presented in the form of a line chart.

Persistent absence is a particular problem. This is defined as a pupil missing over 10% of school sessions. Analysing the data on persistent absence discloses a similar pattern.

This is indicated in the table below (which involves data from primary and secondary schools, but not special schools).

Generally rates of absence increase as pupils get older and move into higher year groups. Perhaps this trend could help to explain the fact that in secondary schools it’s the older pupils within the year group who tend to be absent more often.

But this can’t be a complete explanation – for example, the frequency of persistent absence is higher for year 10 September births (32.4%) than for the older pupils born in August and in year 11 (30.7%), and similarly for various other data points.

So it looks like there may be some kind of relative age effect involved here, albeit probably quite a mild one.

Bear in mind that this is just one year’s data, the period in the wake of the pandemic could be atypical, and there is also the possibility of random variation.

As another potential factor, some illnesses have been associated with when people are born within the year. However, this would not explain the jumps in this data between August and September births.

The DfE data distinguishes authorised and unauthorised absences, but this does not help much in explaining the pattern identified here.

It’s important to note that there are other characteristics which clearly have a bigger impact on school attendance, including levels of disadvantage (poorer pupils are more likely to be absent) and ethnicity (Caribbean and White ethnic groups have higher absence rates than Indian, African and Chinese groups).

The data spreadsheet supplied to me under FOI by the Department for Education is here.

For background on the government’s impressive automated collection of real-time school attendance data, you can watch a recent talk by Caroline Kempner, the DfE’s head of data transformation, given at one of the regular Institute for Government ‘Data Bites’ events (from 37’25” in the video).

It was hearing this presentation which prompted me to do this analysis.


A&E: when are waits shortest?

Would you like to know what times of the week have the shortest or longest waits in your local A&E department?

I’ve obtained a spreadsheet from NHS Digital (via a freedom of information request) which reveals just that.

The spreadsheet gives data separately for each provider of urgent and emergency care in England for 2021/22. For patient arrivals in each hour of the week, it shows the average duration of attendance there until discharge or admission – ie, until leaving the hospital or being admitted as an inpatient.

The overall A&E pattern is very much that there are longer waits in the late evening and overnight, shorter waits in the morning, with the afternoons/early evenings in the middle.

Source: Analysis by Martin Rosenbaum from NHS Digital data

In this chart each row going across is a different provider of emergency/urgent care in England (I have excluded those with only partial data or which are not 24-hour services), and each column is an hour of the week, going from 0000-0059 on Monday to 2300-2359 on Sunday. The red cells show longer average waiting times, the green cells shorter waits, and the yellow ones intermediate times.
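
For those curious how such a chart can be produced, here is a sketch. It assumes a DataFrame ‘waits’ with one row per provider and 168 columns for the hours of the week; scaling each provider’s row separately is one plausible choice, made so that colours compare hours within a provider rather than providers against each other:

```python
# Sketch: heatmap of average waits, red for a provider's longest hours
# and green for its shortest.
import matplotlib.pyplot as plt
import numpy as np

def plot_wait_heatmap(waits):
    values = waits.to_numpy(dtype=float)
    # Scale each row to 0-1 so every provider uses the full colour range.
    lo = np.nanmin(values, axis=1, keepdims=True)
    hi = np.nanmax(values, axis=1, keepdims=True)
    scaled = (values - lo) / (hi - lo)

    fig, ax = plt.subplots(figsize=(14, 8))
    ax.imshow(scaled, aspect="auto", cmap="RdYlGn_r")  # reversed: high = red
    ax.set_xlabel("Hour of week (Mon 00:00 to Sun 23:00)")
    ax.set_ylabel("Provider")
    fig.savefig("ae_waits_heatmap.png", dpi=150)
```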

It makes clear that for almost all providers, patients who arrive just before midnight or in the hours afterwards experience the longest waits on average, while those who arrive in the morning have the shortest waits.

This pattern is the same on every day of the week, including weekends. The very longest waits of all tend to be overnight from Monday to Tuesday.

The exceptions to this are mainly urgent treatment centres rather than A&E departments – their longest waits often come in the late afternoon or early evening. They appear congregated towards the top of this chart due to the ordering of the NHS provider code system.

Some providers show much greater variation in waiting times across different points of the week than others do.

Overall national statistics about busy times of the day in A&E are published routinely, but as far as I am aware this dataset broken down by different local providers and hour of the week has not been released before.

In a period of increasing concern over waiting times for emergency and urgent care, this is important and valuable localised information.


Scotland’s alphabet effect

Last week’s local election results appear to confirm how a candidate’s chance of getting elected to Scotland’s councils is dramatically influenced by a factor which is nothing to do with their abilities – alphabetical order of surnames.

This arises from the voting system used for Scottish council elections, the Single Transferable Vote (STV), where voters number candidates in their order of preference.

Parties will stand more than one candidate in a multi-member ward if they think they have a chance of getting more than one elected.

But of course lots of voters, who may have strong preferences between the parties, don’t particularly care about preferring one candidate from within a party to another.

It’s well established that under STV many voters have a tendency to number candidates from the same party just in the order they find them on the ballot paper, which is a major advantage for those listed first. In Scotland that is alphabetical order by surname.

To illustrate the striking extent of this I have looked at what happened last week in two Scottish councils, Aberdeen and West Lothian (the first and last councils alphabetically, in a limited attempt to avoid alphabetical bias in my selection).

I examined all the cases in these two councils where a party stood two or more candidates in one ward.
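
This comparison is easy to automate where results are available in tabular form. Here is a sketch, with a hypothetical results file and hypothetical columns ‘ward’, ‘party’, ‘surname’ and ‘first_pref_votes’:

```python
# Sketch: average first-preference votes for the alphabetically first
# and second candidates where a party stood two or more in a ward.
import pandas as pd

df = pd.read_csv("council_results.csv")  # hypothetical filename

multi = df[df.groupby(["ward", "party"])["surname"].transform("size") >= 2].copy()
multi["alpha_rank"] = (multi.sort_values("surname")
                            .groupby(["ward", "party"])
                            .cumcount() + 1)

avg = (multi[multi["alpha_rank"] <= 2]
       .groupby("alpha_rank")["first_pref_votes"].mean())
print(avg.round(0))  # eg West Lothian: rank 1 ~1,669, rank 2 ~745
```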

In West Lothian, there were 14 examples. In 13 of these, the candidate who came first alphabetically from that party got more first preference votes than the candidate listed second alphabetically, sometimes by huge margins.

The candidates listed first alphabetically for a party averaged 1,669 first preference votes; the candidates from the same party listed second alphabetically averaged only 745 first preferences – less than half as many.

The result was that the candidates listed first alphabetically for a party had a 100% success rate at getting elected; the candidates from the same party listed second alphabetically only had a 64% success rate of election.

In Aberdeen, there were 16 examples. In 14 of these, the candidate who came first alphabetically from that party got more first preference votes than the candidate listed second alphabetically, again sometimes by huge margins.

The candidates listed first alphabetically for a party averaged 1,223 first preference votes; the candidates from the same party listed second alphabetically averaged only 554 first preferences – again, less than half as many.

The result here was that the candidates listed first alphabetically for a party had an 88% success rate at getting elected; the candidates from the same party listed second alphabetically only had a 56% success rate of election.

Obviously it would be ideal to do this analysis for all 32 local authorities in Scotland. But given the different locations and formats in which the results are published, that would be a very laborious exercise which is too time-consuming for me to do right now. If there were one single national database of all Scottish local election results in a convenient format for exporting data, it would be a lot more feasible! (I also haven’t examined the impact in the very different political circumstances of Northern Ireland.)

It seems clear that the current position in Scotland represents a form of institutionalised systemic discrimination. A council seat is often a step towards building a powerful political career on a bigger stage.

In the past the Scottish government has considered various means of ameliorating this situation but has not implemented any change. Potential options would include randomising the ballot paper order or listing candidates in reverse alphabetical order on half the ballot papers.

Parties could counteract the effect if they had loyal, disciplined voters who would order candidates as instructed, with different instructions issued to different subsets of voters. Roughly equalising the number of first preferences would help to get more than one of their candidates elected.

There has been some evidence of alphabetical voting affecting results in English and Welsh elections, but this is to a much lesser extent because of the different voting systems. Alphabetical voting is also an international phenomenon.

And alphabetical bias also exists in other contexts – here’s an interesting paper on its impact in an academic discipline where co-authors of papers were listed alphabetically.

By the way, when drafting this piece I noticed I had automatically defaulted to providing the Aberdeen data before that for West Lothian, so I went back and reversed that. But I did leave Aberdeen first in the chart.

The acceptance of alphabetical order as an apparently natural and unproblematic method may have a deeper and more insidious grip on our minds, and more important consequences, than we may consciously realise.


From Morgan to Frankie

The most popular gender-neutral first names given to babies in England and Wales in 2020 were Frankie, River and Harley.

Looking back at a longer period, the most common gender-neutral first names over the past 25 years were Morgan, Charlie and Taylor.

This is according to my analysis of the baby name datasets for England and Wales issued by the Office for National Statistics, who released their figures for 2020 a few days ago.

The ONS compiles separate datasets for the names of boys and girls. Their annual lists of most popular boys’ and girls’ names are always widely reported. I decided to examine something they don’t analyse – the frequency of gender-neutral or unisex names.

In 2020 there were just 10 first names given at birth to both over 100 girls and over 100 boys. They are listed in this table:

They are ordered according to how often they were used for whichever sex they were less popular for – in other words, by the smaller of the two counts. This measure is mine. As it requires a name to be frequent for both sexes, it seems to me to capture gender-neutrality or ‘unisexness’ better than any other criterion I came up with, although other approaches are possible.
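
Here is a sketch of that measure, assuming the ONS boys’ and girls’ tables have been saved to hypothetical CSV files with columns ‘name’ and ‘count’:

```python
# Sketch: rank names by 'unisexness', defined as the count for
# whichever sex used the name less.
import pandas as pd

boys = pd.read_csv("ons_boys_2020.csv")    # hypothetical filenames
girls = pd.read_csv("ons_girls_2020.csv")

merged = boys.merge(girls, on="name", suffixes=("_boys", "_girls"))
merged["unisex_score"] = merged[["count_boys", "count_girls"]].min(axis=1)

top = (merged[(merged["count_boys"] > 100) & (merged["count_girls"] > 100)]
       .sort_values("unisex_score", ascending=False))
print(top.head(10))
```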

Here is a comparable table compiled on the same basis for the past 25 years in total (the published ONS data goes back to 1996), featuring the 12 first names given at birth both to over 2,000 girls and over 2,000 boys:

So Morgan is the leading unisex first name over this time range, the only name to have been given to over 9,000 girls and also over 9,000 boys in the 25-year period from 1996 to 2020. However it has declined considerably in popularity in recent years, as have some other names in this table.

It’s often said that there has been a long-term phenomenon of unisex names becoming ‘feminised’. Some traditional boys’ names start to become popular for girls too, and then parents apparently no longer want to give them to boys (classic examples include Evelyn and Shirley).

However there seems to be little evidence of such a trend in the ONS data over the past 25 years.

As one way to get an overall impression of this, each line on this chart below represents one of the 50 most popular gender-neutral names, and each column is a year, going chronologically from 1996 on the left to 2020 on the right. For each name, cells are coloured more in red for years when they were more popular for girls and more in blue when more popular for boys. (The colour-coding may be stereotypical, but it does make the chart more intuitive to grasp easily).
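
Here is a sketch of the matrix behind this kind of chart, assuming a long-format DataFrame loaded from a hypothetical file with columns ‘name’, ‘year’, ‘count_girls’ and ‘count_boys’:

```python
# Sketch: share of each name's uses that were for girls, by year,
# drawn with a diverging colourmap (red = girls, blue = boys).
import matplotlib.pyplot as plt
import pandas as pd

counts = pd.read_csv("unisex_name_counts.csv")  # hypothetical filename
counts["girl_share"] = (counts["count_girls"]
                        / (counts["count_girls"] + counts["count_boys"]))

matrix = counts.pivot(index="name", columns="year", values="girl_share")

fig, ax = plt.subplots(figsize=(10, 12))
ax.imshow(matrix.to_numpy(), aspect="auto", cmap="RdBu_r", vmin=0, vmax=1)
ax.set_xlabel("Year (1996 to 2020)")
ax.set_ylabel("Name")
fig.savefig("unisex_share_heatmap.png", dpi=150)
```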

As time advances, the names move more from the redder/pinker areas to bluer ones than the other way (although by no means uniformly).

That suggests these gender neutral names are not becoming feminised; if anything they appeared to get a bit more popular for boys (ie bluer) and less popular for girls.

However looking at the data in more detail it seems that what is happening is mainly a trend amongst girls: in particular it’s becoming less common to give girls names like Charlie and Jamie, which are largely boys’ names but which 15 to 25 years ago were also used for a fair number of girls.

What this does mean is that unisex names now are more likely to be broadly similar in popularity for both girls and boys, rather than include various predominantly boys’ names which are also given to some girls.

Finally, it’s important to note that generally these unisex or gender-neutral names aren’t very popular at all. So from my list of top 10 unisex names in 2020, Frankie, the highest for boys, is only 61st in popularity for boys’ names overall that year; and Eden, the highest for girls, only just squeezes into the top 100 girls’ names at 98th.

Parents do seem to prefer to give their children names which are clearly recognisable as belonging to either a girl or a boy.

Note: The ONS data (and therefore this analysis) is based on the specific spellings of names on birth certificates and does not take account of similar names. In other words, Charlie and Charley, for example, are treated as entirely different names.

