Spotlight on multi-academy trusts: what is happening to performance?

Karen Wespieser

Thursday 25 January 2018

Over the course of the past week, we have been sharing what the research tells us so far about Multi-Academy Trusts (MATs). We started by looking at pupil outcomes, and then moved on to teachers and schools. Today, though, is a big day for moving forward the evidence base on MATs’ performance. At 0930 this morning, the Department for Education (DfE) released ‘Multi-academy trust performance measures: 2016 to 2017’, the department’s own statistics on the performance of state-funded schools in multi-academy trusts in England. So what does this add to our story?

It is only the third time that these statistics have been published, and last year (reporting 2016 data) the sample was smaller: DfE included within the scope of their analysis only MATs with three or more academies that had been with the trust for at least three full academic years. Their rationale was that it can take time for a MAT to fully influence the outcomes of its schools, particularly those starting from a relatively low base in terms of educational performance. Whilst this was probably a fair and robust way to do things, it meant that only 95 MATs with Key Stage 2 measures and 47 MATs with Key Stage 4 measures were included in DfE’s 2016 results.

A headline figure from this year’s data is that the number of MATs included in DfE’s analysis has increased considerably. There are now 153 MATs with Key Stage 2 measures and 62 MATs with Key Stage 4 measures in the data. However, whilst this is a bigger number of trusts than last year, we need to remember that it is only a small proportion of all MATs – there are over 3,000 registered in England, many comprising only two schools, which are outside the scope of DfE’s report.

The growth in the number of MATs included in the DfE figures has a perverse effect on reporting. The change in the mix of MATs in scope makes a comparison over time tricky, as we cannot be sure whether any change in performance is due to genuine improvement or decline, or simply to the different mix of MATs in scope between the two years. Indeed, DfE repeatedly caveat throughout their report that ‘a conclusion about MAT performance over time cannot be drawn’. They do, however, optimistically note that ‘changes over time may become more comparable as MATs stabilise in size and mix in future’.

We had also been anticipating that DfE would produce and publish new MAT improvement measures in their statistical first release (SFR) this year, having said in last year’s SFR that they would do so once they had two years of comparable data, which they now have. However, DfE are now saying that two years of comparable data are not sufficient to create a valid and robust improvement-over-time measure, so we will have to wait a little longer before we can start to judge whether MATs are improving from one year to the next.

Therefore, we are still missing a robust way to look at the impact of MATs on pupil outcomes over time. However, NFER will be digging into the data DfE have published today to look at what more we can obtain about MAT performance. Look out for our final blog in the series reporting this analysis.