Now that all the data used to inform TEF ratings is publicly available, argues Professor D V M Bishop, TEF itself becomes redundant


I have been vocal in my criticism of the Teaching Excellence Framework (TEF): its rationale is disingenuous, it is based on measures that manage to be simultaneously invalid and unreliable, and it risks damaging the reputation of the UK higher education sector.

A major argument for the TEF is that students need better information about the courses they embark on. Indeed, those who have criticised the TEF have been accused of opposing transparency and accountability. In fact, I welcome students being informed about their courses, but I am opposed to reducing this to a rating on a three-point scale (gold, silver or bronze) derived by aggregating disparate and often unreliable sources, none of which measures teaching quality.

A great deal of information about courses has been available for some years on the Unistats website. This week, Unistats was in the news because data from the Longitudinal Education Outcomes (LEO) dataset has been added, enabling prospective students to look at average earnings at subject level for individual courses. This means that all the component information that was aggregated to obtain a TEF banding is now publicly available: a prospective student can readily compare courses in terms of National Student Survey responses, dropout rates, and the earnings and employment status of graduates.


Is there any longer a point to TEF?

This raises an obvious question: if students can consult the disaggregated data on the individual measures that feed into TEF, why do we need to go through the expensive exercise of combining that information into a vacuous index?

A major objection to TEF was the inclusion of earnings data: the crass assumption that the value of teaching can be estimated by how much students earn was rightly pilloried. In addition, future earnings are influenced by social trends outside anyone's control, and it is rash to give students the idea that a degree guarantees a well-paid job.

Nevertheless, information about earnings is not totally irrelevant. A student embarking on a career in accountancy, law or pharmacy may rightly be concerned to see that half of the previous graduates from a course did not go on to a professional job, or that many were unemployed. Information on dropout rates is also relevant here.

Caution is needed because, at subject level, averages are often based on small samples and so have a wide margin of error. Unistats deals with this by warning about sample sizes, and by omitting statistics or aggregating data where numbers are small. It also offers cautions about the interpretation of the LEO data. One reason I prefer Unistats to TEF is that you can see where the underlying data are limited or inadequate, instead of having this information masked in an aggregate.
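To make the point about small samples concrete, here is a minimal sketch of how the margin of error around an average shrinks with cohort size. The salary figures and cohort sizes are entirely made up for illustration, and the calculation assumes a simple normal approximation to the sampling distribution of the mean; it is not how Unistats or LEO compute their statistics.

```python
import math

def mean_ci(mean, sd, n, z=1.96):
    """Approximate 95% confidence interval for a sample mean,
    assuming a normal sampling distribution (a crude sketch)."""
    half_width = z * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# Hypothetical figures: average salary of 24,000 with a
# standard deviation of 6,000 among graduates of a course.
small_cohort = mean_ci(24000, 6000, 10)    # 10 graduates
large_cohort = mean_ci(24000, 6000, 1000)  # 1,000 graduates

# With 10 graduates the interval spans several thousand pounds
# either side of the average; with 1,000 it is a few hundred.
```

On these made-up numbers, the ten-graduate course has a confidence interval roughly £3,700 wide on each side of the average, so a difference of a few thousand pounds between two small courses tells a prospective student almost nothing.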


Let’s get rid of a misshapen monster

One of Jo Johnson’s goals in developing TEF was to crack down on dodgy course providers whose main motivation in enrolling students was to access the fees, and who did not provide a good educational experience: in the UK these currently seem rare, but they do exist. TEF was a cumbersome attempt to identify these outliers, which achieved little and has the potential to damage reputable institutions. Encouraging prospective students to consult the Unistats data might enable them to avoid those institutions that over-promise and don’t deliver.

A concern about Unistats is that Goodhart’s law (“when a measure becomes a target, it ceases to be a good measure”) will inevitably apply, with institutions seeking ingenious ways to improve their statistics. There are already concerns that grade inflation is a consequence of such practices. The casualties are the staff who are then pressured to hit targets that are not aligned with academic values.

I’ve previously suggested a fix for this unintended consequence, which is to include measures of staff satisfaction when evaluating higher education courses. This is highly relevant information for prospective students: those who are taught by demoralised, overworked academics are likely to have a poor experience in higher education.

So let’s provide transparent data on indicators that students are interested in and get rid of the TEF, a misshapen monster that should have been strangled at birth.