The Teaching Excellence and Student Outcomes Framework represents a lamentable failure to engage in critical thinking, argues Professor DV Bishop


If you criticise the Teaching Excellence and Student Outcomes Framework (TEF), you run the risk of being dismissed as an idle, port-quaffing don who dislikes teaching, is out of touch with students, and is resistant to any transparency or scrutiny. It’s important, therefore, to emphasise that CDBU has ‘teaching’ or ‘students’ in four of its nine aims. Without students, our universities would be nothing, and fostering the intellectual skills of the next generation is one of the most satisfying parts of an academic job. The TEF, however, is an ill-conceived, simplistic solution to a wrongly posed problem, which – as Norman Gowar noted in his blogpost – risks damaging the higher education sector.

The market-driven ethos of TEF, bad though that is, is not the only issue. Even when evaluated against its own objectives, TEF is a miserable apology for a process, which shows no evidence of the critical thinking that we are encouraged to engender in our students.

I’ve written numerous commentaries on different aspects of TEF over the years, with my views being summarised in this CDBU talk last November. As I start to draft the CDBU’s response to the independent review of the TEF, I’ve found it helpful to summarise the specific ways in which it fails to meet its goals.


Is TEF needed?

When Jo Johnson first introduced the idea of TEF, he pointed to various sources of evidence to support the view that teaching in our universities was in poor shape and needed shaking up:

a) Student dissatisfaction, as evidenced by responses to the National Student Survey (NSS)

b) Student perception of ‘lack of value for money’ of their degree

c) Employer concerns that students had not been suitably trained for the workforce.

Back in 2015, I looked at the evidence he cited for all three claims, and found there was egregious misrepresentation of published data. As far as I am aware, no better evidence has been produced to support these claims since that time.

Another point was that a focus on research had devalued teaching. This was presented by Johnson without any supporting evidence. Nevertheless, most would agree that greater prestige attaches to research than to teaching, and providing strong financial incentives via the Research Excellence Framework (REF) increases the divide between teaching and research. However, it shows a distinct lack of imagination to conclude that the only way to overcome the adverse incentives introduced by the REF is to balance it with another evaluation system that will generate competing incentives linked to teaching. There are, as I propose below, other solutions.


Does the TEF provide a valid measure of teaching quality?

Even the chair of the TEF panel agreed that the answer to this question is No – Chris Husbands noted that student satisfaction is a poor proxy for teaching quality. Other metrics used in the TEF focus on student outcomes. The modification of the name of the exercise to Teaching Excellence and Student Outcomes is a capitulation on that point, though a more accurate rebrand would be Student Outcomes and Dissatisfaction Framework (SODF).

It is clear from the DfE’s own evaluation of TEF that students don’t understand what TEF is. This is not surprising given the misleading acronym. Two-thirds of those polled assumed that the TEF ratings were based on direct assessment of teaching (p. 85), and 96% endorsed the statement that ‘TEF awards are based on the quality of teaching’ (p. 87). Remarkably, the DfE report treated this as a correct response.


Are students helped by TEF to select a course?

To date, TEF rankings have been made at institutional level, so are not helpful for selecting a course. It’s recognised that what students want is course-level information, and moves are afoot to introduce subject-level TEF.  However, the methodology of TEF, bad as it is, gets even worse when the units of assessment involve small numbers. Which brings us to….


Are sound statistical methods used in TEF?

The answer is No. This is a topic I have blogged about previously, and with the release of new data, I’ve been re-evaluating the situation. The issues are fairly technical and difficult for non-experts to understand, which may be why the architects of TEF have ignored advice from the Royal Statistical Society.

In the RSS’s own words:

“The Royal Statistical Society (RSS) was alarmed by the serious and numerous flaws in the last Teaching Excellence Framework (TEF) consultation process, conducted in 2016. Our concerns appeared not to be adequately addressed by the Department for Education (DfE). Indeed, the DfE’s latest TEF consultation exercise, which will shortly close, suggests that few statistical lessons have been learned from 2016’s experience. As we argue, below, there is a real risk that the latest consultation’s statistically inadequate approach will lead to distorted results, misleading rankings and a system which lacks validity and is unnecessarily vulnerable to being ‘gamed’.”

This topic is large enough to merit a separate blogpost (coming soon!), but the bottom line is that the data from NSS are distributed in a way that makes it impossible to make reliable distinctions between the majority of institutions. This problem is exacerbated when the statistics are based on small numbers. Consequently, the idea that the TEF methodology will work at subject level is deeply flawed. There are also concerns about the transparency and reproducibility of the analyses behind TEF.
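To see why small numbers are fatal to fine-grained rankings, consider the sampling error on a satisfaction percentage. The sketch below is purely illustrative – it uses a simple normal approximation to the binomial, not the TEF’s actual benchmarking methodology, and the cohort sizes are my own hypothetical examples – but it shows how the margin of error balloons as the unit of assessment shrinks:

```python
import math

def ci_halfwidth(p, n):
    """Approximate 95% confidence half-width for a satisfaction
    proportion p estimated from n survey respondents (normal
    approximation to the binomial)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Suppose the 'true' satisfaction rate is 85% (a typical NSS figure).
p = 0.85
# Illustrative cohort sizes: whole institution, large subject, small subject.
for n in (1000, 100, 25):
    hw = 100 * ci_halfwidth(p, n)
    print(f"n = {n:4d}: 85% plus or minus {hw:.1f} percentage points")
```

With 1,000 respondents the margin is roughly ±2 points; with 25 it is roughly ±14 points, a band wide enough to swallow most of the sector. Any subject-level ranking built on differences smaller than that is reporting noise.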


Are there alternative approaches to achieving TEF’s aims?

The answer is a clear Yes.

First, it’s absolutely right that students need good information before embarking on a course, and they should be encouraged by their schools to seek this out. Students may have different priorities, which is why condensing a wealth of information about NSS responses, drop-outs and employment outcomes into a 3-point scale is unhelpful. Instead, they should be looking at the specific indicators for the course they are considering. Unistats provides that information, in a way that encourages students to compare courses.

Second, the undervaluing of teaching relative to research in our universities is reflected in the rise of the ‘academic precariat’, which includes a swathe of teaching staff on insecure contracts. Students are being taught by sessional staff who come and go and may not even have an office in the institution. There has always been a place for guest lecturers, or for lectures delivered by early-career staff learning on the job, but it seems that nowadays there are students who are never taught by more experienced staff with a long-term commitment to the institution and its students. Rather than engaging in tortuous and logically dubious manipulations of results from proxy metrics, we should be providing information on who is doing the teaching, and what proportion of teaching is done by staff on long-term contracts. If this information were added to Unistats, then TEF would become obsolete, and universities would be incentivised to create more security for teaching staff.