Changes to the 2021 Research Excellence Framework will increase the stress on academics and strengthen the hand of the managerial elite, argues Josh Robinson, lecturer in English literature at Cardiff University 

Two of the changes implemented for the 2021 Research Excellence Framework (REF) in the wake of the recommendations of the Stern Review have reversed a three-decade trend in the public evaluation of research in UK universities. The reduction in ‘outputs’ from four to an average of 2.5 per submitted researcher, and their disaggregation from individual researchers, offer some initial respite from the steady intensification of research assessment since the 1986 Research Selectivity Exercise. A closer look, however, reveals some troubling implications of these changes for the obligations and incentives they place on the institutional managers in charge of the submission (most often at departmental level), and consequently on individual researchers.

Stern identified the dangers of ‘the practice of making a highly selective submission to the REF that does not represent the overall research activity in that area in the institution’. Previously, the consultation that led to the first REF noted that ‘it is not necessary (even if it were feasible in practice) to consider the work of staff who have not engaged in significant research activity of high quality’. The Stern review, in contrast, declared that for the 2021 exercise ‘[i]t is important that all academic staff who have any significant responsibility to undertake research are returned to the REF’.

Stakes have changed for managers

One effect of including all staff is to remove a perversely misaligned incentive: previously, the surest way to improve an institution’s position in the headline table of grade point average (GPA) for ‘outputs’ was to exclude some of its active researchers from the submission, even at the cost of potential research income. But the shift to the inclusion of all active researchers has also changed the stakes for those managers who wish (or who have been told) to move their departments up the rankings. Previously they had the option of setting a minimum threshold predicted score for publications and submitting only those researchers ranked above it. This practice of exclusion was frequently an unpleasant process, particularly for those deemed to fall below the imposed threshold, and often had career-damaging consequences.

With the mandatory submission of all individuals, managers who are keen to move up the rankings have to pay more attention to the research of each individual with a contractual responsibility for it. Whereas this might, in a hypothetical world, result in increased resources to carry out research, what it in fact involves is the intensification of expectations and of the extent to which they are enforced, usually without improved conditions for actually conducting this research.

Colleagues are pitted against each other

The response to the mandatory inclusion of all researchers has been a set of mechanisms whereby colleagues’ work is continually evaluated, outwith the regular processes of peer review. ‘Dry runs’ and ‘rolling REFs’ proliferate. We are witnessing increasingly pervasive and intrusive mechanisms whereby colleagues are required to submit work for evaluation on a regular basis, and strongly encouraged to act on colleagues’ instructions for improvement before submitting it. For colleagues deemed not to have met the required standard, this encouragement becomes an instruction. Moreover, colleagues are set against one another to a greater degree than before. Whoever is responsible for the submission to the 2021 REF must first identify each individual’s strongest publication, and then assemble the strongest possible balance of the required submission from the remaining outputs across the department. Such a calculation necessitates deciding not only which publications to submit, but also whose – and therefore an evaluative approach that ranks colleagues’ work against one another’s.

The REF does not assess the quality of research

This competition between colleagues is implemented not for the sake of the advancement of knowledge, but in pursuit of higher REF rankings – and while these are in some cases aligned, where they conflict it is the latter that takes precedence. As Derek Sayer has shown, the REF does not and cannot assess the quality of research, and the judgements of its panellists fall well short of any internationally accepted standard of peer review: the role of the panellists involves making hasty assessments of work that is frequently outside their expertise. Worse still, the continuous internal evaluation of colleagues’ ‘outputs’ is carried out not by experts in the field (as in any peer-review process worthy of the name), nor even by the officially and publicly anointed members of the ‘expert panel’, but by a set of people appointed by institutional managers, whose brief is to second-guess the (themselves overly hasty and frequently non-expert) judgements of the panellists.

These appointed assessors are frequently anonymous – sometimes department members, sometimes ‘critical friends’. And because the panel’s evaluations of individual publications in the REF itself are never made public (all that is released is a breakdown of how many of each submission’s outputs were given which grade), it is the frequently unaccountable, often unchallengeable judgement of the internally appointed assessor that holds sway – a worrying concentration of power.

Bias-prone second-guessing becomes a proxy for assessing merit

Across the UK’s universities, colleagues and UCU branches are hearing of the consequences of hugely critical evaluations made by these internal readers, often with little relevant qualification in the specialisms concerned. Then there is the question of the biases of whoever is appointed as these readers. These biases frequently align with those of whoever appoints the assessors – along the lines of who has a reputation for being a high-flier (or not), whose research or specialism is marginalized within the department, or even who is bullied and disliked. These judgements – not even those of the REF, but of a managerial second-guessing of its evaluations – feed a comprehensive system of internal nudges and incentives, and occasionally threats. These bias-prone second-guessings of the inscrutable judgements of an expert panel of the great and the good become a proxy or (worse) a substitute for any assessment of the merits of somebody’s research. And this assessment has ever greater consequences for colleagues’ careers, as internal predictions of a piece of work’s REF ranking are solidified into promotion criteria and increasingly codified performance expectations, backed up by intrusive performance management, often euphemistically termed ‘support’.

Sayer describes how the REF and its predecessors ‘have been perhaps the key means of maintaining’ the domination of the HE sector by ‘the country’s traditional academic elites’. The systems of internal review emerging in institutional responses to the changes have further strengthened the hand of a managerial caste within the relatively narrow set of institutions that brand and perpetuate themselves as ‘elite’ – and the reforms made in the light of the Stern review look set to entrench both that self-perpetuating ‘elite’ and the managerial caste within it.

You can follow Josh Robinson on Twitter at @jshrbnsn