Problems with Peer Review for the REF

Opinion Piece by Derek Sayer* 

At the behest of universities minister David Willetts, HEFCE established an Independent Review of the Role of Metrics in Research Assessment in April 2014, chaired by James Wilsdon. This followed consultations in 2008-9 that played a decisive role in persuading the government to back down on previous plans to replace the RAE with a metrics-based system of research assessment. Wilsdon’s call for evidence, which was open from 1 May to 30 June 2014, received 153 responses ‘reflecting a high level of interest and engagement from across the sector’ (Letter to Rt. Hon. Greg Clark MP). Sixty-seven of these were from HEIs, 27 from learned societies and three from mission groups. As in 2008-9, the British academic establishment (including the Russell Group, RCUK, the Royal Society, the British Academy, and the Wellcome Trust) made its voice heard. Predictably, ’57 per cent of the responses expressed overall scepticism about the further introduction of metrics into research assessment,’ while ‘a common theme that emerged was that peer review should be retained as the primary mechanism for evaluating research quality. Both sceptical and supportive responses argued that metrics must not be seen as a substitute for peer review … which should continue to be the “gold standard” for research assessment’ (Wilsdon review, Summary of responses submitted to the call for evidence).

The stock arguments against the use of metrics in research assessment were widely reiterated: journal impact factors cannot be a proxy for quality because ‘high-quality’ journals may still publish poor-quality articles; using citations as a metric ignores negative citation and self-citation; in some humanities and social science disciplines it is more common to produce books than articles, which will significantly reduce citation counts; and so on. Much of this criticism, I would argue, is a red herring. Most of these points could easily be addressed by anybody who seriously wished to consider how bibliometrics might sensibly inform a research assessment exercise rather than kill any such suggestion at birth (don’t use JIFs, exclude self-citations, use tools like Publish or Perish that index monographs as well as articles, and control for disciplinary variations). What is remarkable, however, is that while these faults are often presented as sufficient reason to reject the use of metrics in research assessment out of hand, the virtues of ‘peer review’ are simply assumed by most contributors to this discussion rather than scrutinized or evidenced. This matters because whatever the merits of peer review in the abstract—and there is room for debate on what is by its very nature a subjective process—the evaluation procedures used in REF 2014 (and previous RAEs) not only fail to meet HEFCE’s own claims to provide ‘expert review of the outputs’ but fall far short of internationally accepted norms of peer review.


Staff satisfaction is as important as student satisfaction

Opinion piece by Dorothy Bishop, 13 November 2014

Universities have become obsessed with competition: it is no longer enough to do well; you have to demonstrate you are better than the rest. And to do that, you need some kind of metric. Organisations have grown up to meet this need, and to produce league tables that compare institutions on a range of characteristics, including research excellence, reputation and teaching.

The National Student Survey has become established as a major component of this process. It has run annually across all publicly funded Higher Education Institutions (HEIs) in the UK. It features prominently in student guides to the best universities, such as this one by the Guardian. There is no doubt that the survey has made universities more responsive to student views, and it is to be welcomed that reported student satisfaction levels have increased since the survey was introduced. Nevertheless, some, like Arti Agrawal, have expressed concerns about universities introducing quick fixes that may produce higher ratings in the short term, but lower academic quality overall: ‘With increased tuition fees, students are seen as customers who must be kept happy, and the NSS is now a customer satisfaction survey’. We even have evidence that within some universities, student satisfaction is used as an index of the quality of the teaching staff.

It is perhaps not surprising, then, that at the same time as we are told that students are getting happier and happier, academic staff seem to be growing ever more miserable. Now this could, of course, just be down to the fact that everyone likes a good moan1. But the impression one gets from reading the Times Higher Education and looking at stories anonymously contributed to CDBU’s Record the Rot archive is that there is more to it than that. The very same pressures that lead managers to treat students as consumers have led them to treat academic staff as dispensable ‘human resources’. The view of universities as institutions in constant competition with one another and the rest of the world has trickled down to the departmental level, destroying any sense of collegiality. In the long run, if teaching is done by a body of demoralised and ever-changing academics, this can only be bad for staff and students alike.

But this is only anecdote, and it would be good to have some data. The Times Higher Education started a Best Workplace Survey last year, which has the potential to provide just that. However, the sample was relatively small and self-selected. Findings such as that 39 per cent of academics felt their health was negatively affected by their work, and that one third felt their job was not secure, are hard to interpret given the vagaries of sampling. Is this typical, or was it the most disaffected who replied? Concerns about the low response rate and potential for bias meant that the THE decided not to report results by institution. My guess is that if we had proper survey data, and if staff satisfaction were incorporated into ‘best university’ rankings, then rank orderings might change quite dramatically. Furthermore, institutions that sacked staff to improve rankings might find their strategy backfiring.

The THE’s workplace survey for 2015 is now live. I would encourage everyone working in higher education to take part, whether or not you have something you want to moan about. We need to build an adequate database on this topic so that we have a solid basis for identifying those institutions that are genuinely at the top of the league in terms of their treatment of staff, versus those that achieve high status on other indicators while presiding over an anxious and demoralised staff.

1 Especially the English. I can thoroughly recommend this book for an amusing and informative account: Fox, K. (2005). Watching the English: The Hidden Rules of English Behaviour. London: Hodder & Stoughton.