Opinion Piece by Derek Sayer*
At the behest of universities minister David Willetts, HEFCE established an Independent Review of the Role of Metrics in Research Assessment in April 2014, chaired by James Wilsdon. This followed consultations in 2008-9 that played a decisive role in persuading the government to back down on previous plans to replace the RAE with a metrics-based system of research assessment. Wilsdon’s call for evidence, which was open from 1 May to 30 June 2014, received 153 responses ‘reflecting a high level of interest and engagement from across the sector’ (Letter to Rt. Hon. Greg Clark MP). Sixty-seven of these were from HEIs, 27 from learned societies and three from mission groups. As in 2008-9, the British academic establishment (including the Russell Group, RCUK, the Royal Society, the British Academy, and the Wellcome Trust) made its voice heard. Predictably, ’57 per cent of the responses expressed overall scepticism about the further introduction of metrics into research assessment,’ while ‘a common theme that emerged was that peer review should be retained as the primary mechanism for evaluating research quality. Both sceptical and supportive responses argued that metrics must not be seen as a substitute for peer review … which should continue to be the “gold standard” for research assessment’ (Wilsdon review, Summary of responses submitted to the call for evidence).
The stock arguments against the use of metrics in research assessment were widely reiterated: journal impact factors cannot be a proxy for quality because ‘high-quality’ journals may still publish poor-quality articles; using citations as a metric ignores negative citation and self-citation; in some humanities and social science disciplines it is more common to produce books than articles, which significantly depresses citation counts in those fields, and so on. Much of this criticism, I would argue, is a red herring. Most of these points could easily be addressed by anybody who seriously wished to consider how bibliometrics might sensibly inform a research assessment exercise rather than kill any such suggestion at birth (don’t use JIFs, exclude self-citations, use tools like Publish or Perish that index monographs as well as articles, and control for disciplinary variations). What is remarkable, however, is that while these faults are often presented as sufficient reason to reject the use of metrics in research assessment out of hand, the virtues of ‘peer review’ are simply assumed by most contributors to this discussion rather than scrutinized or evidenced. This matters because whatever the merits of peer review in the abstract—and there is room for debate on what is by its very nature a subjective process—the evaluation procedures used in REF 2014 (and previous RAEs) not only fail to meet HEFCE’s own claims to provide ‘expert review of the outputs’ but fall far short of internationally accepted norms of peer review.