By the CDBU Steering Committee
If you work in the sciences, you will be all too aware of the journal impact factor (JIF). The requirement for ‘publications in high impact journals’ has become a staple of job advertisements, and the achievement of this goal is emblazoned across research group websites as evidence of gloriousness.
The strange thing is that the validity of the JIF has been questioned for many years. The JIF is a bibliometric measure that was designed to help librarians decide which journals were most likely to be worth stocking. As Stephen Curry noted in 2013, even in that capacity it has been found wanting, and it certainly was never intended to be used to rate the quality of individual research papers. Indeed, there are arguments that ‘high impact’ journals are more likely than other journals to publish papers reporting dramatic findings that are unlikely to replicate, and to use editors and reviewers who lack expertise in the subject – the LSE Impact of Social Sciences blog has gathered a number of useful links on this topic.
So why are universities still taking JIF so seriously? It can’t be blamed on the REF. The Higher Education Funding Council for England (HEFCE) explicitly stated that for REF2014: “No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs.”
Perhaps universities are under the impression that publications in high impact journals are needed to be competitive in getting grants? Yet the Wellcome Trust is very clear that “it is the intrinsic merit of the work, and not the title of the journal or the publisher with which an author’s work is published, that should be considered in making funding decisions.”
Nevertheless, it’s hard to shake the widespread belief that publication in a high impact journal is some measure of intrinsic quality of research. As Sir Paul Nurse, President of the Royal Society, commented last week at a meeting on the Future of Scholarly Scientific Communication:
“Measurement of research quality is very important to science and it’s important for career advancement – but the emphasis on impact factor as often the only metric has its issues …. I get so cross when I’m in a reviewing committee, and I read letters or hear people saying, ‘X has published in Nature, Science…’ or whatever – as if this, by itself, means something. The laziness of senior colleagues in looking at this is completely extraordinary.”
One way to change matters would be for UK universities to sign up to the San Francisco Declaration on Research Assessment (DORA). Remarkably, although DORA signatories include such august bodies as the Royal Society, the British Academy, the Wellcome Trust and HEFCE, only two UK universities are verified signatories: the University of Sussex and University College London.
The Council for Defence of British Universities has now signed DORA, and we would urge others to persuade their universities to do the same.