A whole lotta cheatin’ going on? REF stats revisited

Opinion Piece by Derek Sayer*

Editorial Note: In our previous blogpost, we noted that while there was agreement that REF2014 was problematic, there was less agreement about alternatives. To make progress, we need more debate. We hope that this piece by Derek Sayer will stimulate this, and we welcome comments. Please note that comments are moderated and will not be published immediately.

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that ‘a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF’[i] rings true in a world in which Cardiff University can truthfully[ii] claim that it ‘has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise’ from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 ‘Table of Excellence,’ which is based on the GPA of the scores assigned by the REF’s ‘expert panels’ to the three elements in each university’s submission (outputs 65%, impact 20%, environment 15%)—just behind Imperial, LSE, Oxford and Cambridge. Whether this ‘confirms [Cardiff’s] place as a world-leading university,’ as its website claims, is more questionable.[iii]  These figures are a minefield.
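The arithmetic behind such a ranking is simple to sketch. The sketch below uses the element weights stated above (outputs 65%, impact 20%, environment 15%); the quality profiles are invented for illustration and are not any university's actual REF scores.

```python
# REF-style grade point average (GPA), illustrative sketch only.
# Each element (outputs, impact, environment) receives a quality profile:
# the percentage of the submission judged at each star level (4* down to
# unclassified). The overall GPA weights the three element GPAs by the
# published element weights.

WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def element_gpa(profile):
    """GPA of one element from its quality profile.

    `profile` maps star level (0-4) to the percentage of the submission
    judged at that level; the percentages should sum to 100.
    """
    return sum(stars * pct for stars, pct in profile.items()) / 100

def overall_gpa(profiles):
    """Weighted overall GPA across the three REF elements."""
    return sum(WEIGHTS[elem] * element_gpa(p) for elem, p in profiles.items())

# Hypothetical submission -- all figures invented for illustration.
example = {
    "outputs":     {4: 30, 3: 50, 2: 15, 1: 5, 0: 0},
    "impact":      {4: 40, 3: 40, 2: 20, 1: 0, 0: 0},
    "environment": {4: 50, 3: 50, 2: 0,  1: 0, 0: 0},
}
print(round(overall_gpa(example), 2))
```

Note what this formula does not capture: because the GPA is an average over whatever staff were submitted, a university that excludes its weaker researchers raises its GPA without any change in the quality of the research actually being done, which is precisely the selectivity problem discussed below.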

Although HEFCE encouraged universities to be ‘inclusive’ in entering their staff in REF2014, they were not obliged to return all eligible staff and there were good reasons for those with aspirations to climb the league tables to be more ‘strategic’ in staff selection than in previous RAEs. Prominent among these were (1) HEFCE’s defunding of 2* outputs from 2011, which meant outputs scoring below 3* would now negatively affect a university’s rank order without any compensating gain in QR income, and (2) HEFCE’s pegging the number of impact case studies required to the number of staff members entered per unit of assessment, which created a perverse incentive to exclude research-active staff if this would avoid having to submit a weak impact case study.[iv] Though the wholesale exclusions feared by some did not materialize across the sector, it is clear that some institutions were far more selective in REF2014 than in RAE2008.

Unfortunately, data that would have permitted direct comparisons with the numbers of staff entered by individual universities in RAE2008 were never published, but Higher Education Statistics Agency (HESA) figures for FTE staff eligible to be submitted allow broad comparisons across universities in REF2014. It is evident from these that selectivity, rather than an improvement in research quality per se, played a large part in Cardiff’s ‘meteoric rise’ in the rankings. The same may be true for some other schools that significantly improved their positions, among them King’s (up to 7th in 2014 from 22= in 2008), Bath (14= from 20=), Swansea (22= from 56=), Cranfield (31= from 49), Heriot-Watt (33 from 45), and Aston (35= from 52=). All of these universities except King’s entered fewer than 75% of their eligible staff members, and King’s has the lowest percentage (80%) of any university in the REF top 10 other than Cardiff itself.


Reflections on the REF and the need for change

Discussion piece by the CDBU Steering Group


Results from the research excellence framework (REF) were publicly announced on 18th December, followed by a spate of triumphalist messages from University PR departments. Deeper analysis followed, both in the pages of the Times Higher Education, and in the media and on blogs.

CDBU has from the outset expressed concern about the REF, much of it consistent with the criticism that has been expressed elsewhere. In particular, we note:

Inefficiency: As Derek Sayer has noted, the REF has absorbed a great deal of time and money that might have been better spent elsewhere. The precise cost has yet to be reported, but it is likely to exceed the £60m official figure, even before taking into account the time of academic staff. Universities have taken on new staff to do the laborious work of compiling data and writing impact statements, but this has diverted funds from front-line academia and increased administrative bloat.

Questionable validity: Derek Sayer has cogently argued the case that the peer review element of the REF is open to bias from subjective, idiosyncratic and inexpert opinions. It is also unaccountable, in the sense that the ratings made of individual outputs are destroyed. One can see why this is done: otherwise HEFCE could be inundated with requests for information and appeals. But if the raw data are not available, the process cannot inspire confidence, especially when there are widespread accusations of games-playing and grade inflation.

Concentration of funding in a few institutions: We are told that the goal is to award quality-related funding, but as currently implemented, this leads inevitably to a process whereby the rich get richer and the poor get poorer, with the bulk of funds concentrated in a few institutions. We suspect that the intention of including ‘impact’ in the REF was to reduce the disparity between the Golden Triangle (Oxford, Cambridge and London) and other institutions which might be doing excellent applied work, but if anything the opposite has happened. We do not yet know what the funding formula will be, but if it is, as widely predicted, heavily biased in favour of 4* research, we could move to a situation where only the large institutions will survive to be research-active. There has been no discussion of whether such an outcome is desirable.

Shifting the balance of funding across disciplines: A recent article in the Times Higher Education noted another issue: the tendency for those in the Sciences to obtain higher scores on the REF than those in the Humanities. Quotes from HEFCE officials in the article offered no reassurance to those who were concerned this could mean a cut in funding for humanities. Such a move, if accompanied by changes to student funding to advantage those in STEM subjects, could dramatically reduce the strength of Humanities in the UK.


Staff satisfaction is as important as student satisfaction

Opinion piece by Dorothy Bishop, 13 November 2014

Universities have become obsessed with competition: it is no longer enough to do well; you have to demonstrate you are better than the rest. And to do that, you need some kind of metric. Organisations have grown up to meet this need, and to produce league tables that compare institutions on a range of characteristics, including research excellence, reputation and teaching.

The National Student Survey has become established as a major component of this process. It has run annually across all publicly funded Higher Education Institutions (HEIs) in the UK, and it features prominently in student guides to the best universities, such as this one by the Guardian. There is no doubt that the survey has made universities more responsive to student views, and it is to be welcomed that reported student satisfaction levels have increased since the survey was introduced. Nevertheless, some, like Arti Agrawal, have expressed concerns about universities introducing quick fixes that may produce higher ratings in the short term, but lower academic quality overall: ‘With increased tuition fees, students are seen as customers who must be kept happy, and the NSS is now a customer satisfaction survey’. We even have evidence that within some universities, student satisfaction is used as an index of the quality of the teaching staff.

It is perhaps not surprising, then, that at the same time as we are told that students are getting happier and happier, academic staff seem to be growing ever more miserable. Now this could, of course, just be down to the fact that everyone likes a good moan1. But the impression one gets from reading the Times Higher Education and looking at stories anonymously contributed to CDBU’s Record the Rot archive is that there is more to it than that. The very same pressures that lead managers to treat students as consumers have led them to treat academic staff as dispensable ‘human resources’. The view of universities as institutions in constant competition with one another and the rest of the world has trickled down to the departmental level, destroying any sense of collegiality. In the long run, if teaching is done by a body of demoralised and ever-changing academics, this can only be bad for staff and students alike.

But this is only anecdote, and it would be good to have some data. The Times Higher Education started a Best Workplace Survey last year, which has the potential to provide just that. However, the sample was relatively small and self-selected. Findings such as that 39 per cent of academics felt their health was negatively affected by their work, and that one third felt their job was not secure, are hard to interpret given the vagaries of sampling. Is this typical, or was it the most disaffected who replied? Concerns about the low response rate and potential for bias meant that the THE decided not to report results by institution. My guess is that if we had proper survey data, and if staff satisfaction were incorporated into ‘best university’ rankings, then rank orderings might change quite dramatically. Furthermore, institutions that sacked staff to improve their rankings might find their strategy backfiring.

The THE’s workplace survey for 2015 is now live. I would encourage everyone working in higher education to take part, whether or not you have something to moan about. We need an adequate database on this topic so that we have a solid basis for identifying those institutions that are genuinely at the top of the league in terms of their treatment of staff, versus those that achieve high status on other indicators while presiding over an anxious and demoralised staff.

1 Especially the English. I can thoroughly recommend this book for an amusing and informative account: Fox, K. (2005). Watching the English: The Hidden Rules of English Behaviour. London: Hodder & Stoughton.