CDBU Response to REF Review

In December, Jo Johnson, the Universities and Science Minister, launched a review of university research funding. The goals of the review appear well-aligned with those of CDBU: ‘to cut red tape so that universities can focus more on delivering the world-leading research for which the UK is renowned’. The review is chaired by the President of the British Academy, Lord Stern. CDBU drafted an initial response to the call for evidence, which was then circulated to members for comment before being submitted last week. The full response can be downloaded here.

The main points can be summarised as follows:

The committee needs to take a close look at the purpose of the REF; there has been considerable mission creep and it is trying to do too many different things. Its cost-effectiveness has never been properly evaluated.

Specific suggestions are:

  • Reward institutions that have high levels of staff satisfaction.
  • Reward institutions that foster early-career researchers.
  • Consider a system where funding reflects the number of research-active staff who have contracts extending into the future.
  • Do not use incentives that treat accumulation of research funding as an end in itself.
  • Do not award a higher proportion of funding on the basis of impact.
  • Do not rely on competition to drive up standards: create incentives for more co-operation both within and between institutions.
  • Take steps to encourage a diverse research landscape, rather than creating further concentration of research in a few institutions.
  • Be vigilant about the dangers of introducing criteria that might work well in one discipline but be unsuitable for others.

We thank those CDBU members who contributed to our submission, and look forward to covering future developments on our website.

The shaky foundations of the TEF: neither logically nor practically defensible

Opinion Piece by Dorothy Bishop

I spent Sunday reading the Green Paper “Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice”, a consultation document that outlines radical plans to change how universities are evaluated and funded. The CDBU is preparing a response, but here’s the problem. BIS is not seeking views on whether the new structures they plan to introduce are a good idea. They are telling us that they are a good idea, a necessary idea, and an idea that they will implement. The consultation asks only for views on the details of that implementation.

The government will no doubt be braced for howls of protest from the usual suspects. Academics are notorious for resisting change, so there is an expectation that there will be opposition from many of the rank and file who work in universities, especially from those whose political allegiances are left of centre. CDBU is, however, a broad church, and disquiet with the Green Paper comes from academics covering a wide range of political views.

The idea behind the TEF is that teaching has not been taken seriously enough in our universities, because they have been fixated on research. As a consequence, students are getting a raw deal and employers are dissatisfied that graduates are not adequately prepared for the workplace. However, the evidence for these assertions is pretty shaky. If you are going to introduce a whole new piece of administrative machinery, you have to demonstrate that it will fix a problem. A number of commentators have warned that the TEF is a solution to a problem that does not exist.


Reflections on the Green Paper: The Teaching Excellence Framework

Opinion Piece by Roger Brown

Introduction

In Everything for Sale? (Routledge, 2013), written with Helen Carasso, the author argued that the main changes in higher education policy over the past thirty or so years could be explained in terms of the progressive marketisation of the system by governments of all political persuasions, a process that began with the Thatcher Government’s abolition of the subsidy for overseas students from 1980. The Green Paper Fulfilling Our Potential: Teaching Excellence, Social Mobility and Student Choice (BIS, 2015), published on 6th November, represents the latest stage in this process. This short paper offers an initial assessment of its main proposal: the introduction of a Teaching Excellence Framework.

The Teaching Excellence Framework

The Green Paper implies that both the quality of, and participation in, higher education have increased since the full fee regime came into effect in 2012. However:

More needs to be done to ensure that providers offering the highest quality courses are recognised and that teaching is valued as much as research. Students expect better value for money; employers need access to a pipeline of graduates with the skills they need; and the taxpayer needs to see a broad range of economic and social benefits generated by the public investment in our higher education system (page 18).

The main proposal for achieving these aims is the Teaching Excellence Framework (TEF). We are told that:

The TEF should change providers’ behaviour. Those providers that do well within the TEF will attract more student applications and will be able to raise fees in line with inflation. The additional income can be reinvested in the quality of teaching and allow providers to expand so that they can teach more students. We hope providers receiving a lower TEF assessment will choose to raise their teaching standards in order to maintain student numbers. Eventually, we anticipate some lower quality providers withdrawing from the sector, leaving space for new entrants, and raising quality overall. (page 19)  


University teachers: an endangered species?

Opinion Piece by Marina Warner

Current policies are imposing business practices on education, and the consequences are blighting the profession and will continue to inflict ever deeper damage on the people engaged in it – at all levels. The state’s relinquishing of financial support is not accompanied by any diminution of its authority: indeed, the huge expansion of management follows from direct state interference in education, as in other essential elements of a thriving society.

Last September I wrote an article for the London Review of Books about my departure from the University of Essex, followed by another piece in March reflecting on the perversion of UK Higher Education. The responses I had to these articles came from people at every stage of the profession. I had feared that I was a nostalgic humanist, but if I am, the ideals of my generation have not died. Access to education of a high standard sits very ill with business models – as the strong drift towards removing the cap on fees shows. The result of the market will be an ever-deepening divide between elite universities at one end and ‘sink’ institutions at the other.

I am going to focus on those who fulfil the prime purpose of the whole endeavour, that is, those who pass on their knowledge and foster the spirit of inquiry and understanding in their students: the teachers.

First, the policies now being discussed, which change the rules regarding Further Education in particular, will require more and more teachers. Yet throughout the profession there is a shortage, and the toll taken on those who do teach in higher education is heavy and growing heavier – economically, psychologically, socially.


A whole lotta cheatin’ going on? REF stats revisited

Opinion Piece by Derek Sayer*

Editorial Note: In our previous blogpost, we noted that while there was agreement that REF2014 was problematic, there was less agreement about alternatives. To make progress, we need more debate. We hope that this piece by Derek Sayer will stimulate this, and we welcome comments. Please note that comments are moderated and will not be published immediately.

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than in any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that ‘a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF’[i] rings true in a world in which Cardiff University can truthfully[ii] claim that it ‘has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise’ from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 ‘Table of Excellence,’ which is based on the GPA of the scores assigned by the REF’s ‘expert panels’ to the three elements in each university’s submission (outputs 65%, impact 20%, environment 15%) – just behind Imperial, LSE, Oxford and Cambridge. Whether this ‘confirms [Cardiff’s] place as a world-leading university,’ as its website claims, is more questionable.[iii] These figures are a minefield.

Although HEFCE encouraged universities to be ‘inclusive’ in entering their staff in REF2014, they were not obliged to return all eligible staff and there were good reasons for those with aspirations to climb the league tables to be more ‘strategic’ in staff selection than in previous RAEs. Prominent among these were (1) HEFCE’s defunding of 2* outputs from 2011, which meant outputs scoring below 3* would now negatively affect a university’s rank order without any compensating gain in QR income, and (2) HEFCE’s pegging the number of impact case studies required to the number of staff members entered per unit of assessment, which created a perverse incentive to exclude research-active staff if this would avoid having to submit a weak impact case study.[iv] Though the wholesale exclusions feared by some did not materialize across the sector, it is clear that some institutions were far more selective in REF2014 than in RAE2008.

Unfortunately, data that would have permitted direct comparisons with the numbers of staff entered by individual universities in RAE2008 were never published, but Higher Education Statistics Agency (HESA) figures for FTE staff eligible to be submitted allow broad comparisons across universities in REF2014. It is evident from these that selectivity, rather than an improvement in research quality per se, played a large part in Cardiff’s ‘meteoric rise’ in the rankings. The same may be true for some other schools that significantly improved their positions, among them King’s (up to 7th in 2014 from 22= in 2008), Bath (14= from 20=), Swansea (22= from 56=), Cranfield (31= from 49), Heriot-Watt (33 from 45), and Aston (35= from 52=). All of these universities except King’s entered fewer than 75% of their eligible staff members, and King’s has the lowest percentage (80%) of any university in the REF top 10 other than Cardiff itself.


Reflections on the REF and the need for change

Discussion piece by the CDBU Steering Group


Results from the Research Excellence Framework (REF) were publicly announced on 18th December, followed by a spate of triumphalist messages from university PR departments. Deeper analysis followed, both in the pages of the Times Higher Education and elsewhere in the media and on blogs.

CDBU has from the outset expressed concern about the REF, much of it consistent with the criticism that has been expressed elsewhere. In particular, we note:

Inefficiency: As Derek Sayer has noted, the REF has absorbed a great deal of time and money that might have been better spent elsewhere. The precise cost has yet to be reported, but it is likely to be greater than the official figure of £60m, and even that does not take into account the cost of academic staff time. Universities have taken on new staff to do the laborious work of compiling data and writing impact statements, but this has diverted funds from front-line academia and increased administrative bloat.

Questionable validity: Derek Sayer has cogently argued that the peer review element of the REF is open to bias from subjective, idiosyncratic and inexpert opinions. It is also unaccountable, in the sense that the ratings made of individual outputs are destroyed. One can see why this is done: otherwise HEFCE could be inundated with requests for information and appeals. But when the raw data are not available, the process does not inspire confidence, especially given widespread accusations of games-playing and grade inflation.

Concentration of funding in a few institutions: We are told that the goal is to award quality-related funding, but as currently implemented, this leads inevitably to a process whereby the rich get richer and the poor get poorer, with the bulk of funds concentrated in a few institutions. We suspect that the intention of including ‘impact’ in the REF was to reduce the disparity between the Golden Triangle (Oxford, Cambridge and London) and other institutions which might be doing excellent applied work, but if anything the opposite has happened. We do not yet know what the funding formula will be, but if it is, as widely predicted, heavily biased in favour of 4* research, we could move to a situation where only the large institutions will survive to be research-active. There has been no discussion of whether such an outcome is desirable.

Shifting the balance of funding across disciplines: A recent article in the Times Higher Education noted another issue: the tendency for those in the Sciences to obtain higher scores on the REF than those in the Humanities. Quotes from HEFCE officials in the article offered no reassurance to those who were concerned this could mean a cut in funding for humanities. Such a move, if accompanied by changes to student funding to advantage those in STEM subjects, could dramatically reduce the strength of Humanities in the UK.


Problems with Peer Review for the REF

Opinion Piece by Derek Sayer* 

At the behest of universities minister David Willetts, HEFCE established an Independent Review of the Role of Metrics in Research Assessment in April 2014, chaired by James Wilsdon. This followed consultations in 2008-9 that played a decisive role in persuading the government to back down on previous plans to replace the RAE with a metrics-based system of research assessment. Wilsdon’s call for evidence, which was open from 1 May to 30 June 2014, received 153 responses ‘reflecting a high level of interest and engagement from across the sector’ (Letter to Rt. Hon. Greg Clark MP). Sixty-seven of these were from HEIs, 27 from learned societies and three from mission groups. As in 2008-9, the British academic establishment (including the Russell Group, RCUK, the Royal Society, the British Academy, and the Wellcome Trust) made its voice heard. Predictably, ’57 per cent of the responses expressed overall scepticism about the further introduction of metrics into research assessment,’ while ‘a common theme that emerged was that peer review should be retained as the primary mechanism for evaluating research quality. Both sceptical and supportive responses argued that metrics must not be seen as a substitute for peer review … which should continue to be the “gold standard” for research assessment’ (Wilsdon review, Summary of responses submitted to the call for evidence).

The stock arguments against the use of metrics in research assessment were widely reiterated: journal impact factors cannot be a proxy for quality because ‘high-quality’ journals may still publish poor-quality articles; using citations as a metric ignores negative citation and self-citation; in some humanities and social science disciplines it is more common to produce books than articles, which will significantly reduce their citation counts, and so on. Much of this criticism, I would argue, is a red herring. Most of these points could easily be addressed by anybody who seriously wished to consider how bibliometrics might sensibly inform a research assessment exercise rather than kill any such suggestion at birth (don’t use JIFs, exclude self-citations, use indices like Publish or Perish that include monographs as well as articles and control for disciplinary variations). What is remarkable, however, is that while these faults are often presented as sufficient reason to reject the use of metrics in research assessment out of hand, the virtues of ‘peer review’ are simply assumed by most contributors to this discussion rather than scrutinized or evidenced. This matters because whatever the merits of peer review in the abstract—and there is room for debate on what is by its very nature a subjective process—the evaluation procedures used in REF 2014 (and previous RAEs) not only fail to meet HEFCE’s own claims to provide ‘expert review of the outputs’ but fall far short of internationally accepted norms of peer review.
