Discussion piece by the CDBU Steering Group
Results from the Research Excellence Framework (REF) were publicly announced on 18th December, followed by a spate of triumphalist messages from university PR departments. Deeper analysis followed, in the pages of Times Higher Education, in the wider media and on blogs.
CDBU has from the outset expressed concern about the REF, much of it consistent with the criticism that has been expressed elsewhere. In particular, we note:
Inefficiency: As Derek Sayer has noted, the REF has absorbed a great deal of time and money that might have been better spent elsewhere. The precise cost has yet to be reported, but it is likely to exceed the official figure of £60m, and even that does not take into account the time of academic staff. Universities have taken on new staff to do the laborious work of compiling data and writing impact statements, diverting funds from front-line academia and adding to administrative bloat.
Questionable validity: Derek Sayer has cogently argued that the peer review element of the REF is open to bias from subjective, idiosyncratic and inexpert opinions. It is also unaccountable, in the sense that the ratings made of individual outputs are destroyed. One can see why this is done: otherwise HEFCE could be inundated with requests for information and appeals. But the unavailability of the raw data does not inspire confidence in the process, especially when there are widespread accusations of games-playing and grade inflation.
Concentration of funding in a few institutions: We are told that the goal is to award quality-related funding, but as currently implemented, this leads inevitably to a process whereby the rich get richer and the poor get poorer, with the bulk of funds concentrated in a few institutions. We suspect that the intention of including ‘impact’ in the REF was to reduce the disparity between the Golden Triangle (Oxford, Cambridge and London) and other institutions which might be doing excellent applied work, but if anything the opposite has happened. We do not yet know what the funding formula will be, but if it is, as widely predicted, heavily biased in favour of 4* research, we could move to a situation where only the large institutions will survive to be research-active. There has been no discussion of whether such an outcome is desirable.
Shifting the balance of funding across disciplines: A recent article in the Times Higher Education noted another issue: the tendency for those in the Sciences to obtain higher scores on the REF than those in the Humanities. Quotes from HEFCE officials in the article offered no reassurance to those who were concerned this could mean a cut in funding for humanities. Such a move, if accompanied by changes to student funding to advantage those in STEM subjects, could dramatically reduce the strength of Humanities in the UK.
Unaccountable flexibility in the funding formula: There are many different ways of arriving at ratings. For instance, whether or not the ratings take account of ‘intensity’ (the number of returnable staff who were actually entered) can dramatically alter rank orderings. Or we could adopt Graeme Wise’s suggestion that a ‘bang for your buck’ metric, assessing outputs in relation to grant income, would be most appropriate. More radical still is Dermot Lynott’s suggestion that the greatest rewards should go to those whose outputs were impressive relative to their scores on environment. Needless to say, a very different profile of winners and losers emerged from such an analysis. It will ultimately be a political decision how to translate REF scores into funding. We have to ask whether it is worth going through this entire long-winded exercise if, simply by changing the funding formula, one can make a dramatic difference to an institution’s funding and achieve a politically expedient outcome.
Damage inflicted on careers and morale: The criteria for entering staff for the REF could appear quite cavalier; for instance, the requirement for a numerical ratio between number of staff entered and number of case studies meant that some departments with few case studies were unable to enter all plausible staff. Derek Sayer has described instances of decisions to enter staff being made on what appeared to be flimsy evidence based on ad hoc internal evaluations. Yet being identified as ‘non-REFable’ is not only damaging to morale, but could have real impacts on prospects for promotion and job security.
Focus on competition rather than collaboration: The REF exercise creates rank orderings, and everyone is desperately trying to nudge ahead of the others. In fact, there are so many different ways of doing the ranking that almost everyone can be satisfied that they are ‘among the top’ on some index or other. Those who crowed loudest about their success tried to temper this by arguing that they were celebrating a broader ‘British’ success, but this seems perverse. Why should concentrating ever more of the excellent research in an ever smaller number of institutions be regarded as a national success story? It is, of course, widely believed that competition is a force for good, stimulating people to do better than they otherwise might. However, many in academia take the view that they don’t need to be incentivised by competition to work hard: they are in the job for the love of it, and would like their efforts to be appreciated for what they are, not because they help push the institution up a league table. Competition can also damage relationships between different departments within a university, if disparities in REF performance lead to bickering about who is more valuable.
Perverse incentives that damage research: These may play out differently in different subject areas, but overall, many academics feel that they are not able to do the research they want in the way they would like. In science, there are intense pressures to publish in high-impact journals and bring in grant income. Some institutions are notorious for threatening redundancy to scientific staff who do not meet some agreed quota of research income, creating incentives to do ever more expensive research (see for instance cases at Kings College London, Imperial College, and Warwick University Medical School). In humanities, the pressure to produce a steady stream of research articles and monographs has led to the sense of an enforced move to over-specialisation, with academics increasingly incapable of explaining or demonstrating the broader significance of their work. Younger generations of academics find that their direction of research is being wholly driven by ‘REF-ability’, and that journal publications automatically trump those in volumes of collected essays, even when the latter may be more important for the field.
Perverse incentives on hiring practices: This is another consequence of the intensely competitive culture that is induced by the REF. We have, particularly around the time of the REF, a market in research ‘super-stars’, who can attract impressive transfer fees. People from other institutions who are employed at only 20 per cent part-time suddenly appear on the books, boosting the institution’s return on funding and outputs.
Devaluation of non-research activity: Academics whose positions require them to teach and do research have felt pressured to focus principally on research, and teaching has consequently been devalued. It is sometimes suggested that the Impact agenda of the REF also encourages academics to spend time on public engagement, but in fact it has the opposite effect. Public engagement does not count as ‘impact’ for REF purposes: to demonstrate REF impact, one must provide concrete documentation of how a specific piece of research has influenced non-academic users, such as policy-makers, health professionals, museums, etc.
How have we come to this?
Given that many of these points were made in the run-up to the announcement of REF results, we have to ask how it is that we find ourselves trapped in such an undesirable system. It is noteworthy that the REF is popular with many vice-chancellors and administrative teams. It makes it easier to manage staff, with objective criteria for hiring and firing, and provides league tables to measure progress by. For those already attached to the vision of a university as big business, it seems the natural next step to have objective rules for defining winners and losers so that one can directly measure an individual’s likely ability to bring money into the university without having to make difficult judgements about the intrinsic quality of their work.
It is a moot point whether the collapse of the circle of winners to Oxbridge and London was the result of deliberate planning, or an unintended consequence of how the system operates. Be this as it may, one concern is that this could provide further pressure for the British university system to reconfigure itself so that it can compete with the American private elite. There would be increasing reluctance to use general taxation to concentrate even more educational resources within close proximity to London, and instead we could see a shift to private funding, increasing the extent to which access to the best institutions is distributed according to wealth rather than ability. In a few short years we could see the transformation of a genuinely national system of higher education, publicly funded because it is designed to serve everyone, into a privately funded system that is world-class for the few who can afford to access it, but a disaster for the country as a whole. As in the US, educational opportunity would be concentrated overwhelmingly in places where it can be accessed primarily by the wealthy, privileged and well-connected.
At the time of the announcement of REF results, there was a sense that anyone who criticised the celebrations was either a bad loser, or – if they came from an institution that did well – a traitor for not celebrating British success. At CDBU we are proud of UK universities and their research reputation, but our loyalty is to our discipline, our profession, our vocation and our sense of their place in the wider scheme of things: we fear that the assessment process embodied in the REF will in the longer term damage these.
Where next?
It is, of course, all very well to criticise and paint visions of a dystopian future. If we wish to replace the current system, we must look at alternative ways forward. At a debate about the REF, organised by Sage Publishers on 8th December, David Willetts MP, who until July 2014 was Minister of State for Universities and Science, took the view that some of those who disapproved of REF were just dinosaurs who wanted to go back to the 1970s, when funds were allocated to institutions by a group of the great and the good making judgements over dinner at the Athenaeum. We would dispute that this is the only alternative to the current system. But Willetts was right on another count: he pointed out that the current REF system was not imposed by government, nor by HEFCE. Indeed, they had been actively pursuing the idea of using a simpler metrics-based system for the REF, but it was resoundingly rejected by the academic community. Government, according to Willetts, would listen to any reasonable proposal for a new system. Clearly, it is now up to the academics themselves to propose a viable alternative.
At the same meeting, David Sweeney, Director of HEFCE responsible for Education and Knowledge Exchange, implied that critics of the REF just wanted to be handed public money without any accountability. He emphasised that the government and taxpayer put money into university research and had a right to know about the outcomes from the investment they had made. We totally agree. But we take issue with those who, like Mark Leach, Director and Editor-in-Chief of Wonkhe, think that the REF, for all its limitations, provides a good solution. Our position is that, for all the reasons given above, the REF is a seriously flawed system for deciding on disbursement of research funds, which in the long run will do the UK university system more harm than good.
What alternatives are there?
1. One possibility that has been discussed is to remove the QR component of funding altogether, and give all funds to the research councils. The problem with this solution is that it would mean the research councils would have to grow in size enormously, the load on reviewers, already seen by many as unsustainable, would increase yet further, and pressures on academics to bring in research grants would become even more intense. It has also been argued that it would further increase disparities between the Golden Triangle (Oxbridge and London) and the rest, and would disfavour non-STEM disciplines.
2. HEFCE has been looking at various publication-based metrics that might substitute for the REF, and plans some empirical studies comparing metric-based evaluation with REF results. However, metrics have been vigorously opposed by many in the academic community, especially in the humanities, where they agree much less well with expert opinion than in the sciences. We do at least now have hard data from the REF that can be used to evaluate how metrics would perform, but there is a real risk that the introduction of any metric will further distort incentives, so that the measure becomes the goal.
3. At the Sage meeting, Derek Sayer put forward another interesting alternative, which was that funding should be based just on the ‘research environment’ component of the REF, which focuses more on inputs than outputs.
4. An even simpler option that would retain the dual support system but remove quality-related funding would involve disbursing funds purely on the basis of the number of active researchers in a department. This could be criticised for leading to a ‘prairie farming’ model, whereby departments would band together to create enormous conglomerates that would benefit from economies of scale. One could, however, put a limit on the size of unit entered. At the Sage meeting, David Sweeney expressed himself strongly opposed to this solution, even though in the last round it gave a funding result that was highly correlated with actual funding outcomes from the RAE, in both science and humanities. He is right to note that, despite the high correlation, there would undoubtedly be winners and losers, with substantial gains and losses in real terms relative to the RAE result; the question is whether this would involve unfairness. It is hard to say, given that we have no gold standard. One could of course further argue that some measure of quality is needed to incentivise people to do better, and to guard against freeloaders like Laurie Taylor’s Dr Piercemuller. Yet, as we have noted, for most academics, exhortations from managers to ‘do better’ don’t achieve much and may indeed be counterproductive. We need to be accountable for the public money spent on university research, but subjecting every apple in the barrel to an exhaustive x-ray examination may not be the best way to identify the rotten ones.
We do not have a single solution, but we think that academics must take control of this process and not leave it in the hands of HEFCE and the government. This article about universities in the Netherlands has strong parallels with the UK situation: the author argues that academics have been too passive in accepting an emphasis on competition, and the use of evaluation systems as means of control. There is unlikely to be an ideal solution, and we may have to live with the ‘least bad’ option. But let us consider all options in terms of how far they are likely to exacerbate or resolve the problems outlined above, or we may find ourselves saddled with something even worse than REF2014.
We hope this article opens up the discussion on this topic. Please do add your comments. We are moderating comments to exclude spam, but non-anonymous, on-topic comments will be published unless they contravene the usual rules.
Finally, if you agree with the broad concerns expressed here, please do consider joining the CDBU to help us campaign more effectively for change.
Really good discussion points.
Having worked in both modern and pre-1992 university environments (and seen both excellent and poorer research activity in both – big thumbs up to Strathclyde here), I have a lot of sympathy for the argument that we should incorporate “bang for buck” or “success in less traditionally research-intensive environment” considerations.
It seems to me to be madness to incentivise needless research spend.
Professor Dan Cohn-Sherbok sent this comment by email:
The REF encourages universities to make exaggerated claims about the excellence of their research on websites and through their publications. Even institutions with very low rankings isolate the few submissions that gained a high score and boast that their research is world class.
The whole thing seems based on the idea that there isn’t enough money for everyone to have what they would ideally want. If that is so, the easier solution would be to reduce the number of people, so that everyone could, in fact, have what they need, and simply accept as a fact of life, with a shrug of the shoulders, that some people will do better work than others. That would appear organisationally and financially superior to all the time, work and money that seems to go into the REF.
Patrick Ainley, University of Greenwich has added this comment via email:
As Colin Waugh, editor of Post-16 Educator, suggests, ‘nominal HE is being differentiated (for example, by the concentration of research funding) into a posh bit that workers pay for from their taxes but from which they are largely excluded as students, and another bit which is increasingly vocationalised and privatized and, also, for those reasons, pushed into what is in effect a single FE (or nominally FHE) sector.’
I agree that cheaper, easier and more valid – or just as valid – alternatives are needed. Years and years ago, before the first RAE, the academic GPs in the UK sat down together to rank all their departments by consensus, put the results in a sealed box, and then compared them later to the laborious RAE results. The correlation was of course very good! This was published in the Lancet. Such informality would of course be frowned on these days, but if something works and can be done in an afternoon, why not?
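The comparison described in that study is, at heart, a rank-correlation exercise: two orderings of the same departments are compared. As a minimal sketch of how such a check could be run (with invented rankings, not the actual Lancet data), Spearman's rho for untied rankings needs nothing beyond a few lines of Python:

```python
# Spearman rank correlation between two rankings of the same departments.
# Illustrative only: the rankings below are invented, not the Howie & Stott data.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two untied rankings of the same n items:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference."""
    assert len(rank_a) == len(rank_b)
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical example: informal consensus ranking vs formal-exercise
# ranking of eight departments (1 = best).
consensus = [1, 2, 3, 4, 5, 6, 7, 8]
formal = [2, 1, 3, 5, 4, 6, 8, 7]

print(round(spearman_rho(consensus, formal), 3))  # → 0.929
```

A value near 1 is what "the correlation was of course very good" amounts to: the cheap afternoon exercise and the laborious formal one ordered departments almost identically.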
We tracked down the article you mentioned. It is:
Howie, J. G. R., & Stott, N. C. H. (1993). The Universities Research Assessment Exercise 1992 – A second opinion. Lancet, 342(8872), 665-666. doi: 10.1016/0140-6736(93)91765-e
Rather depressingly, we also found other Lancet commentaries on the last Research Assessment Exercise, making some of the points that CDBU raised here. At the time, it was anticipated the whole thing might be abolished, but instead it has become even more complex.
Anonymous. (2007). What next for the UK’s research assessment exercise? Lancet, 370(9601), 1738-1738.
Banatvala, J., Bell, P., & Symonds, M. (2005). The Research Assessment Exercise is bad for UK medicine. Lancet, 365(9458), 458-460.
The RAE and the REF do no harm if they mainly focus on finding the best research and publications and keep the quantity of papers submitted by each individual low. What is wrong is for research assessments, now or in the future, to focus too much on grants won. Not all research of excellent (international? galactic?) quality needs huge funds, even in some areas of the sciences. These areas are surely the most “cost-efficient”, and their researchers should be congratulated by Vice-Chancellors for their frugality rather than threatened with “more grants or else” letters. These threats are particularly ironic, as universities constantly whine that many grants don’t sufficiently cover the overheads the work entails. Universities should be enabled by government/HEFCE grant to support low-cost, high-quality research in novel and worthwhile areas which are not currently fashionable and attractive to grant funders.
Vice Chancellors need to be reminded of their responsibility to deepen, spread and nourish Civilization not just to construct more and more prestigious buildings here and in China.
Thanks.
Your suggestion of valuing cost-efficiency is one advocated by John Ioannidis in this recent article:
Ioannidis, J. P. A. (2014). How to Make More Published Research True. PLoS Medicine, 11(10), e1001747. doi: 10.1371/journal.pmed.1001747
He’s an influential man and there is evidence that some funders are starting to get interested in what he’s saying, as they recognise the waste in the current system. So maybe some hope on the horizon.
Really good discussion points. I also agree that cheaper, easier and more valid or just as valid alternatives are needed.
These points are important and ought to be considered carefully: it would be wonderful if the REF could be replaced by something better. If such large-scale replacement doesn’t happen, however, I hope CDBU members will consider supporting my proposal to tweak the current REF format to give universities an incentive to make long-term rather than short-term REF hires. This would be only a small step in the right direction, but at least it would be a positive step (because some academics would end up with longer-term contracts and because the REF results would more accurately reflect the makeup of departments over the 5 years in which funding is given based on the REF scores) and one easily achievable. You can sign a petition in favour of the proposal at http://www.thepetitionsite.com/894/968/678/make-the-next-ref-an-incentive-for-long-term-employment-not-short-term-exploitation/, and find more about how it works at https://hortensii.wordpress.com/2014/12/18/our-petition-to-hefce/. Thank you!
Thanks for this suggestion. The impact of REF gaming on early career researchers is an important concern.
The impression in the past is that HEFCE has tried to anticipate gaming and take measures against it, but they clearly have not dealt with the problem of institutions buying in talent on short-term contracts.
As noted in our document, the other way this works is the parachuting in of superstars, often from overseas, on 20% contracts. Worth looking at Chris Bertram’s piece on this (see link in section on perverse hiring practices)
I have seen no comment on the skewing effects, on both the future health of research and on the league tables, of the automatic exclusion from the REF of those on teaching-only contracts:
1) Junior staff on such contracts would not be employed unless they were research active, but because their jobs do not require them to do research they are frequently given heavier teaching loads. This means that those with research aspirations are likely to be exploited, researching in their ‘spare’ time, or simply unable to do the research that would enable them to get a balanced academic job in future. This is particularly problematic for young women who may also want to start a family.
2) League tables based on so-called ‘intensity’ ignore the exclusion from the REF of teaching-only staff, since by the terms of their contracts such staff are not eligible. ‘Intensity’ league tables such as those published in the THES last week are therefore not in fact measuring intensity. A department might be racking up a 100% return of ‘eligible’ academic staff while employing hordes of struggling researchers as teaching staff, none of whom was returned.
REF2014 rules on the return of staff therefore militated against good and fair employment practices and made a nonsense of initiatives like Athena Swan and the protocol for protecting and developing EC staff. In future, if the aim really is to get a snapshot of the overall research quality of each department and not simply to restrict research resources to ever smaller numbers of institutions, ALL staff with teaching or research responsibilities should be included in the exercise and there should be no possibility (let alone expectation, and even encouragement as there was this time from HEFCE) for non-return.
Thanks. This reiterates the concerns in Eleanor’s comment above.
Whatever system we end up with in 2020, this is something that needs attention and might be more straightforward to fix than other issues.
Regarding the alternatives, I do not think the following is a strong objection:
“One possibility that has been discussed is to remove the QR component of funding altogether, and give all funds to the research councils. The problem with this solution is that it would mean the research councils would have to grow in size enormously…”
The success rate of applications is currently so low that a great deal of time and effort is lost (perhaps the most dispiriting thing about research, especially for younger researchers). If we could double or triple success rates, that would be a real gain in efficiency. It would mean an increase in the size of the councils, but given that they are already processing so many unsuccessful applications, the increase might not be so great (just more happy outcomes). The cost of making the councils larger so that they could support more research would be offset by eliminating the extraordinary cost of the REF.
Thanks. Giving all funding to research councils has been mooted before and others may agree that the benefits outweigh the disadvantages. We would all end up doing even more reviewing, but perhaps that’s a price worth paying.
It may be unpopular with VCs, as it would make forward planning more difficult if they could not rely on a large lump sum coming in. It might be possible to deal with that through funds set aside for large infrastructure projects, disbursed by the research councils.
In terms of unforeseen consequences, there might be a worry that pressure on academics to bring in grants, already a concern in some places, could get worse – though it might be counteracted by other measures, such as restrictions on the number of grants held by any one individual and, for those who have had funding, some consideration of track record in terms of the ‘bang for your buck’ metric favoured by Ioannidis (see reference in response to an earlier comment).
REF skews the hiring process in other ways too. There is the imposition of an artificial cycle, where a new PhD’s chances of finding a first job differ depending on when they graduate: it is easier to persuade committees of your (potential) REFability when it’s early in the cycle than a year before the cut-off date where you must have the publications in hand, or else.
The importance of the REF in hiring also hinders international mobility. An early-career researcher from outside the UK (or, for that matter, a new UK PhD who hasn’t received sufficient guidance for whatever reason) cannot be expected to know all about this game and won’t be putting ‘here is how many REFable outputs I have’ in the cover letter. In effect, this introduces a selection for admin savviness in addition to academic excellence. (Of course, one might argue that it is not necessarily a bad thing, but just let’s not pretend this bias doesn’t exist too.)
Where next?
From the point of view of the humanities, the objection to metrics has been to citation counts: we generally cite research to show what’s wrong with it, so the most irritating work would score best. Conversely, one objection to the current system is that submitting work to be read discourages intellectual ambition. Some universities sent round instructions not to submit anything controversial for fear of antagonising the panel. (If it’s not controversial, it’s probably not worth writing.)
So what if we ruled out both citation counts and the reading of outputs? Assuming that we have to have league tables, I like the Sayer plan of assessing the environment. The submission would include current research activities, a five-year plan and data — on staff, staff development, research grants, PhDs awarded, etc. In addition, all publications in the period by all members of the department would be listed to show whether research was taking place and whether a reasonable proportion of it was thought good enough to be included in international publications. All members of the unit of assessment would be entered. Fly-ins would be made visible by a requirement to list contributions to the environment beyond delivering a paper in the last year of the assessment period.
On this basis, the illusion that the results define some ineffable quality would be seen for what it is, the process would be cheaper and less taxing, while departments divided by exclusions from the process could begin to heal their wounds. Best of all, academics could write the books and articles that they judge need to be written and at their own preferred pace. The REF would simply be a comparative snapshot of UoAs at a specific moment.
The REF is not only very expensive but also encourages the perverse incentives that have done much to corrupt science. It is highly unsatisfactory, so the real question becomes what should be done instead?
Transferring all the money to Research Councils won’t work. It would merely encourage the grossly bad behaviour that we’ve seen at Imperial, Warwick and King’s College London, all of which have decreed that research must be as expensive as possible.
A complete re-thinking of tertiary education is needed.
It seems to be a good thing that such a large proportion of the population now get higher education. But the university system has failed to change to cope with the huge increase in the number of students. The system of highly specialist honours degrees might have been adequate when 5% of the population did degrees, but that system seems quite inappropriate when 50% are doing them. There are barely enough teachers who are qualified to teach specialist 3rd year or postgraduate courses. And many teachers must have suffered from (in my field) trying to teach the subtleties of the exponential probability density function to a huge third year class, most of whom have already decided that they want to be bankers or estate agents.
These considerations have driven me to conclude, somewhat reluctantly, that the whole system needs to be altered. Honours degrees were intended as a prelude to research, and 50% of the population are not going to do that (fortunately for the economy). Vice-chancellors have insisted on imposing on large numbers of undergraduates specialist degrees which are not what they want or need.
I believe that all first degrees should be ordinary degrees, and these should be less specialist than now. Some institutions would specialise in teaching such degrees; others would become predominantly postgraduate institutions, which would have the time, money and expertise to do proper advanced teaching, rather than the advanced PowerPoint courses that dominate what passes for Graduate Schools in the UK.
Such a system would be more egalitarian than now too. Everyone would start out with the same broad undergraduate education, and the decision about whether to specialise, and the area in which to specialise, would not have to be made before leaving school, as now, but would be postponed until two or three years later.
If this were done, most research would be done in the postgraduate institutions. Of course there is some good research in institutions that would become essentially teaching-only, so there would have to be chances for such people to move to postgraduate universities, and for some people to move in the other direction.
This procedure would, no doubt, result in a reduction in the huge number of papers that are published (but read by nobody). That is another advantage of my proposal. It’s commonly believed that there is a large amount of research that is either trivial or wrong. In biomedical research, it’s been estimated that 85% of resources are wasted (Macleod et al., 2014, Lancet 383: 101–104. doi: 10.1016/s0140-6736(13)62329-6). It’s well-known that any paper, however bad, can be published in a peer-reviewed journal. PubMed, amazingly, indexes something like 30 journals devoted to quack medicine, in which papers by quacks are peer-reviewed by quacks, and which are then solemnly counted by bean-counters as though they were real research. The pressure to publish when you have nothing to say is one of the perverse incentives of the metrics culture.
The reduction in the amount of rubbish that was funded and published would allow QR-related cash to be split among fewer places, with higher standards. If those standards were maintained, it could simply be allocated on the basis of the number of people in a department. Dorothy Bishop has shown that even under the present system, the amount of QR money received is strongly correlated with the size of the department (using metrics produces only a tiny increase in the correlation coefficient). In other words, after all the huge amount of time, effort and money that’s been put into assessment of research, every submitted researcher ends up getting much the same amount of money. That system wouldn’t work at the moment, because, with their customary dishonesty, VCs would submit the departmental cat for a share of the cash. But it could work under a system such as I’ve described.
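The kind of size-versus-funding correlation described above is straightforward to check. As a hedged sketch, with figures invented purely for illustration (they are not Bishop's dataset), Pearson's r between submitted-staff numbers and QR income can be computed from first principles:

```python
# Pearson correlation between department size and QR funding.
# The numbers below are hypothetical, invented for illustration only.
import math

def pearson_r(xs, ys):
    """Pearson's r: covariance of xs and ys divided by the product
    of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: staff submitted vs QR funding (£k) for six departments.
staff = [10, 18, 25, 40, 55, 80]
funding = [150, 300, 420, 700, 880, 1400]

print(round(pearson_r(staff, funding), 3))  # → 0.998
```

When r is this close to 1, a simple per-head allocation reproduces the assessed outcome almost exactly, which is the force of the point that the elaborate assessment machinery adds little to a head count.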
Lastly, everyone should read John Ioannidis’ paper, How to Make More Published Research True http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001747
It’s true that he doesn’t mention a system such as I propose. That’s because the USA has already got such a system. It seems to work quite well there.
Dear David, you write:
“Transferring all the money to Research Councils won’t work. It would merely encourage the grossly bad behaviour that we’ve seen at Imperial, Warwick and King’s College London, all of which have decreed that research must be as expensive as possible”
But that is only because there are such large overheads. If the overheads were small, then good proposals would be funded, and there would be no perverse incentive for expensive grants per se. I think getting rid of the REF, putting the money into research councils, and reducing the overheads might be a big improvement. What do you think? I think just as problematic as the REF is all the time spent on grants with only a very small proportion being funded (with excellent grants rejected, and no invitation to resubmit).
Jeff
@Jeff Bowers
That’s a very good point. It would be a lot better than doing nothing. Nevertheless, I’d like to see the bigger changes that I outlined. They are not dissimilar to the principles on which the University of California was established in 1960, but in the UK we have stuck to the pre-war system which, I think, is not sensible in an age where so many people get higher education. See https://en.m.wikipedia.org/wiki/California_Master_Plan_for_Higher_Education
Aha, but there is a snag. If I recall rightly, HEFCE money was transferred to the Research Councils so that they could pay full economic costs (a somewhat flexible quantity).
Reducing overheads might indeed solve one problem, but who would make up the shortfall? Not QR because that’s just been given to the Research Councils.
I’ve written a bit more coherently about my suggestion for 2-stage degrees on my blog
http://www.dcscience.net/2015/02/01/what-to-do-about-research-assessment-the-ref-a-proposal-for-two-stage-university-education/
The next day, a version of this appeared in the Guardian (without the comments on its implications for the REF).
http://www.theguardian.com/higher-education-network/2015/feb/03/honours-degrees-arent-for-all-some-unis-should-only-teach-two-year-courses