The Haldane principle: remembering Fisher and getting that definition right

Opinion piece by G. R. Evans

It is very welcome news that the Government has decided to include a definition of the Haldane Principle on the face of the Bill. Jo Johnson made a special point of this in his speech to Universities UK on 24 February. An accompanying document was published jointly by the two Departments of State that will in future be responsible for higher education. It proudly states that:

the amendment that we have tabled will, for the first time in history, enshrine the Haldane Principle in law.

This document did not, however, give more details. The actual amendment to Clause 99, which proposes and contains the definition, is to be found in yet another document:

Page 64, line 10, at end insert -

“The ‘Haldane principle’ is the principle that decisions on individual research proposals are best taken following an evaluation of the quality and likely impact of the proposals (such as a peer review process).”

Note that this definition does not stipulate an exercise of academic judgement, merely an ‘evaluation’ including the ‘likely impact’ of research to be funded. Furthermore, the definition does not mention that infrastructure funding will come from Research England. Rather, an earlier statement merely stipulates that the Councils (a sub-set of UKRI) will be responsible for the disbursement of project funding:

Page 64, line 7, at end insert -

“the Haldane principle, where the grant or direction mentioned in subsection (1) is in respect of functions exercisable by one or more of the Councils mentioned in section 91(1) pursuant to arrangements under that section,”

This took me back to the question of what Haldane actually called for and the context in which he did so. His thoughts on higher education matters are chiefly to be found in some collected writings put together in a period when he was actively involved in fostering the development of the new ‘redbrick’ universities. He developed a special enthusiasm for technical education, but essentially he was interested in the work of a university as a whole, not merely its research.

He recognised that if higher education was going to expand successfully something would have to be done about the funding that would be needed:

‘the truth is that work of this kind must be more largely assisted and fostered by the State than is the tradition of today if it is to succeed’

(Education and Empire: Addresses on certain topics of the day (London, 1902), p.38).

The new universities began to accept state funding but it was not at first expected that Oxford or Cambridge would need to apply. The First World War upset many expectations.

A decisive correspondence followed between November 1918 and May 1919, between the then President of the Board of Education, H. A. L. Fisher, and the Vice-Chancellor of the University of Oxford. This was published in full in May in the Oxford University Gazette, under the heading Applications for Government Grants (Oxford University Gazette, 49 (1918-9), p.471-8).

A deputation from the universities ‘asking for larger subsidies from the State’ met Fisher on 23 November. Oxford and Cambridge consulted one another and agreed that it would be wise to join in, but without committing themselves. Oxford was understandably nervous about accepting state funding because of the likelihood that it would bring State control.

But the Oxford scientists, scenting money, put in their own bids for specific sums for particular purposes. The heads of departments of the University Museum wrote on 3 March, 1919 with a list of such ‘needs’, identifying sums for capital outlay and salaries and pensions for Heads of Department and scholarships for what would now be called STEM subjects.

It was in this context that Fisher seems to have made his far-reaching policy decision and stated the ‘Fisher Principle’, that the state would not interfere in the allocation of funds within universities. It would not decide directly whether to fund, say, science at Oxford, or History at Manchester. It would give funding in the form of ‘Block Grants’ and allow the universities themselves to decide how to use the money.

He wrote to the Oxford Vice-Chancellor on 16 April:

‘Henceforth…each University which receives aid from the State will receive it in the form of a single inclusive grant, for the expenditure of which the University, as distinguished from any particular Department, will be responsible. Both the Government and, I think, the great majority of the Universities are convinced that such an arrangement is favourable not only to the preservation of University autonomy but also to the efficient administration of the University funds.’

The University’s Council (then the Hebdomadal Council, meeting weekly in term-time) requested an interview with Fisher and on May 15 a deputation of five, led by the Vice-Chancellor, had a meeting with him. The Memorandum of the Interview ‘kindly furnished by Mr. Fisher’s Secretary’ is also published in the Gazette. It repeated the policy principle arrived at in November, that ‘the English Universities in receipt of State-aid favoured …a general Block Grant’. It was explained that a Standing Committee was in process of ‘formation’ and that ‘henceforward, practically all the money for University Education would be borne on the Treasury Vote and would be allocated in annual Block Grants’ as the Standing Committee recommended.

This Standing Committee developed into the University Grants Committee, which was replaced a quarter of a century ago, first by one and then by four Funding Councils. One of those, HEFCE, is now to be replaced as distributor of the remnant of that Block Grant mainly by Research England within UKRI, with only a vestige of the element previously used to fund teaching still remaining.

So there seem to be features of the Government Amendment to Clause 99 which would bear further thought if a definition of the ‘Haldane Principle’ is to enter statute.

The Haldane Principle arguably needs to be understood as it was developed in the ‘Fisher Principle’ and has been maintained for a century since. That placed a ‘buffer’ body between State and university and protected the freedom of the university to choose how to use its block grant on academic not political principles. That is not quite the thrust of the definition as it stands at present.

Nor did the ‘Fisher-Haldane Principle’ apply to the buffer body itself. The buffer stood between academic freedom and state control. It was not itself subject to that principle; it merely ensured that the principle was respected.

It is to be hoped that the legal draftsmen working on the Bill will try again. The version in the current Amendment, if it passes into law, will fail to protect the autonomy of the providers receiving funding from UKRI. Nor will it require funding decisions to be taken by academics or by autonomous institutions. The ‘peer review process’ is given as a mere example. There seems to be nothing to prevent a Minister or Secretary of State conducting ‘an evaluation of the quality and likely impact of the proposals’. Haldane and Fisher could both be turning in their graves.


Reflections on the REF and the need for change

Discussion piece by the CDBU Steering Group


Results from the research excellence framework (REF) were publicly announced on 18th December, followed by a spate of triumphalist messages from university PR departments. Deeper analysis followed, in the pages of the Times Higher Education, in the wider media, and on blogs.

CDBU has from the outset expressed concern about the REF, much of it consistent with the criticism that has been expressed elsewhere. In particular, we note:

Inefficiency: As Derek Sayer has noted, the REF has absorbed a great deal of time and money that might have been better spent elsewhere. The precise cost has yet to be reported, but it is likely to be greater than the £60m official figure, and that figure does not take into account the cost of academic staff time. Universities have taken on new staff to do the laborious work of compiling data and writing impact statements, but this has diverted funds from front-line academia and increased administrative bloat.

Questionable validity: Derek Sayer has cogently argued the case that the peer review element of the REF is open to bias from subjective, idiosyncratic and inexpert opinions. It is also unaccountable, in the sense that the ratings made of individual outputs are destroyed. One can see why this is done: otherwise HEFCE could be inundated with requests for information and appeals. But the unavailability of the raw data does not inspire confidence in the process, especially when there are widespread accusations of games-playing and grade inflation.

Concentration of funding in a few institutions: We are told that the goal is to award quality-related funding, but as currently implemented, this leads inevitably to a process whereby the rich get richer and the poor get poorer, with the bulk of funds concentrated in a few institutions. We suspect that the intention of including ‘impact’ in the REF was to reduce the disparity between the Golden Triangle (Oxford, Cambridge and London) and other institutions which might be doing excellent applied work, but if anything the opposite has happened. We do not yet know what the funding formula will be, but if it is, as widely predicted, heavily biased in favour of 4* research, we could move to a situation where only the large institutions will survive to be research-active. There has been no discussion of whether such an outcome is desirable.

Shifting the balance of funding across disciplines: A recent article in the Times Higher Education noted another issue: the tendency for those in the Sciences to obtain higher scores on the REF than those in the Humanities. Quotes from HEFCE officials in the article offered no reassurance to those who were concerned this could mean a cut in funding for humanities. Such a move, if accompanied by changes to student funding to advantage those in STEM subjects, could dramatically reduce the strength of Humanities in the UK.


Problems with Peer Review for the REF

Opinion Piece by Derek Sayer* 

At the behest of universities minister David Willetts, HEFCE established an Independent Review of the Role of Metrics in Research Assessment in April 2014, chaired by James Wilsdon. This followed consultations in 2008-9 that played a decisive role in persuading the government to back down on previous plans to replace the RAE with a metrics-based system of research assessment. Wilsdon’s call for evidence, which was open from 1 May to 30 June 2014, received 153 responses ‘reflecting a high level of interest and engagement from across the sector’ (Letter to Rt. Hon. Greg Clark MP). Sixty-seven of these were from HEIs, 27 from learned societies and three from mission groups. As in 2008-9, the British academic establishment (including the Russell Group, RCUK, the Royal Society, the British Academy, and the Wellcome Trust) made its voice heard. Predictably, ’57 per cent of the responses expressed overall scepticism about the further introduction of metrics into research assessment,’ while ‘a common theme that emerged was that peer review should be retained as the primary mechanism for evaluating research quality. Both sceptical and supportive responses argued that metrics must not be seen as a substitute for peer review … which should continue to be the “gold standard” for research assessment’ (Wilsdon review, Summary of responses submitted to the call for evidence).

The stock arguments against the use of metrics in research assessment were widely reiterated: journal impact factors cannot be a proxy for quality because ‘high-quality’ journals may still publish poor-quality articles; using citations as a metric ignores negative citation and self-citation; in some humanities and social science disciplines it is more common to produce books than articles, which will significantly reduce their citation counts, and so on. Much of this criticism, I would argue, is a red herring. Most of these points could easily be addressed by anybody who seriously wished to consider how bibliometrics might sensibly inform a research assessment exercise rather than kill any such suggestion at birth (don’t use JIFs, exclude self-citations, use indices like Publish or Perish that include monographs as well as articles and control for disciplinary variations). What is remarkable, however, is that while these faults are often presented as sufficient reason to reject the use of metrics in research assessment out of hand, the virtues of ‘peer review’ are simply assumed by most contributors to this discussion rather than scrutinized or evidenced. This matters because whatever the merits of peer review in the abstract—and there is room for debate on what is by its very nature a subjective process—the evaluation procedures used in REF 2014 (and previous RAEs) not only fail to meet HEFCE’s own claims to provide ‘expert review of the outputs’ but fall far short of internationally accepted norms of peer review.
