What is teaching intensity – and how do you measure it?

The government believes that university courses can be rated according to the level of teaching intensity they provide. Professor GR Evans detects a lack of joined-up thinking

“Prospective students deserve to know which courses deliver great teaching and great outcomes – and which ones are lagging behind,” said the minister for higher education, launching a consultation on the new subject-level teaching excellence framework (TEF) on 12 March. He also announced a competition for designing apps to enable students to find the answers quickly on their smartphones.

The subject-level TEF will be designed to rate the lifelong “value” to a future employee of choosing a particular course at a particular provider. The model is essentially one of efficient “delivery” by the provider. As planned it does not factor in assessment of the part the student plays in acquiring the learning.

Among the proposals in the government consultation document is the addition of a “supplementary” measure of “teaching intensity”, on the hypothesis that:

“…excellent teaching is likely to demand a sufficient level of teaching intensity in order to provide a high quality experience for students.”

The idea that a student is entitled to a number of hours of actual teaching or feedback from academic staff in return for the fee paid was first floated in a series of Higher Education Policy Institute (HEPI) publications. It had the attraction of simplicity, and it encouraged students to complain that they were getting too few “contact hours” a week for their high tuition fees.

In 2011, in an effort to clarify matters, the Quality Assurance Agency (QAA) published helpful guidance entitled Explaining contact hours. However, expressions of student disquiet about getting “value for money” when fees are so high have grown still louder since. In December 2017 the Education Select Committee held its first evidence session in a Value for money in higher education enquiry. In January 2018 the Office for Students (OfS) commissioned “a major piece of research” on value for money. The theme, in the form of the call for “teaching intensity”, has now been taken up in connection with planning for a subject-level TEF.

What responsibilities do students have?

This raises one of the central questions arising from the design of the TEF in general. How far in higher education should the student actively meet the “teacher” halfway in “learning”, rather than merely receive the instruction delivered? Student contracts commonly list what the provider and the student are respectively expected to do. For example, that the student “will take responsibility for [his or her] own learning and development, working in partnership with staff to become a self-reliant, independent learner” is an expectation in Bristol University’s student agreement. But this reciprocal requirement of student participation does not seem to be measured in the planned subject-level TEF.

It seems that independent study may not in the end be included, because as the consultation suggests:

“…it does not actually measure what teaching a student is receiving (and hence does not measure value for money) and is more dependent on the student than on the provider. Furthermore, it is difficult to reliably collect data on students’ independent study.”

Yet the detailed TEF Guidance for providers, in its version released in January 2018, goes into some detail on the complexities of the ways student response to teaching may take place, including, for example, the problem of “asynchronous online teaching”, where a student may visit the online teaching at any time – and perhaps many times – and the teacher may not be present at all. Is this a contact hour (or hours)? How is it to be measured in terms of “value for money”? How is its “intensity” to be quantified?

The lack of joined-up thinking with work in progress part 1: learning gain

The first and most basic test of teaching is likely to be whether students complete their courses and gain the relevant qualification. Non-completion has become a troubling feature of higher education statistics, notably in a series of Public Accounts Committee and National Audit Office reports on alternative providers. Nothing appears to be proposed in the TEF plans to test whether there is a correlation between the quality of teaching and whether or not students graduate.

Learning gain in English higher education, a recent progress report from Hefce, describes what has been achieved since the project was launched in 2015. It is recognised that this needs to be tied into the TEF, so Hefce suggests that a Learning Gain Toolkit could provide “tested methodologies for institutions to undertake their own learning gain measurements and to demonstrate the outcomes through assessments such as the TEF.” However, the project includes plans for “the use of data not just for improvement, but also for student information and performance incentives, which are the focus of TEF”.

There is now a body of “analysis of the learning gain activity identified within TEF year 2 submissions”. Hefce notes that so far this yields “little evidence that could be used across a range of providers to demonstrate learning gain”.

“Learning gain” is not defined solely in terms of knowledge and skills acquired by the student through the teaching received. It has a broader range:

“…learning gain has been understood for the purpose of HEFCE’s work to relate to the changes in knowledge, skills, work-readiness and personal development during a student’s time in HE.”

“Content knowledge” is what students traditionally acquire “through their classes and other study at university”. But “skills and competencies can be either discipline-specific or non-discipline-specific”. And “work-readiness relates to concepts of employability” rather than to the student’s capacity to leave a university ready to work for his or her living.

As to measuring the results, Hefce has been trying out a National Mixed Methodology Learning Gain project (NMMLG), “in which students at 10 selected institutions are completing a series of repeated online assessments throughout their course.”

Here there has been experimental testing to see whether students show they can do better after more time and teaching.

The “learning gain” project has taken seriously “the role of students’ engagement with their learning” and has also explored “the influence of students’ backgrounds on their learning outcomes”:

“This activity is particularly important in the context of the extension of access agreements to include successful participation in the duties of the Office for Students and the context-specific student outcome measures used in the TEF.”

The arrival of learning analytics in HE has not been uncontroversial. Tracking students’ activity for their own good clearly raises ethical questions. Hefce suggests that:

“…this is enabling institutions to draw on increasingly sophisticated student data (such as, for example, real-time data on engagement and granular information about learning outcomes), to inform learning and teaching. To maintain an up-to-date understanding of potential connections between learning analytics and learning gain, we are liaising with Jisc, which is particularly active in this area of work.”

The lack of joined-up thinking with work in progress part 2: teacher “qualification” in higher education

There is as yet no suggestion that teaching should be observed or particular methods stipulated in the subject-level TEF. The Technical document (Consultation) recognises that it is “the right of providers to decide how teaching should be carried out” and promises that government “is not beginning with a view on whether certain types of teaching methods are better than others”.

On the other hand, considerable effort has been put into encouraging teachers in higher education to gain an appropriate qualification. The Institute for Teaching and Learning in Higher Education set up in the wake of the Dearing Report had a short career. It was replaced by the Higher Education Academy (HEA), which is now to be amalgamated with two other sector bodies into a new entity to be called AdvanceHE.

Its chief executive, Alison Johns, who comes to the post from a career at HEFCE and most recently from the role of chief executive of the Leadership Foundation for Higher Education, told Times Higher Education that she envisages its having a role complementary to that of the OfS:

“Having a friendly, supportive organisation as a counterweight to the tougher OfS oversight will be crucial for the health of UK higher education, Ms Johns explained, as it ‘provides confidence to students, government, banks and all sorts of stakeholders that institutions can run themselves effectively’”.

She acknowledged that “the fate of the HEA’s work” is “likely to generate the most discussion”. It has just reached 100,000 teaching fellows:

“That is one hell of a community of professional practitioners and it demonstrates that individuals and universities care about teaching,” she said.

Nevertheless there is clearly uncertainty about the future of the HEA and therefore of the long-term value of an HEA “teaching fellowship”.

Don’t encourage students to rely on this dubious measure

So neither any basis for measuring the “intensity” of teaching nor any way of checking what the student does, by way of “learning”, with the teaching provided on the course “bought” seems to be factored into the planning for the subject-level TEF. When it is recognised that, with a casualised academic workforce, one year’s rating may give no reliable indication of what might be expected from the same course in successive years, subject-based TEF ratings seem likely to remain a measure of dubious value to the students who are being encouraged to rely on them in choosing a course.