6 June 2018
by Paul Ashwin

TEF 2018: Can students trust the gold standard?

Improvements in TEF awards from one year to the next are not only remarkable, says Professor Paul Ashwin in Times Higher Education; they also call into question the validity of this exercise as an accurate measure of teaching quality.

The results of the latest measurement of the quality of teaching, learning and student outcomes in mainly English institutions providing higher education were published today under the guise of TEF3 (the teaching excellence and student outcomes framework). While a range of institutions submitted to TEF3, of particular interest are the 20 universities that were assessed under TEF2 last year and reapplied this year. Of these, 13 (65 per cent) had their grade lifted by a category and seven stayed in the same category. It is worth noting that two of these seven stayed at gold and so could not have achieved a better outcome this time around.

While the Office for Students press release trumpets the outcomes as a clear sign of the rigour of the TEF and its firm establishment as a trusted measure of teaching excellence, these outcomes actually raise some difficult questions about how much prospective students can trust the outcomes of the TEF.

This is important because the stated primary purpose of the TEF is to provide students with better information about what and where to study. However, it is unclear how reliable this information is if two thirds of those who reapplied received a different outcome after only a year. It also raises questions about institutions holding their TEF award for three years under the current system, questions that become even more pressing given that the current consultation on the subject-based TEF proposes extending the duration of the award to five or six years.

In making this comment, it is important to be clear that a change in outcome involves a significant change in the overall judgement of the quality of a university’s teaching, learning and student outcomes. In the wording of the TEF awards, moving from silver to gold involves an institution moving from “delivering high-quality teaching, learning and outcomes for its students” to “delivering consistently outstanding teaching, learning and outcomes”. This implies moving from a situation in which most of a university’s degree programmes are of a high standard and lead to good outcomes for its students to a situation in which all that university’s degree programmes are outstanding and its students are achieving outstanding learning outcomes.

That six institutions were able to make this change within a year is remarkable, bordering on the miraculous, when one considers that they received their TEF2 outcome in June 2017 and had to reapply for TEF3 by January 2018. This timescale rules out the possibility that the changes were due to improvements in teaching and learning practices made in response to the last assessment exercise, which leaves three possible explanations, all of which severely undermine the TEF's primary aim of providing prospective students with a valid measure of teaching quality.

First, the institutions concerned may have gained a higher TEF award because they got better at playing the TEF game. They may have written better institutional submissions, which were more convincing to the panel. This would be deeply damaging for the TEF because it would suggest that gaining a gold award is more about game playing than about the actual quality of teaching. There appears to be nothing useful in providing prospective students with information about how good different universities are at writing TEF submissions.

Second, the results in this round were based on a different weighting of metrics, and a different way of using them, than in 2017. It may be that these changes led to changes in outcome for these institutions. This would be problematic because it would call into question all the judgements made in the last round of the TEF and would mean that there is no comparability between TEF golds awarded in different years. This would then require prospective students to check the year of a TEF award in order to understand what that award might mean.

Third, the institutions may have had better metrics than in TEF2. The problem with this is that it would mean that the key to doing well in the TEF is to apply when your metrics look good, rather than the TEF providing a robust assessment of teaching quality in which excellence can be expected to be sustained over an extended period. This will become even more of a problem if TEF awards are held for longer, with institutions making careful judgements about the most prudent time to reapply for their TEF award.

It is important to be clear that none of this suggests that universities engaged in TEF3 in an inappropriate way or that the TEF assessors were not rigorous in their assessment of the evidence. However, all these explanations undermine the claim that the TEF process as a whole provides prospective students with valid information about where to study.

Chris Husbands, the chair of the TEF assessment panel, appears to recognise these problems by claiming that the “TEF is a point-in-time judgement”. However, if the nature of this judgement changes in two thirds of cases from one year to the next, the TEF provides nothing that can usefully inform student choice. This is because by the time a student is studying at the institution that they selected based on its TEF rating, the quality of teaching, as measured by the TEF, may have changed. Indeed, the results from TEF3 suggest that it may have already changed at the point that prospective students are using the TEF results to inform their judgements about where to study.