Do we need a more pragmatic set of expectations for learning gain? Following the recent national learning gain conference, Professor Paul Ashwin sets out some possibilities.
There are great expectations for the work coming out of HEFCE’s funded pilot projects on learning gain. In some quarters there may be the rather extravagant hope they will lead to the development of metrics that can give simple and uncontroversial measurements of teaching quality, thus underpinning the judgements made through the Teaching Excellence and Student Outcomes Framework.
These expectations are based on two pervasive ‘measurement myths’ that surround learning gain and similar attempts to capture the quality of teaching and learning. They risk exaggerating what learning gain can offer. If unchallenged, such expectations could lead to disillusionment with the unrealised potential of learning gain.
In this piece, I examine these myths and offer a pragmatic sense of the important benefits that learning gain can offer. Underlying this argument is the understanding that any form of measurement is expensive and, if it does not lead to enhanced teaching and learning practices, then it is essentially a waste of money.
The first is the myth of big data. This is based on the belief that if we can get enough points of measurement across students’ experiences of higher education, then we can combine these and come up with a precise measure of the quality of that experience. If we can combine measures of students’ skills, competencies, content knowledge and personal development, then we will know the quality of their learning experience.
The problem with this way of thinking is that it misunderstands the kinds of measures we have at our disposal. It treats skills and competencies as if they are precise ways of measuring students’ gains from education. In reality, they have the precision of a sledgehammer. For example, where do skills end and competencies begin? What is the difference between developing personally and gaining new knowledge?
These measures overlap in a myriad of ways because they offer different descriptions of the same educational processes, rather than separate aspects of an educational experience. The level of overlap means that they cannot be combined into a precise account of students’ experiences and any attempt to do so is doomed to failure.
A common response to the failures of the myth of big data is to shift to another, which can be called the myth of the silver bullet. This accepts that there is no meaningful way to combine different measures. Instead, it looks for a single measure that is related to a high-quality outcome, even though it does not capture everything about quality. There are two problems with this approach.
The first is that silver bullets ricochet against Goodhart’s Law, which states that once a measure becomes a performance indicator it ceases to be a good measure. Though a factor may have co-varied with quality in the past, the moment it becomes a high-stakes performance measure, institutions will seek to address it, often at the expense of quality more generally. The most likely outcome is that the relationship between the factor and overall quality is lost.
The classic example of this is how institutions have responded to the lower scores they received in relation to assessment and feedback on the National Student Survey. Their response has been to try to emphasise to students when they are receiving feedback. As you read this, there are no doubt universities printing t-shirts for their lecturers proclaiming ‘This is what feedback looks like!’ so that students are more aware of it taking place.
By itself, this will do nothing to improve quality. What universities should be doing is examining the reasons – such as poor curriculum design – that students do not use the feedback they are given to improve their future work. Unless students use feedback in this way, it doesn’t really count as feedback. However, universities’ desire to ‘fix’ their scores on assessment and feedback means that they lose sight of the wider educational problem and instead engage in increasingly strange practices.
The second problem is best illustrated by examining the silver bullet that is most frequently used when trying to compare the quality of higher education across disciplines: generic skills.
Let’s take communication skills as an example. Whilst we can look at communication in different situations and in different locations, and we can identify incidents of effective practice, it does not follow that if I am good at communicating in English, then I will also be good at communicating in Chinese. This is because a skilful act of communication requires linguistic knowledge, knowledge of the situation you are in, and knowledge of the people you are communicating with. Without such knowledge, these skills are useless.
This example shows the central role that knowledge plays in shaping the meaning of what students have gained from their time at university. If we take the knowledge out, then we end up with empty accounts of students’ learning, which tell us little that is meaningful about the quality of their educational experiences.
Does this mean that the HEFCE-funded pilot projects on learning gain are a waste of time and effort? Certainly not. While findings from the projects may not result in the production of performance indicators, they will provide important evidence about the quality of students’ experiences. Though this evidence will inevitably be limited and partial, if it is examined in a thoughtful way it can help academics, students and institutions to find ways of improving the quality of those experiences.
At its best, a focus on learning gain can be central to institutions’ strategies for teaching enhancement by providing rich evidence about what is working and what might be improved. One pragmatic possibility for the use of evidence on learning gain is its inclusion as part of the assessment of teaching excellence. This would involve examining the way that universities use learning gain evidence to enhance their teaching practices, rather than focusing on the measures of learning gain themselves.
This would need to be recognised as difficult, collective, intellectual work, which involves on-going dialogue and experimentation. The major advantage of such an approach is that it would directly lead to improvements in the quality of teaching and learning.
This blog is based on a keynote presentation, entitled ‘Using Learning Gain to Measure Teaching Quality: Extravagant Expectations and Pragmatic Possibilities’ given at the Learning Gain: Critical Explorations event, 28 September 2017.