At CGHE’s launch, I spoke about the high-profile issue of university rankings and the extent to which current ranking systems are responding to changing trends in science and higher education, especially in relation to research.
The key question on the table: are rankings catching up, keeping up or falling behind in capturing those trends? It depends. Progress has been made with regard to measuring research performance (and new indicators are on the way), but in other areas the prospects seem less bright.
Rankings are interesting and valuable tools for some purposes (such as marketing and promotion), but they are also crude reflections of reality. Reducing organisational complexities to a ‘number’ distorts and misrepresents the intricacies of a university’s performance and specialisation.
Despite major attempts to improve the scope and quality of ranking tools, many key features of university performance are still ignored. Teaching quality and a university’s regional economic impact are just two examples.
21st century developments in science
High-quality databases, coupled with more than 25 years of bibliometric and scientometric work, have resulted in many measurable university performance indicators – for example, impact measures based on citation figures. However, a number of pervasive global trends in 21st century science need to be understood when asking the question: how can we improve the way we rank universities?
The shift towards application-oriented research
Results from a study of the contents of millions of research publications suggest that there has been a shift towards application-oriented research. Scientific research is increasingly focused on medicine and hospitals.
Over the past 10 years the medical and life sciences have become more important in global science, accounting for a growing share of worldwide research articles. Their growth is outpacing that of university-oriented research, which has not shown the same dramatic rise. Ranking systems need to recognise this trend.
The shift towards multi-disciplinarity
Our expanding global science system has created more diverse and complex disciplinary relationships. There are more opportunities than ever for cross-disciplinary work and cross-boundary cooperation. Inter-disciplinarity and multi-disciplinarity are on the rise. This is reflected in citation patterns, which reveal the dispersal of knowledge flows and scientific impact.
To illustrate: a research article published in 2000 was cited, on average, by publications from 3.6 different disciplines over the following three years. That number rose steadily over the next decade, reaching 4.3 by 2010.
It is clear that newly created scientific knowledge is applied across a widening range of research areas. Yet how can university ranking systems, in which disciplines are regarded as separate from one another, reflect this diversity?
The shift towards multiple affiliations
A similar ‘widening’ is apparent when you look at researchers’ affiliations. A growing number of publications involve authors with more than one affiliation. Indeed, between 2009 and 2015 the share of publications with at least one ‘multiple affiliate’ author rose rapidly from around a quarter to a third – from 24% to 33%.
Furthermore, a sizable share of those authors are medical researchers with an affiliation to a hospital or clinic outside the university itself.
Heading in the right direction?
For university ranking systems to improve, we clearly need more, and better, databases, as well as metrics and performance indicators that are universally accepted as valid.
Ranking systems need to keep up with stakeholder expectations and the dynamics of change currently sweeping through global higher education systems. There is a limited supply of reliable data yet a growing demand for high-quality information.
Now is the time to develop university rankings into credible and sophisticated information tools that are sufficiently future-proof.
This blog was also published in University World News