Waiting for the THE world rankings



The world, having recovered from the shocks of the Shanghai, QS and RUR rankings, now waits for the THE world rankings, especially the research impact indicator measured by field normalised citations.

It might be helpful to show the top 5 universities for this criterion since 2010-11.

2010-11
1. Caltech
2. MIT
3. Princeton
4. Alexandria University
5. UC Santa Cruz

2011-12
1. Princeton
2. MIT
3. Caltech
4. UC Santa Barbara
5. Rice University

2012-13
1. Rice University
2. National Research Nuclear University MEPhI
3. MIT
4. UC Santa Cruz
5. Princeton

2013-14
1. MIT
2. Tokyo Metropolitan University
3. Rice University
4. UC Santa Cruz
5. Caltech

2014-15
1. MIT
2. UC Santa Cruz
3. Tokyo Metropolitan University
4. Rice University
5. Caltech

2015-16
1. St George's, University of London
2. Stanford University
3. UC Santa Cruz
4. Caltech
5. Harvard

Notice that no university has been in the top five for citations in every year.

Last year THE introduced some changes to this indicator, one of which was to exclude papers with more than 1,000 authors from the citation count. This, along with a dilution of the regional modification that gave a bonus to universities in low-scoring countries, had a devastating effect on some universities in France, Korea, Japan, Morocco, Chile and Turkey.

The citations indicator has always been an embarrassment to THE, throwing up a series of improbable front runners, aka previously undiscovered pockets of excellence. Last year they introduced some reforms, but not enough. It would be a good idea for THE to get rid of the regional modification altogether, to introduce full-scale fractional counting, to reduce the weighting assigned to citations, to exclude self-citations and secondary affiliations, and to include more than one measure of research impact and research quality.

Excluding the papers, mainly in particle physics, with 1,000-plus "authors" meant avoiding the bizarre situation in which a contributor to a single paper with 2,000 authors and 2,000 citations would get the same credit as 1,000 authors who between them wrote a thousand papers, each of which had been cited twice.

But this measure also meant that some of the most significant scientific activity of the century would not be counted in the rankings. The best solution would have been fractional counting, distributing the citations among all of the institutions or contributors, and in fact THE did this for their pilot African rankings at the University of Johannesburg.
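To make the difference concrete, here is a minimal sketch, in Python, of whole counting (every contributing institution credited with a paper's full citation count) versus fractional counting (the citations shared out among contributors). The figures and function names are invented purely for illustration and are not THE's actual methodology.

# Illustrative sketch only: invented numbers, not THE's actual method.

def whole_count(citations, n_institutions):
    # Each contributing institution is credited with the paper's full citation count.
    return {f"institution_{i}": citations for i in range(n_institutions)}

def fractional_count(citations, n_institutions):
    # The paper's citations are shared equally among contributing institutions.
    return {f"institution_{i}": citations / n_institutions for i in range(n_institutions)}

# A hypothetical hyper-paper: 2,000 citations, 100 contributing institutions.
print(whole_count(2000, 100)["institution_0"])       # 2000 credited to each institution
print(fractional_count(2000, 100)["institution_0"])  # 20.0 credited to each institution

Under fractional counting the hyper-paper still counts, but no single contributing institution gets an outsized boost from it.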

Now, THE have announced a change for this year's rankings. According to their data chief, Duncan Ross:

" Last year we excluded a small number of papers with more than 1,000 authors. I won’t rehearse the arguments for their exclusion here, but we said at the time that we would try to identify a way to re-include them that would prevent the distorting effect that they had on the overall metric for a few universities.


This year they are included – although they will be treated differently from other papers. Every university with researchers who author a kilo-author paper will receive at least 5 per cent credit for the paper – rising proportionally to the number of authors that the university has.
This is the first time that we have used a proportional measure in our citations score, and we will be monitoring it with interest.

We’re also pleased that this year the calculation of the Times Higher Education World University Rankings has been subject to independent audit by professional services firm PricewaterhouseCoopers (PwC)."

This could have perverse consequences. If an institution has one contributor to a 1,000-author paper with 2,000 citations, then that author will get the full 2,000 citations for the university. But if there are 1,001 authors, he or she would get only 100 citations (5 per cent of 2,000).
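To see how sharp that cliff edge is, here is a minimal sketch of the rule as Ross describes it: proportional credit with a floor of 5 per cent, applied only to papers with more than 1,000 authors. The exact formula and threshold handling are my reading of the announcement, not THE's published methodology.

KILO_AUTHOR_THRESHOLD = 1000  # papers with more authors than this get the special treatment

def credited_citations(citations, total_authors, university_authors):
    if total_authors <= KILO_AUTHOR_THRESHOLD:
        # Ordinary paper: the full citation count is credited, as before.
        return citations
    # Kilo-author paper: credit is proportional to the university's share
    # of the author list, but never falls below 5 per cent.
    share = university_authors / total_authors
    return citations * max(share, 0.05)

# One author on a 1,000-author paper with 2,000 citations: full credit.
print(credited_citations(2000, 1000, 1))  # 2000
# One author on a 1,001-author paper with the same citations: the 5 per cent floor applies.
print(credited_citations(2000, 1001, 1))  # 100.0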

It is possible that we will see a cluster of papers with 998, 999 or 1,000 authors as institutions remove their researchers from author lists or project leaders start capping the number of contributors.

This could be a way of finding out whether research-intensive universities really do care about the THE rankings.

Similarly, QS now excludes papers with more than ten contributing institutions. If researchers are concerned about the QS rankings, they will ensure that the number of institutions does not go above ten. Let's see whether we start getting large numbers of papers with exactly ten institutions but none or few with 11, 12, 13 and so on.

I am wondering why THE would bother introducing this relatively small change. Wouldn't it make more sense to introduce a lot of small changes all at once and get the resulting volatility over and done with?

I wonder if this has something to do with the THE World Academic Summit being held at Berkeley on 26-28 September in cooperation with UC Berkeley. Last year Berkeley fell from 8th to 13th in the THE world rankings. Since it is a contributor to several of these multi-author papers, it is possible that the partial re-inclusion of hyper-papers will help the university back into the top ten.


