
Frailties of University Ranking

World rankings of universities are fraught with frailties. They serve no real purpose but to trigger a mindless rat race, and their efficacy in signalling excellence is suspect as well.

Fads are concerned less with comfort or utility than with looks and appearances, and this is no longer limited to the fashion industry alone. 

Universities, probably the last bastion to buck the trend, have been pushed out of their ivory towers and drawn into the rat race for rankings. 

They are made to believe that their survival, growth, funding, faculty and students now depend upon their ability to make it to the highest echelons of the league tables.  

Barring a few exceptions, most ranking agencies fan the fad, for it serves a substantial commercial interest. Most governments and higher education regulators have lapped up the idea, for it provides them with a tool to tame universities.  

Governmental and peer pressures notwithstanding, no more than 10% of higher education institutions presently participate in global rankings. 

The most coveted and widely used Academic Ranking of World Universities (ARWU), for example, had 2,500 universities participating in it. 

The QS World University Rankings too had 2,500 participants in 2023, whereas the Times Higher Education (THE) ranking had only 1,600 in 2022. 

Taken as a whole, at least 28,000 of the 31,000 universities in the world do not participate in the global ranking process. They are happily reconciled to the fact that they lack the lustre to compete for the luxury that ranking offers. 

They have also realised that even without ranking, on the strength of their accreditation alone, they attract the students and faculty they need. They are, thus, self-assured of their relevance without seeking third-party certification of their relative prowess in performance. 

To them, ranking serves no real purpose; it is just a euphemism for being elite. After all, the top 100 higher educational institutions educate only a minuscule proportion of global higher education enrolment.  

Thus, no more than 3,000 higher educational institutions in the world are trapped in the ranking rat race. Ranking enthusiasts call them aspirational. 

They commit a lot of time and resources to getting included in the league tables. Some unscrupulous ones may, in fact, fudge or manipulate their data to secure a higher rank. 

Consultants with little exposure to teaching and research have sprung up in large numbers to help universities realise their aspirations. Many universities fall for them even though they charge hefty fees.

They are engaged not only for their competence in steering the ranking process, but also for their contacts at appropriate levels and places. 

Commercial and unethical aspects aside, rankings suffer from some inherent deficiencies. The choice of agency, for example, could make a university rank higher or lower. Examples abound. 

It is commonplace to find that universities ranked amongst the top 1,000 by THE find no mention in QS or ARWU. There are also cases where a university is ranked by a world-ranking agency even though it could not make it into the national ranking.  

Significant variations in the ranks assigned to a university by different agencies are also very common. The Indian Institute of Science (IISc), Bangalore, for example, is ranked 155th in the world by QS, whereas THE places it in the 301-350 band.

This could simply be because the three ranking agencies use different parameters, data sources and methodologies. However, since they all claim to measure quality and excellence, such wide variations reflect adversely on the consistency, reliability and validity of the rankings. 

They evidently err in measuring what they claim to measure and report. Policy planners who promote the participation of universities in rankings, or base critical decisions on the ranks conferred by different agencies, ought to be deeply concerned. Sadly, they seem oblivious to these frailties of higher education ranking.  

Interagency variations apart, exogenous and extraneous factors too seem to influence the ranking of a university. Quite often, a university's geographic location and the demography of its country play a more critical role than its policies for promoting excellence. 

A university located in a populous country with a high unemployment rate, like India, is at a huge disadvantage compared to one in a country such as the United Arab Emirates (UAE), which is sparsely populated, has near-full employment and hosts a large expatriate population. 

Universities in the UAE with much lower scores on parameters like citations per faculty and academic reputation were ranked much higher, simply because they had scored a hundred out of a hundred on parameters like international faculty and international student ratios. This obviously jacked up their overall scores and, consequently, their rankings. 

Indian universities, on the other hand, with significantly higher scores on citations and academic reputation, were ranked much lower, ostensibly because they scored abysmally low on international faculty and international student ratios. 
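The arithmetic behind such reversals is easy to illustrate. The sketch below is a hypothetical example with made-up weights and scores, not any agency's actual methodology: it shows how, in a weighted-sum ranking, perfect internationalisation scores can outweigh a clear lead on research parameters.

```python
# Illustrative only: hypothetical weights and scores for two fictional
# universities, not the methodology of QS, THE or ARWU.

# Assumed parameter weights (sum to 1.0)
WEIGHTS = {
    "academic_reputation": 0.40,
    "citations_per_faculty": 0.30,
    "international_faculty": 0.15,
    "international_students": 0.15,
}

# Assumed parameter scores on a 0-100 scale
universities = {
    "University A (strong research, few internationals)": {
        "academic_reputation": 70,
        "citations_per_faculty": 80,
        "international_faculty": 5,
        "international_students": 5,
    },
    "University B (modest research, fully international)": {
        "academic_reputation": 50,
        "citations_per_faculty": 55,
        "international_faculty": 100,
        "international_students": 100,
    },
}

# Overall score is the weighted sum of parameter scores
for name, scores in universities.items():
    overall = sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)
    print(f"{name}: overall score = {overall:.1f}")
```

With these illustrative numbers, the fictional University B finishes at 66.5 against University A's 53.5, despite trailing on both academic reputation and citations, mirroring the pattern described above.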

UAE universities had an inherent edge due to the demographic characteristics of their country. With 89 per cent of its population comprising expatriates of 200 different nationalities, the UAE is an extremely diverse place to work and live in. This diversity is, undoubtedly, reflected in their faculty and student populations. 

International faculty and students are important indicators of diversity. They can, however, be swayed by the location and context in which an institution exists and operates. These are but a few examples that point to the flaws in the ranking system. 

Ranking costs money and time. It takes a toll on precious faculty time, which could otherwise be used to improve teaching and research. University administrations spend innumerable hours on ranking-related activities rather than focusing their attention on improving quality and promoting excellence. 

But isn’t it time to ask why being excellent, the best, very good or good, as indicated by accreditation, is not sufficient? Why must we insist on knowing how an institution compares with the rest? Why should we go through the ranking rituals annually? Wouldn’t a quinquennial cycle serve the purpose better?

Furqan Qamar, former Adviser for Education in the Planning Commission, is a Professor of Management at Jamia Millia Islamia, New Delhi. 
