Stanford University’s list of top scientists must not be misused to win perception battles, says JNU Professor

Binay Panda, professor at the School of Biotechnology, JNU, pointed out that such a ‘ranking doesn’t serve us well’ and that citations are often ‘gamed’ for better rankings.

Binay Panda during one of his presentations (Image: YouTube/Nancy J Kelley & Associates, enhanced with AI)

Atul Krishna | September 25, 2024 | 05:54 PM IST

NEW DELHI: On September 16, researchers at Stanford University released the latest edition of the list of the top 2% most-cited researchers in the world. The annual list ranks researchers largely by their overall citation counts.

However, despite the popularity of this list and its widespread coverage, researchers have cautioned against taking these rankings at face value. There are also concerns that the citation metrics give too little weight to locally-relevant research compared with work that can be applied widely. Moreover, the citation counts themselves can be gamed, and because news articles and commentaries in reputed journals also count towards the rankings, the results can be skewed.

Binay Panda, professor at the School of Biotechnology, Jawaharlal Nehru University, speaks to Careers360 about the limitations of these rankings, the perception factor, and how institutions can use these rankings to advance a false narrative about their quality.

Edited excerpts below.

Q. What is your reservation about this ranking?

The people who made this ranking are from Stanford University, but they used data from Elsevier's Scopus database. The problem is that the ranking is taken at face value.

For example, in some Indian airports, you see, written in letters large enough to read from a distance, "the best airport in the world". But right below, in small, barely visible font, will be "for annual traffic between X and Y million". Similarly, you can get something good out of this database if you know how to read and interpret those small letters. The devil is in the details.

Look at the number of universities in India posting on social media, or quoted in newspaper articles, about having X number of researchers in the ranking, without explaining the details. You might not have even heard of some of those universities, let alone the researchers working there. Institutions that are just five years old are there, institutions that I have never heard of are there, and at the same time, many well-known scientists working in established institutions are also there.

The issue is the complete misuse of the database by some of the institutions and scientists.


The most recent database has people at the extremes. For example, it has someone who has published two papers in 91 years; it also has someone who has published 4,084 papers in 32 years, which averages to about 128 papers every year for 32 years. The database also has people with publication careers spanning anywhere from two years to 187 years. That is an insane spread. When you see something like that, you know that there is something seriously wrong in those cases. And that is my point: people like me who would like to see the raw data – and I am sure all scientists would – can immediately make out that the way the database is used is plain wrong. That is my primary objection. Most media articles don't do the extra work or talk to experts before publishing sensational stories.
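
For readers who want to attempt this kind of raw-data sanity check themselves, here is a minimal sketch in Python. The file name and the column names ('author', 'total_papers', 'first_year', 'last_year') are hypothetical stand-ins; the actual Scopus-derived spreadsheet uses different labels.

# Minimal sketch of the raw-data sanity checks described above.
# The file name and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("top2pct.csv")

# Career span in years, inclusive of first and last publication year
df["career_years"] = df["last_year"] - df["first_year"] + 1
df["papers_per_year"] = df["total_papers"] / df["career_years"]

# Flag implausible records: e.g. over 50 papers a year on average,
# or a publishing career longer than a human working life
outliers = df[(df["papers_per_year"] > 50) | (df["career_years"] > 70)]
print(outliers[["author", "total_papers", "career_years", "papers_per_year"]])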

Then there is retraction. Although the current dataset reports the number of papers that have been retracted, no one talks about that. How many news articles have you read on Stanford's top 2% list that talk about retractions? None, at least none in India that I have read. Now, that is a problem. For example, some scientists at the top of the list have between 100 and nearly 210 retractions. Additionally, the ranking counts not just research papers but also commentaries, news articles, and everything else.

It is not just the Stanford ranking; for that matter, any ranking, particularly for a place like India, must not be used this way, because the ranking is mired in inconsistencies. The problem is not the database; it is the people looking at it and how they use, misuse, and abuse it to create selective perceptions.

Q. Are there a lot of institutions relying on this data?

Look at the institutions that are advertising this, and look at the individuals who are posting and boasting about this.

Right now, we have a responsibility towards our future generations, institutions, and country to be factual and not boastful. We live in an era where we rely a lot on marketing, not substance, and solid work gets hidden behind this perception. So far, fortunately, Stanford's top 2% ranking has not been taken into consideration for research grants. But I don't know; maybe someone is contemplating doing exactly that.

Just because someone is not in the top 2% list doesn't mean he or she is a bad scientist. My point is that people have to take a call by looking at the actual data, looking at the work that has been done, and reading the research articles.


Q. Is the primary issue about quantity and perception?

As I said before, my main objection is that many use this to win a perception battle. The database is extensive and has many different parameters. Any time someone writes about the top 2%, there are a lot of ifs and buts, and one can slice and re-slice the data to present it the way one wants. It requires expert curation and understanding to know the numbers and what they represent. I have both a moral and a practical objection: this ranking doesn't serve us well. You might have to play the perception battle sometimes, but right now we need to do solid work in the country, and playing with perception can wait.

Q. Is there a case where Indian researchers are overlooked in some metrics?

Scientists from the developing world tend to overuse and misuse the ranking more, and there is a reason for that. What you are flagging is obviously an issue. This list checks what is in the database, computes some numbers based on specific parameters in that database, and puts the data out. Due to the rankings' perceived importance, some people game these citations, for instance through self-citation, to get a better ranking. This happens all the time. We should not obsess over ranking because it is not an important parameter of success. This doesn't serve us well.

Q. Are there any other ways to game the list?

I just discussed self-citation, but there are others. If you look at the parameters considered in the ranking criteria, you will know what I am talking about. However, let's be very clear: the Stanford top 2% list has nothing to do with that. As I said, it takes the numbers from the Scopus database, and whatever gaming there is has already happened before that.
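
One concrete way to see the self-citation issue: the public dataset reports citation counts both including and excluding self-citations, so anyone can compute what share of an author's citations come from their own papers. A small illustrative sketch, again with hypothetical file and column names:

import pandas as pd

df = pd.read_csv("top2pct.csv")  # hypothetical file name

# 'citations_all' and 'citations_no_self' are hypothetical stand-ins for
# the dataset's citation counts with and without self-citations
df["self_cite_share"] = 1 - df["citations_no_self"] / df["citations_all"]

# Authors whose totals lean unusually heavily on citing their own work
heavy = df[df["self_cite_share"] > 0.4].sort_values(
    "self_cite_share", ascending=False)
print(heavy[["author", "citations_all", "self_cite_share"]])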


Q. The h-index, one of the metrics used, requires you to publish in reputed journals. Could a bias exist there against Indian authors?

Of course, there is bias. My point is that the bias predates the database that the Stanford top 2% list draws on. Whatever bias has happened has already happened, and it has carried over into the database. That is part of a bigger topic of bringing equality to scientific information. There are efforts being made to ensure that data is more open and that people get recognition, but we don't live in a perfect world.
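
For context, the h-index the question refers to is defined as the largest number h such that an author has h papers with at least h citations each. A minimal illustration:

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations each
print(h_index([10, 8, 5, 3, 0]))  # prints 3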

Q. What do you think about research funding in India?

We spend too little compared to developed countries in Europe, the USA and Japan. Even compared to other developing countries, like China, we spend very little on our R&D. China spends nearly 2.8% of its GDP on R&D, while we spend less than 0.7% of ours. That is a four-fold difference in intensity, and since China's GDP is about six times ours, in absolute terms China spends about 24 times more on R&D than India. China is now in a different league.
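
A quick check of the arithmetic, using the approximate figures cited above:

# Rough arithmetic check using the approximate figures cited above
china_share, india_share = 0.028, 0.007  # R&D as a share of GDP
gdp_ratio = 6                            # China's GDP is roughly 6x India's

print((china_share / india_share) * gdp_ratio)  # 4 x 6 = 24.0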

However, there are also limits to China's research capabilities and how far they can reach. China's research is very top-down, which is not a good cultural fit for us, and it is not necessary for us either. India needs to follow the US model, which lets competition push quality naturally. With the limited money and resources that we have for science, we need to focus on excellence. That said, I think the government must spend much more on science and technology, no matter which path we take to boost research.

Follow us for the latest education news on colleges and universities, admission, courses, exams, research, education policies, study abroad and more.

To get in touch, write to us at news@careers360.com.