Monday, March 9, 2026

Counting scientists’ productivity with numbers undermines science | Explained



Scientists at Stanford University recently ranked the 'top' 2% of scientists in a variety of fields. The ranking contained an up-to-date list of the most highly cited scientists in these disciplines. That is, the list consists of the top 100,000 scientists based on an aggregate of numerical indicators, equal in this case to those scientists whose papers' citation counts lie in the top 2% in each field.

The presence of Indian scientists on this list has garnered substantial public attention, accompanied by institutional press releases, news features, and award citations.

Given this fanfare, it is important that we understand what the top 2% ranking system actually measures and how well the measure correlates with real-world scientific achievement.

A combination of numbers

The 2% ranking system is based on standardised citation metrics across all scientific disciplines. In the scientific research setting, a citation is a reference to a piece of information, usually an already published entity like an article, book, or a paper in a journal. For a scientist, being cited means that some scientific publication they have authored has served as a reference, foundation, or source for some part of subsequent research in the field.

Using scientific publication data from the Scopus database of published papers, maintained by the publisher Elsevier, the 2% ranking system uses a composite citation index based on six citation indicators. These are: (i) the total number of citations for a scientist's papers; (ii) the total number of citations for papers where the scientist is the sole author (i.e. no co-authors); (iii) the total number of citations for papers where the scientist is the sole or the first author; (iv) the total number of citations for papers where the scientist is the sole, first, or last author; (v) the number of papers for which the scientist has been cited at least the same number of times (the h-index); and (vi) the number of citations per author for all the authors of a paper.
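Of these six indicators, the h-index is the most widely used. A minimal sketch of how it is computed, with invented citation counts for illustration (not drawn from any real scientist's record):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers, each cited at least h times."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# four papers have each been cited at least four times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that a single blockbuster paper with thousands of citations raises the h-index by only one, which is precisely why ranking systems combine it with the other five indicators.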

By combining the values of these indicators, the 2% ranking system assesses scientists' citation impact in a single calendar year as well as throughout their careers.

Change of character

In this way, using different scientometric indicators, many scientists have tried to numerically quantify their peers' scientific achievement. For example, the 'AD Scientific Index' measures the productivity coefficient of a scientist using the h-index, the i-10 index (the number of publications with at least 10 citations), and other numbers; the 'h-frac index' tracks the fractional allocation of citations among co-authors; and the 'Author Contribution Scores' computes a continuous score that reflects a scientist's contributions relative to those of other authors over time. There are many others.
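To illustrate the fractional-allocation idea behind indices like h-frac, here is a sketch under a simplifying assumption: each paper's citations are divided equally among its co-authors before the usual h-index cutoff is applied. The exact formula used by the published h-frac index may differ, and the values below are invented for illustration.

```python
def fractional_h_index(papers):
    """h-index computed on fractionally allocated citations:
    each paper's citation count is split equally among its
    authors before the usual h-index threshold is applied."""
    shares = sorted((cites / n_authors for cites, n_authors in papers),
                    reverse=True)
    h = 0
    for rank, share in enumerate(shares, start=1):
        if share >= rank:
            h = rank
    return h

# (citations, number_of_authors) per paper -- invented values.
# The raw h-index here would be 4, but once each paper's citations
# are split among 10 co-authors, only 2 papers clear the cutoff.
papers = [(100, 10), (30, 10), (20, 10), (9, 10)]
print(fractional_h_index(papers))  # 2
```

The contrast shows why fractional schemes penalise long author lists: the same citation totals produce a much lower score when every paper is heavily co-authored.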

Numerical measures were originally developed to track research productivity. But they have since backfired, and today they increasingly influence decisions related to hiring, funding, promotions, awards, recognitions, and professional advancement. This is alarming because of fundamental concerns about using rankings and metrics as a perfect measure of scientific productivity, and because of the effect of quantitative indicators on academic practice at large.

Numbers aren't everything

First, ranking systems and indices rely almost entirely on quantitative data derived from citation profiles. They don't, and can't, evaluate the quality or impact of a piece of scientific work. For example, a scientist with 3,500 citations in biology may have accrued half of them from 'review' articles, which are articles that survey other published work instead of reporting original research. As a result, this person could rank among the top 2% of scientists by citations. On the other hand, a scientist with 600 citations, all from original research, would be out of the top 2%.

For another example, a biotechnologist with 700 citations from 28 papers published in 1971-2015 and a scientist with a similar number of citations from 33 papers published in 2004-2020 may both be in the top 2%. The latter has an advantage: the rapid growth of, and digital access to, academic publishing has resulted in citation inflation over time.

Hidden incentives

Second, citation metrics don't allow us to extrapolate between fields or account for specific aspects of research in sub-fields. For example, in microbiology, the organism one is studying determines the timeline of a study. So a scientist working with, say, a bacterial species that is difficult to grow (like the one that causes tuberculosis) would appear to be less productive than a scientist working with fast-turnaround technologies like computational modelling.

Third, the overvaluation of the numbers of publications and citations, the position of authorship (sole, first, or last), etc. breeds unethical scientific practices. That is, such a system incentivises scientists to inflate their citation counts by citing themselves, paying others to cite their work, and competing for author positions. Correcting these indices by accounting for shared authorship also devalues some fundamental tenets of scientific work and undermines the idea that research may take time to have an impact.

‘Extracurricular’ pressures

Fourth, rankings and metrics restrict the definition of a 'top' scientist to someone who has merely been productive in research. This doesn't account for scientists' other responsibilities, such as teaching, mentoring, community service, administration, and outreach. In fact, quantifying scientific productivity based solely on research citations may effectively penalise those who are engaged in some or all of these other activities, which are equally important for science to benefit society.

Academic publishing houses and institutional ranking systems also take advantage of the influence that quantitative indicators have on scientists' careers and recognition, using it to pressure scientists to 'publish or perish' and to promote 'high impact-factor journals' that supposedly receive more citations. Scientific institutes also resort to quantitative indicators when evaluating scientists for raises and promotions.

But beyond the complicated, oft-flawed systems developed to assess research productivity, the best way to evaluate scientists' work remains relatively simple: read the science.


Karishma Kaushik is the Executive Director of IndiaBioscience.


