- It is impossible for reviewers to read through hundreds of applications when considering a scientist for a job or a grant.
- This is why Nobel laureate Randy Schekman said in 2020, “Every researcher should write an impact statement describing their key discoveries”.
- Research institutions and universities worldwide are slowly moving away from using only quantitative indicators of research performance to a qualitative approach.
- National funding agencies and some premier institutions in India have had qualitative, peer-review-based research assessment processes for some time now.
- But at the first stage, screening for grants and positions is based wholly on quantitative metrics, and only shortlisted applications undergo qualitative peer review.
- The way we evaluate the quality of our research shapes our national research culture.
Every research paper describing a study or assessment stands on the shoulders of years of specialised study and practice. An untrained reader will be unable to understand the contents of these papers, as will most scientists not working in that specialisation.
In 2020, Randy Schekman, who had won the Nobel Prize in medicine seven years earlier, said “every researcher should write an impact statement describing their key discoveries”. He pointed out that it is impossible for reviewers to read through hundreds of applications when considering a scientist for a job or a grant, and that such a statement could spare them the task of poring over a mountain of information.
Such suggestions stem from the need for better research assessment practices and, indeed, a better research culture. Peter Doherty, another Nobel laureate, has said that “impact factors are skewing science, causing journal editors to select papers based on what is going to be popular”.
The journal impact factor is, roughly, the average number of citations that papers a journal published over the preceding two years receive in a given year. Journals with higher impact factors are considered more ‘prestigious’, and journals market themselves that way. But on the downside, they tend to favour sensational papers in order to keep their score up, instead of publishing all legitimate scientific results.
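In symbols – a minimal sketch, using the two-year window of the most common version of the metric – the impact factor of a journal in year $Y$ is:

$$\mathrm{IF}_Y = \frac{\text{citations received in } Y \text{ by items the journal published in } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in } Y-1 \text{ and } Y-2}$$

So, for example, a journal that published 100 papers across 2018 and 2019 that together drew 400 citations in 2020 would have a 2020 impact factor of 4. Because the score is an average over a pool of papers, a handful of heavily cited, attention-grabbing papers can lift it for the whole journal – which is what creates the editorial incentive described above.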
In fact, in response to these issues, research institutions and universities worldwide are slowly moving away from using only quantitative indicators of research performance to a qualitative, peer-review-based approach.
What is the situation in India?
National funding agencies and some premier institutions in India have had qualitative, peer-review-based research assessment processes for some time now. But at the first stage, screening for grants and positions is based wholly on quantitative metrics, and only shortlisted applications undergo qualitative peer review.
But one question remains: has this assessment system encouraged more socially impactful research? Is our national innovation system on a par with those of the most innovative nations? Have we become more accountable on our sustainable development goals (SDGs) and interrelated national priorities?
This, it appears, is where the rub lies.
First, the context. India spends about 0.7% of its GDP on R&D every year, and its gross expenditure on R&D (GERD) has been increasing consistently. But unlike in developed economies, the public sector accounts for a high share – more than 60% – of GERD.
In absolute terms, the spending was estimated at Rs 1.23 lakh crore in 2018-2019. Five major scientific agencies –
- Defence R&D Organisation (31.6%),
- Department of Space (19%),
- Indian Council of Agricultural Research (ICAR, 11.1%),
- Department of Atomic Energy (10.8%), and
- Council of Scientific & Industrial Research (9.5%)
– accounted for 82.2% of the total R&D expenditure by the Union government in 2017-2018.
To compare, OECD countries spend around 2.47% of their GDP on R&D, with Israel (4.93%) and South Korea (4.64%) at the higher end – and with significant contributions from the private sector. While R&D spending, especially private-sector contribution, has a lot to do with financial support and incentives, tax laws and other related policies, the kind of research that is funded and encouraged also has to do with research culture and assessment practices.
And when it comes to culture and assessment, the question that invariably comes up is: How does the way we evaluate the quality of our research shape the national research culture?
According to Goodhart’s law, when a metric becomes a target, it ceases to be a good metric. This is because actors will start adjusting their processes to maintain a score – the way many journals do vis-à-vis the impact factor. But at the same time, several discussions, interviews and workshops involving early- and mid-career researchers have made the same point: there is hardly a better measure to understand or assess the impact of science than publication and/or patent count.
Ergo, scientists who publish more papers are considered to be more successful. But this keeps the door open for them to publish low-quality papers just to inflate their publication count. Similarly, outreach is counted only in numbers (e.g. the number of workshops conducted), and only in institutions where it is already mandated (e.g. at ICAR).
Contributions to the SDGs are yet to find their way into our metrics – even as grant approvals and allocations have favoured ‘national priorities’. Mentorship, active engagement with communities and other work beyond the lab that results in innovation and accelerates the impact of science on society are entirely missing.
But arguably the most important thing our evaluation system is missing is introspection.
Countries around the world have been rethinking their assessment practices to include research impact. Australia’s political commitment to backing innovation in 2001 resulted in a working group in 2006 that “recommended assessment should rely on evidence-based impact statements containing both qualitative and quantitative information … instead of an indicator approach”.
Following a call by President Xi Jinping in 2016 for a more comprehensive evaluation system, China unveiled two policies in 2020, from the Ministry of Science and Technology and the Ministry of Education, that mirrored initiatives like the Leiden Manifesto – a set of principles “to guide research evaluation” – and the EU policy for ‘Responsible Research and Innovation’. These policies recommended a strong focus on local impact instead of indicator-based measures.
Research communities in the UK have also been advocating research evaluation reform since 1993. In 2014, the UK created the Research Excellence Framework (REF), one of the world’s most comprehensive research assessment frameworks. It measures impact using case studies submitted by researchers, each describing their work and its economic, social and policy impact.
The 2014 exercise cost GBP 246 million (around Rs 2,344 crore) to execute. The 2021 exercise considered 185,000 pieces of research from more than 76,000 researchers.
India doesn’t undertake such an exercise. To be clear, such exercises have limitations – they are expensive and bureaucratic – that need to be addressed, but they are required to make sure we properly justify our substantial and open-ended R&D expenditure. And given the complex nature of the global and local challenges that these studies are funded to address, there is also a need to assess their efficiency.
Ultimately, such exercises are essential to balance the continued drive for excellence with a healthy and rewarding research culture. We hope that India’s science leaders recognise the need to reassess the assessment system and fund research that matters.
Suchiradipta Bhattacharjee is a DST-STI Policy Fellow at the DST-Centre for Policy Research, IIT Delhi. Moumita Koley is a DST-STI Policy Fellow at the DST-Centre for Policy Research, Indian Institute of Science, Bengaluru.
They are working on a project, funded by DORA, entitled ‘Exploring the Current Practices in Research Assessment within Indian Academia’.