Representative image: A health official draws a dose of Covishield in Colombo, Sri Lanka, January 29, 2021. Photo: Reuters/Dinuka Liyanawatte/File Photo
Mumbai: On June 15, All India Institute of Medical Sciences (AIIMS), Delhi, started inoculating children with Covaxin as part of Bharat Biotech’s paediatric clinical trial to evaluate the safety, reactogenicity and immunogenicity of Covaxin in 2- to 18-year-old children.
The trial plans to enrol around 525 children across hospitals. This is a very small cohort. For example, Moderna’s paediatric trial aims to recruit 7,050 children aged 6 months to 12 years. It had conducted another trial with 3,732 teenagers aged 12-18 years. Similarly, Pfizer is conducting a paediatric trial with 4,644 children aged 6 months to 12 years; and its trial with adolescents enrolled 2,260 participants aged 12-15 years.
Experts have raised concerns about Bharat Biotech’s trial over its small size, and because it doesn’t have a placebo arm.
And while the criticism is significant, it may not be limited to Bharat Biotech.
The Hyderabad-based pharmaceutical company’s small paediatric trial is one of many clinical studies conducted in India that have, or intend to have, a small number of participants.
Such studies are said to be underpowered. According to one 2007 paper, a study’s “power is the probability of … saying there is a difference when a difference actually exists. An underpowered study does not have a sufficiently large sample size to answer the research question of interest.”
It is possible to calculate the ideal sample size before a trial begins. So while trial investigators can write off an inconclusive result as the product of a small sample, they are expected to have good reasons for not having enrolled more participants.
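The standard two-proportion formula makes this calculation concrete. The sketch below is purely illustrative – the event rates and the function name are hypothetical, not drawn from any of the trials discussed in this article – but it shows why detecting even a modest treatment effect typically demands well over a thousand participants:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per arm of a two-arm trial to detect a fall
    in event rate from p1 (control) to p2 (treatment), using the
    standard two-proportion sample-size formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2                          # pooled event rate
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. to detect a drop in some adverse outcome from 20% to 15%,
# with 80% power at the conventional 5% significance level:
print(sample_size_per_arm(0.20, 0.15))  # → 906 per arm, i.e. ~1,800 in all
```

A trial with a few hundred participants, by contrast, can reliably detect only very large effects – which is precisely what makes underpowered studies likely to return inconclusive results.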
However, India’s drug regulator, the Drugs Controller General of India, has complicated this picture, especially in the last couple of years. It has passively condoned underpowering by granting the products of such trials ‘emergency use’ approvals. Itolizumab was okayed after a trial with 30 participants, favipiravir with 150, Virafin with 290 (40 and 250 in the phase 2 and 3 trials respectively) and 2-deoxy-D-glucose with 330 (110 and 220 in the phase 2 and 3 trials respectively).
* * *
Randomised controlled trials are difficult to conduct – more so with a crumbling health infrastructure in the middle of an epidemic – but this isn’t an acceptable excuse not to conduct one.
Conducting multiple small trials with a few hundred participants each is not the same as conducting one large trial with thousands. When the former are ‘summed up’, their weaknesses are magnified, rendering the pooled findings less than useful.
In India, researchers who plan to conduct clinical studies involving human participants need to register each study on the Clinical Trial Registry of India (CTRI) before it begins. Since the pandemic began, more than 1,300 such trials have been registered on CTRI.
Not all trials are completed; many never kick off and many others stop midway, for multiple reasons. This said, India’s COVID-19 research output has still been piddling – especially in quality, because most studies are underpowered and often methodologically flawed as well.
For example, there are at least 15 trials registered on CTRI involving hydroxychloroquine, the drug that India pushed in a big way in 2020. These trials together include thousands of patients – but they produced little usable evidence last year.
Instead, all the evidence we have on the efficacy of different drugs has come from large adaptive trials, especially the WHO’s SOLIDARITY, the UK’s RECOVERY and the US’s ACTT trials.
(Researchers sometimes unearth important patterns by studying multiple small trials together. However, this isn’t possible when the trials measure different parameters, thus arriving at incomparable outcomes.
For example, trials for hydroxychloroquine – at AIIMS Delhi, with 116 participants, and at AIIMS Raipur, with 50 – measured progression to severe disease and virological clearance at day six, with no overlap.)
No country for negative trials
Another problem is publication bias – when researchers don’t publish their results because the results are deemed insignificant.
For example, Mumbai-based Wockhardt conducted a trial for convalescent plasma in May 2020. In a meeting with experts at the Central Drugs Standard Control Organisation on February 11, 2021, Wockhardt submitted that its trial failed to show any benefit with plasma – but it hasn’t published a paper detailing the trial’s results. The PLATINA trial in Maharashtra met a similar fate.
Emails to researchers involved in both trials hadn’t elicited a response at the time of publishing this article.
“Not publishing the results of a trial is a breach of the ethical obligations that researchers have towards study participants, and is a grave injustice to our people,” Dr Aju Mathew, an epidemiologist and a cancer specialist in Kochi, told The Wire Science.
In a study published in December 2020, Dr Mathew and his peers found that only 55% of cancer trials registered in India were eventually published. They also reported that trials conducted by international pharmaceutical companies were more likely to be published than Indian ones.
Publication bias can have a serious impact on patient care. COVID-19 is a new disease: before drug trials got underway, healthcare workers drew on their experience and education to make informed guesses about treatments to administer. Negative results in this regard can help workers improve their protocols by subtracting ‘bad’ options.
Indeed, institutions around the world dropped drugs like hydroxychloroquine, lopinavir and ritonavir from their protocols thanks to negative results from the RECOVERY and SOLIDARITY trials.
Smaller trials in India have also been less accountable and have operated with little oversight.
The UK’s RECOVERY trial recruited around 40,000 participants at 180 sites. The University of Oxford, which runs the trial, centrally determined the protocol to follow and the outcomes to watch out for, and used new results from their own study to update the trial[1]. Both positive and negative results were published.
No such effort has emerged out of India – for reasons including bureaucratic lethargy, little collaboration between institutions, lack of autonomy and of incentives, Dr Mathew said. Other experts also mentioned deficient institutional funding and subpar expertise.
“We are a nation of band-aid action. We don’t plan ahead of time.”
“We can’t wake up in the middle of the pandemic and expect world-class research,” said Dr Soumyadeep Bhaumik, co-head of the meta-research and evidence synthesis unit at the George Institute, New Delhi. “A research ecosystem develops after years of systematic investment in institutions, expertise and a regulatory framework.”
It was only on June 10, 2021, that the Indian Council of Medical Research (ICMR) issued a tender to develop a large, multicentre, adaptive trial platform like RECOVERY.
According to a researcher who worked on a prominent Centre-funded COVID-19 trial in 2020, a serious lack of expertise among participating clinicians also compounded the chronic under-investment. “It took a lot of hand-holding and training to get them to appropriately complete the trial,” the researcher said.
It can feel both good and bad to know that India is in fact capable of conducting well-designed trials. For example, the PLACID trial, an ICMR-funded enterprise, enrolled 464 patients (a sample size calculated to provide sufficient power) from April 22 to July 14, 2020, to study the efficacy of convalescent plasma. The results were published as a preprint paper on September 10 and in BMJ a month later.
In another example, as of October 15, 2020, Indian doctors had enrolled 937 Indian participants at several hospitals in the country for the SOLIDARITY trial.
Even when large, centralised trials aren’t feasible, Dr Bhaumik said, a set of ‘core outcome measures’ could ensure that small trials of the same interventions track the same results.
Here, experts deliberate on and recommend some common patient-centric outcomes for clinicians to use when designing their trials. Adopting these outcomes wouldn’t render all trials the same.
“They could measure other outcomes as well – but using a core outcome set means the results of these small trials can be pooled … to inform policy and practice,” Dr Bhaumik said. “We need to start developing core outcome sets for diseases of significance to our country.”
In the time of COVID-19, this duty fell on the ICMR’s shoulders – but the council is yet to issue any such directions to the many COVID-19 trials happening in the country.
In fact, in the loudest directive the organisation sent forth (and quickly withdrew), its chief Dr Balram Bhargava demanded in July 2020 that hospitals involved in Covaxin’s clinical trial complete it in just two months, or face the government’s music. The results of this trial are yet to be published.
[1] That is, the trial is adaptive.