Don’t Say Science Is Self-Correcting – Two Studies Show It Isn’t

In 2001, a clinical trial report on an antidepressant called paroxetine suggested the drug was effective and that patients tolerated it well. However, a series of lawsuits and investigations followed when doctors began to doubt its efficacy.

In 2015, another study found that the original clinical trial had been misreported. Paroxetine was not only ineffective but also harmful to the millions of children who had been prescribed it: the pill increased their susceptibility to suicidal behaviour.

Clinical trials help determine which compounds are effective against a particular disease and how well they work. Conducting these trials is a complicated process with its own set of problems. These issues can crop up at every stage of a clinical trial – asking the right research questions, conducting the experiments and reporting the results.

One major problem with these reports is called outcome switching. Two new studies investigated the prevalence of this problem, and explored how trialists and editors are responding to demands for corrections.

When trialists set up a clinical trial, they are required to state which quantities they will measure to track the drug’s efficacy. These measurements are called outcomes and include, for example, blood pressure or the development of suicidal tendencies a year after treatment begins.

Also read: The Professor Who Had to Spend Half His Life to Make the Drug India Needs

The outcomes must be registered before the trial commences. This way, the trialists can’t claim success if they were gunning for a positive result on outcome A but got one on outcome B instead.

In this context, outcome switching is the act of switching from A to B in order to claim a success where there is, in fact, a failure.

“Misreporting and bias in medical evidence massively reduces the quality of the evidence that doctors use to give their patients information,” Henry Drysdale, a researcher at the University of Oxford and an author of both studies, told The Wire. “We’ve known for a long time that outcome switching is a common and important source of bias in clinical trials.”

The paroxetine case is a good example of why switching outcomes is bad. The trial began with two primary outcomes and three secondary outcomes. When it finished, its researchers reported two additional outcomes to make the trial seem more successful than it actually was.

But despite the unfavourable precedent, outcome switching is common.

In 1996, a group of people with stakes in clinical trials – including doctors, journal editors and funders – published the Consolidated Standards of Reporting Trials (CONSORT) statement. Since then, the CONSORT guidelines have been revised a few times and endorsed by 585 scientific journals.

According to CONSORT, if the outcomes reported at the end of a trial differ from the pre-registered ones, the report is required to explain why.

But whether or not that explanation is given, switching outcomes means “what doctors tell their patients about the treatments might be based on flawed evidence and … might just be wrong in fact,” Drysdale said.

In 2015, Drysdale and his colleagues at the University of Oxford set up the Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare). The team there scanned every clinical trial report published in a six-week window in the New England Journal of Medicine (NEJM), the Journal of the American Medical Association (JAMA), the British Medical Journal (BMJ), The Lancet and the Annals of Internal Medicine. There were 67 in all.

They compared the outcomes in each report with the pre-registered ones. They found that while some reports described their outcomes accurately, many others added new outcomes without saying as much, despite CONSORT’s requirements.
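The comparison itself is conceptually simple, even though COMPare carried it out by hand against registry entries rather than with software. As a rough illustration only – the outcome names and the compare_outcomes helper below are hypothetical, not the team’s actual method – the check amounts to taking set differences between registered and reported outcomes:

```python
# Hypothetical sketch of the outcome check: given a trial's pre-registered
# outcomes and the outcomes named in its final report, list what went
# missing and what was silently added.

def compare_outcomes(registered, reported):
    """Return (missing, added) outcomes for a single trial."""
    registered, reported = set(registered), set(reported)
    missing = registered - reported   # pre-specified but never reported
    added = reported - registered     # reported but never pre-specified
    return missing, added

# Made-up outcome names for a single imaginary trial.
registered = {"change in depression score at 8 weeks", "dropout rate"}
reported = {"change in depression score at 8 weeks", "clinician-rated improvement"}

missing, added = compare_outcomes(registered, reported)
print("Missing:", missing)  # {'dropout rate'}
print("Added:", added)      # {'clinician-rated improvement'}
```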

Also read: Priggish NEJM Editorial on Data-sharing Misses the Point it Almost Made

Next, the COMPare team wrote letters to the editors of the journals where these misreported trials had been published, highlighting the inconsistencies. They were trying to see if scientific research – and science by extension – is as self-correcting as it often claims to be.

The answer was ‘no’. The editors’ responses ranged from accepting the mistakes to ignoring them outright.

Of the 67 trials that were examined, only nine reported all their pre-registered outcomes accurately. Across the 58 misreported trials, over 300 pre-registered outcomes didn’t find mention in the final reports while roughly another 300 were switched in for them.

And of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for letters made including those details impossible, they never heard back from the journal.

The NEJM, on the other hand, stated that the authors of reports it published were not required to abide by the CONSORT guidelines – even though the NEJM itself endorses CONSORT.

Next, the COMPare team wrote to the authors of the outcome-switched trials to see what they had to say. Like the journal editors, the trialists offered up a number of reasons for why they had included previously unmentioned outcomes in their clinical trial reports.

The trialists were often unclear about what the CONSORT guidelines say about pre-specifying outcomes, or about when an outcome counts as pre-specified. Often, they were also unaware that simply disclosing that a new outcome had been added to a report would have been enough for the sake of accuracy.

Finally, many didn’t seem to have understood how trial registries worked and insisted that unpublished outcomes could be published elsewhere at a later date.

“What is most alarming is that [the COMPare team] demonstrates how much resistance there is to correcting the record and how ignorant and/or biased trialists are,” John Ioannidis, a professor of medicine at Stanford University, wrote in an email to The Wire. He wasn’t associated with either study.

So it seems the scientific ideal of self-correction is hardly the norm.

David Moher, a senior scientist at the University of Ottawa who helped craft the CONSORT guidelines, found the results “somewhat discouraging”. According to him, the COMPare exercise is almost like an audit of scientific journals.

He compared the situation to troubles within the airline industry. Should airplanes fail, the industry and the public are quick to raise alarms and ground the aircraft – as with the Boeing 737 MAX.

Journals, like any other product, need to be audited to ensure they are doing their jobs right. “We don’t generally see that done and I think that’s very, very unfortunate,” Moher told The Wire.

Also read: Scientific Fraudsters Frequently Slip Through the Cracks to Repeat the Offence

The COMPare findings highlight trialists’ lack of research integrity.

To Moher at least, trialists should take the basic step of ensuring they comply with CONSORT when reporting a clinical trial. If they don’t – as COMPare has demonstrated many don’t – it might be a good idea to train them further.

The clinical trial community will also need to introspect on whether any of its incentives encourage misreporting, especially since such reports have grave consequences for doctors and patients.

For example, scientists – including trialists – are under a lot of pressure to publish, often in ‘prestigious’ journals, to secure promotions. As a result, they often try to produce glamorous results that don’t reflect what actually went on in an experiment or trial, or they hack the trial itself to produce spectacular results.

Ioannidis himself has done a lot of work in this area. Perhaps most famously, he found in a groundbreaking 2005 study that – as its title went – “most published research findings are false”. He has also estimated that 85% of “research resources” are “being wasted”.

“Clinical trials are what leads to medical interventions being licensed or not and being widely used or abandoned,” Ioannidis said. “Misleading results and inferences due to outcome switching can affect the health of billions of people. This is not a statistical curiosity issue. It is about life or death.”

The two studies (this and this) were published in the journal Trials on February 14, 2019.

Sukanya Charuchandra has written for The Scientist, Johns Hopkins Magazine and Firstpost. Her writing interests include biology, medicine and archaeology.