Vasudevan Mukunth is the science editor at The Wire.
This article is the first in a new column called ‘Infinite in All Directions’. Formerly our science newsletter, now defunct, IIAD is being revived in the form of this blog-style weekly column.
The recent conversation about preprints, prompted by Tom Sheldon’s article in Nature News, focused on whether access to preprint manuscripts is precipitating bad or wrong articles in the popular science press. The crux of Sheldon’s argument was that preprints aren’t peer-reviewed, which leaves journalists with the onerous task of validating their results when, traditionally, that has been the responsibility of independent scientists enlisted by the journals to which the papers were submitted. I contested this view because it is in essence a power struggle, with the scientific journal assuming the role of a knowledge hegemon.
An interesting example surfaced in relation to this debate quite recently, when two researchers from the Indian Institute of Science, Bengaluru, uploaded a preprint paper to the arXiv repository claiming they had detected signs of superconductivity at room temperature in a silver-gold nanostructure. They simultaneously submitted their paper to Nature, where it remains under embargo; in the meantime, public discussions of the paper’s results have been centred on information available in the preprint. Science journalists around the world have been reserved in their coverage of this development, sensational though it seems to be, likely because, as Sheldon noted, it hasn’t been peer-reviewed yet.
At the same time, The Hindu published an article highlighting the study. For its part, The Wire commissioned an article – since published here on August 6 – discussing the preprint in greater detail, with comments from materials scientists around the country. The article’s overarching conclusion seemed to be that the results look neat to theorists but, to experimentalists, need more work, and that we should wait for Nature’s ‘verdict’ before passing judgment. Nonetheless, the article found it fit, and rightly so given the people quoted, to be optimistic.
A few days later, there emerged a twist in the plot. Brian Skinner, a physicist at the Massachusetts Institute of Technology, uploaded a public comment to the arXiv repository briefly discussing a curious feature of the IISc preprint. He had found that two plots in the manuscript, representing independent measurements, displayed very similar, if not identical, noise patterns. Noise is supposed to be random; if two measurements are really independent, their respective noise patterns cannot, must not, look the same. Yet the IISc preprint showed exactly that. To Skinner – or indeed to any observer engaged in experimental studies – this suggests that the data in one of the two plots, or both, was fabricated.
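To make Skinner’s observation concrete, here is a minimal sketch – my own construction, not his actual analysis – of how one might test whether the noise in two supposedly independent measurement series is suspiciously similar. The synthetic data, the moving-average detrending and all names here are assumptions for illustration only:

```python
# Minimal sketch: check whether two measurement series share a noise pattern.
# Synthetic data only - not the IISc measurements, not Skinner's method.
import numpy as np

rng = np.random.default_rng(0)
field = np.linspace(0, 3, 200)   # hypothetical applied-field axis
signal = np.tanh(field)          # hypothetical smooth underlying response

# Two runs that secretly share the SAME noise realisation (the red-flag case)
shared_noise = rng.normal(0.0, 0.02, field.size)
run_a = signal + shared_noise
run_b = 1.1 * signal + shared_noise

def noise_residual(y, window=15):
    """Moving-average detrend, trimming edges where the average is unreliable."""
    kernel = np.ones(window) / window
    resid = y - np.convolve(y, kernel, mode="same")
    return resid[window:-window]   # drop edge artefacts

# For genuinely independent runs, r should hover near 0; here it is ~1.
r = np.corrcoef(noise_residual(run_a), noise_residual(run_b))[0, 1]
print(f"correlation of noise residuals: r = {r:.3f}")
```

Replacing `shared_noise` with two separate random draws sends r back towards zero – which is precisely why matching noise across supposedly independent plots is so damning.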
This is obviously a serious allegation. Skinner himself has not attempted to reach any conclusion and has stopped at pointing out the anomaly. At this juncture, let’s reintroduce the science journalist: what should she do?
In a world without preprints, this paper would not have fed a story until after a legitimate journal had published it, in which case the legitimacy of the science journalist’s article would bank on the peer-reviewers’ word. More importantly, in a world without preprints, this would have been a single story – à la the discovery of the Higgs boson or of gravitational waves from colliding neutron stars. In a world with preprints, it has become an evolving story even though, excluding the “has been submitted to a journal for review” component, the study itself is not dynamic. (Contrast this with, for example, the search for dark matter, which is ongoing.)
Against this context, the arguments Sheldon et al have put forth assume a new clarity. What they’re saying is that the story is not supposed to be evolving, and that science journalists forced to write their stories based only on peer-reviewed papers would have produced a single narrative of an event fixed at one point in space and time. In short: had journalists waited for the paper to be peer-reviewed, they would have been able to deliver to the people a more finished tale, one whose substance and contours enjoy greater consensus within the scientific community.
This may seem like a compelling reason to not allow journalists to write articles based on preprints until you stop to consider some implicit assumptions that favour peer-review.
First off, peer-review is always viewed as a monolithic institution whereas the people quoted in an article are viewed as individuals – despite the fact that both groups are (supposed to be) composed of peers acting independently. As a result, the former appears to be indemnified against error. In The Wire’s article, the people quoted were Vijay Shenoy, T.V. Ramakrishnan, Ganapathy Baskaran, Pushan Ayyub and an unnamed experimentalist. The author, R. Ramachandran (together with the editor – me), also cited multiple studies and historical events for the necessary context, and reminded the reader on two occasions that the analysis was preliminary. What the people get out of peer-review, on the other hand, is a ‘yes’/‘no’ answer that, in the journal’s interests, is to be considered final.
In fact, should review – peer or journalistic – fail, journalism affords various ways to deal with the fallout. The scientists quoted may have spoken on the record, and their contact details will be easily findable; the publication’s editor can be contacted and a correction or retraction sought; in some cases (including The Wire’s), a reader’s editor acting independently of the editorial staff can be petitioned to set the public record straight. With a journal, however, the peer-reviewers are protected behind a curtain of secrecy, and the people and the scientists alike have to await a decision that is often difficult to contest. The numerous articles published by Retraction Watch are ready examples.
Second, it is believed that peer-reviewers perform checks that science journalists never can. But where do you draw the line? Do peer-reviewers check for all potential problems with a paper before green-flagging it? More pertinently, are they always more thorough in their checks than good science journalists can be? In fact, there is another group of actors here that science journalists can depend on: scientists publicly critiquing studies on their Twitter and Facebook pages and on their blogs. I mention this to cite the examples of Katie Mack, Adam Falkowski, Emily Lakdawalla, etc. – and, most of all, of Elisabeth Bik, a microbiologist. Bik has been carefully documenting the incidence of duplicated or manipulated images in published papers (a crude version of one such check is sketched after the tweet below).
“In my free time, I scan scientific literature for problematic images. Data on 20,000 papers: https://t.co/ItWJ87pehr Or read this thread. https://t.co/n38M3e10uu” – Elisabeth Bik (@MicrobiomDigest), August 28, 2017
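As an aside, the simplest versions of such checks can be automated. The sketch below is a toy of my own making, not Bik’s workflow: it computes a crude ‘average hash’ of two figure files (the file names are hypothetical placeholders) and counts how many bits differ – identical or near-identical images leave near-identical fingerprints. Real image forensics is considerably more careful:

```python
# Toy duplicate-image check via "average hashing". Requires Pillow and numpy.
# File names are hypothetical placeholders; this is not Bik's actual method.
import numpy as np
from PIL import Image

def average_hash(path, size=16):
    """Downsample to a size x size grayscale grid and threshold at the mean."""
    pixels = np.asarray(
        Image.open(path).convert("L").resize((size, size)), dtype=float
    )
    return pixels > pixels.mean()   # boolean "fingerprint" of the image

def hamming_distance(h1, h2):
    """Count differing bits; 0 means the two fingerprints are identical."""
    return int(np.count_nonzero(h1 != h2))

d = hamming_distance(average_hash("figure_3a.png"), average_hash("figure_5b.png"))
print(f"hash distance: {d} (values near 0 suggest a duplicated image)")
```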
Circling back to peer-review’s being viewed as a monolith: many of the papers Bik has identified were published by journals after they were declared ‘good to go’ by review panels. So by casting their verdict as final, by describing each scientific paper as being fixed at a point in time and space, journals are effectively proclaiming that what they have published need not be revisited or revised. This is a questionable position. On the other hand, by casting the journalistic enterprise as the documentation of a present that is being constantly reshaped, journalists have access to a storytelling space that many scientific journals don’t afford the very scientists that they profit from.
Where this enterprise turns risky, even potentially unreliable, is when it becomes dishonest about its intentions – or rather, when it isn’t explicitly honest enough. That is, to effect change in what journalism stands for, we also have to change a little bit of how journalism does what it does. For example, in The Wire’s article, the author was careful to note that (i) only the paper’s publication (or rejection) can answer some questions and perhaps even settle the ongoing debate, (ii) some crucial details of the IISc experiment are missing from the preprint (and likely will be from the submitted manuscript as well), (iii) the article’s discussion is based on conversations with materials scientists in India, and (iv) the paper’s original authors have refused to speak until they have heard from Nature. Most of all, the article itself does not editorialise.
These elements, together with an informed readership, are necessary to stave off hype cycles – unnecessary news cycles typically composed of two stories, one making a mountain of a molehill and the next declaring that the matter at hand has been found to be a molehill after all. The simplest way to sidestep this fallacy is to remember at all stages of the editorial process that all stories will evolve irrespective of what those promoting them have to say. Of course, facts don’t evolve, but the conclusion a collection of facts lends itself to will. So will opinions, implications, suggestions and whatnot. This is why attempting to call out science journalists who respect these terms of their enterprise will not work – because doing so also passively condones hype. What will work is to knock on the doors of those unquestioning journalists who pander to hype above all else.
This prescription is tied to one for ourselves: as much as science journalists want to reform the depiction of the scientific enterprise, moving it away from the idea that scientists find remarkable results with 100% confidence all the time (which is the impression journals give), they – rather, we – should also work towards reforming what journalism stands for in the people’s eyes. Inasmuch as science as well as journalism are bound by the pursuit of truth(s), it is important for all stakeholders to remember, and to be reminded, that – to adapt what historian of science Patrick McCray tweeted – it’s about consensus, not certainty. Should they have a problem with journalists running a story based on a preprint instead of a published paper, journals can provide a way out (for reasons described here) by being more open about peer-review, what kind of issues reviewers check for and how journalists can perform the same checks.
This article was originally published on the author’s blog and has been republished here with edits for style.