The Case for Creating Science Courts

Photo: Vlad Tchompalov/Unsplash

Science and politics intersect on many levels. Many scientists depend on public funding to conduct their research – an inherently political process – and political leaders depend on scientists to inform their policy decisions. In addition, the ethical ramifications of scientific research bear directly on ordinary citizens, who depend on governments to determine which lines of scientific inquiry are supported.

But Zeynep Pamuk, a political scientist at the University of California, San Diego, feels the interplay between these two worlds – science and politics – has only begun to be properly explored. Pamuk’s interest in this relationship began early in her career, when she started to examine the discourse surrounding climate change. “I realised that there was great scholarship on climate change, but it didn’t get a lot of uptake,” Pamuk told Undark. “So I became interested in why that was the case. What is it about the intersection of science and politics that’s become so pathological?” She eventually saw that “there wasn’t as much scholarship on that question, especially from within political science.”

In her new book, Politics and Expertise: How to Use Science in a Democratic Society, Pamuk outlines new directions that she believes the relationship between science and politics might take, rooted in the understanding that scientific knowledge is tentative and uncertain. Among her proposals is the resurrection of the science court, an idea first put forward in the 1960s.

The interview was conducted over Zoom and has been edited for length and clarity.

Undark: Much has been written on the importance of scientific literacy, and, especially in the last few years, on the problem of science denial and on the trust, or lack thereof, in science and scientists. But you frame your investigation very differently. What was your starting point?

Zeynep Pamuk: There’s a lot of discussion about denial of science – why citizens are so ignorant, why they don’t understand science. And I wanted to change the conversation by examining how the way science is done – how scientific research is conducted, and the choices that scientists and science administrators make at far earlier stages of the research process – shapes the uptake and framing of the debate. So I think the contours of the debate were too narrow.

In your book, you talk about the idea of scientists taking responsibility for their research. That’s an idea with a long history – one thinks of the atomic bomb, for example, and genetic engineering. How do you see this issue of responsibility for scientists?

I’m interested in the question from the perspective of how a democratic society deals with the presence within it of this knowledge-producing but fairly autonomous community of scientists. So when I say that scientists need to take responsibility, I don’t mean it in the way that a lot of people said about the atomic scientists – that they could be held morally responsible.

Sure, I don’t disagree with that. But I was more interested in what society could do to regulate these kinds of high-risk scientific endeavors. And I didn’t think that the answer that scientists have to be morally responsible, to examine themselves and restrain themselves – the idea that they self-monitor, that they can be trusted to do that – was a sufficient answer.

Are you saying that science requires more regulation or oversight?

In certain kinds of very high-risk scientific research, these decisions should be made collectively, or at least by authorised political representatives. They should have more public debate around them. The Obama administration at one point put a moratorium on lethal pathogen research. There was some coverage, but not a huge amount of discussion; and then it reversed its decision three years later. It’s very difficult to find any paper trail about what happened. What was the discussion? What was the reasoning? Did they decide it was now safe?

It’s very hard to know what happened. And it seems like this is hugely consequential on a global, planetary level. So there has to be more discussion around it. This kind of risk decision should not be left purely to scientists. We can assign them responsibility – but that doesn’t mean they alone should be responsible for making this very consequential decision.

Should governments be able to tell scientists that certain lines of inquiry are off-limits?

I think the answer is yes. I’m not going to say this area should be restricted or that area – I think this is a collective decision. My opinions are my personal opinions as a citizen of a democratic society. But I think more debate is appropriate. And in certain cases, there might be a lot of support for undertaking risky research, because people imagine that it will bring a better world – but in other cases, there are no conceivable benefits. I’m thinking maybe of killer robots, as one example. Or maybe the benefits don’t justify the risks. So it’s something that would come out of debate. But I think there can certainly be areas where limits should be placed on research.

One very interesting idea in your book is the notion of a science court. What exactly is a science court? How would it work, and what would its purpose be?

I stumbled upon this idea as I was looking at debates around science in the 1970s. This was a period when there was a lot of debate, because scientists were very influential; the glow of the World War II victory was around them. They had direct influence over politics. But of course, they disagreed among themselves. And a scientist called Arthur Kantrowitz suggested a science court, basically to adjudicate between disagreeing scientists, so that the public confusion this caused would come to a stop.

But he had a strict division of facts and values: This would be the factual stage, and then the values would be discussed later. And for the reasons I just mentioned, I didn’t think that would make sense. You can’t debate the science independently of the policy context or the context of use. And also, I thought this was a fairly elitist institution, with only scientists participating.

But you feel there was something of value in Kantrowitz’s idea?

I wanted to reimagine it. I took his structure, with different, disagreeing scientists making a case for their own views; but I wanted to have citizens there, and I wanted it to be a more overtly policy-oriented institution. So the way I imagine it, there would be a scientifically informed policy debate – like, for example, should we have strict lockdowns, or a less strict COVID-19 policy?

So it would have two clear sides – and then scientists for both sides would defend their views. They would ask each other questions that would help reveal the uncertainty of their views, the evidence that they’re marshalling. And then the citizen jury would be randomly selected. They would bring their own political beliefs, they would listen to the scientists, and they would make a policy proposal, selecting one of the two positions.

But scientists and politicians already argue a great deal. How would a science court be an improvement on the current system, in which there’s already a lot of debate?

It’s true that scientists constantly argue among themselves, but I’m not sure the scientists have unmediated arguments in front of a public audience. I think that is discouraged within current advisory systems. Maybe the climate experience led to this. But even before that, in the ’70s and ’80s, there was this norm that scientists argue behind closed doors within scientific advisory committees, but then they present a united front when they give advice.

So there’s one authoritative scientific advisory body, and that basically gives a consensus recommendation. So publicly oriented scientific disagreement is seen to be something that undermines trust in science – that emphasising the uncertainty will mean anything goes, that scientists don’t know anything. And I wanted to push back against that. I thought a properly organised institution, where scientists are facing one another directly, and not necessarily mediated by politicians who have their own agenda, and who just want to cherry-pick the science that serves it – that could have healthy effects for clarifying the factual basis of this political decision-making for the citizenry.

When we think of scientists struggling to present a united front on a topic of great public interest, the current coronavirus pandemic certainly comes to mind. But you argue that a lot of those disagreements were hidden from view?

We saw this during the COVID-19 pandemic, with the masking advice in the US. It was initially presented as, “This is our position: masks do not help; do not wear them.” Fauci said this, the Surgeon General said this, [former White House adviser] Deborah Birx said this – they were unanimous in this. And we did not hear from anybody within the scientific community.

And of course, debates were happening within the scientific community about the evidence for the benefits of masks, but we did not hear the opposing side: people saying, “Oh, masks are probably very effective,” or at least, “We don’t know that masks are effective, and this is our level of uncertainty.” We didn’t hear the opposing view at all.

And I think that hurt the case, because it made the reversal very difficult; it made people not trust the masking advice when it came in April 2020. So that was a good example of the kind of thing where a science court would have helped.

But on the other hand, if the public had a greater window onto scientific arguments as they unfolded, maybe they just wouldn’t listen to scientists at all. As you suggested, they might think, “Oh, look – they can’t even agree among themselves.”

Yeah, I think that’s true – that’s the risk. If people see disagreement, they might think scientists can’t agree. But that usually is the case. The one thing I will say, though, is that when you see scientists disagreeing, you also see the scope of disagreement. For example, you don’t see scientists saying “vaccines are ineffective,” or “vaccines are hugely dangerous.” So you see what sorts of things they’re disagreeing on, and that gives you a sense of where the debate is at.

If you overstate what scientists know, where the consensus lies, then there is a chance – and this happens all the time – that it will turn out to be wrong. And I think that undermines public trust even more than a candid admission that, at this point in time, scientists are disagreeing on a certain point.

But wouldn’t having ordinary citizens act as arbiters in scientific disagreements bring us back to the issue of scientific literacy? For example, if some members of the public don’t understand the difference between a virus and a bacterium, then they’re in a very poor position to evaluate strategies for fighting infectious disease – right?

Yes, I agree with that completely. I think improvements in scientific literacy would be critical for an institution like this to succeed. Then the question is, how much literacy? I think we can have a citizenry that is more literate about the scientific method, about the difference between viruses and bacteria. But that still wouldn’t mean that they’d become experts, or that they would need to have a Ph.D. to participate in the science court.

This article was first published by Undark.
