The authority of science lies not just in its methods but also in the idea that the conclusions scientists reach are not shaped by their politics. Although scientific research can and does influence political discussion, popular attitudes, policies, and decision-making, its conclusions—always uncertain, always fragile—should be scientifically defensible regardless of one’s perspective, political or otherwise.

Thus, there is a line that scientists and scientific organizations must grapple with: science may often be political, but should it take sides in electoral politics?

Universities, funding agencies, research centers, and scientific journals are usually very careful to avoid taking overtly political, or at least partisan, stances. The Trump administration has arguably tested these waters in recent years, with jabs at the scientific establishment and at basic scientific facts that have been unprecedented in modern times. Reflecting substantial outrage within many quarters of the scientific community, Bill Nye (“the Science Guy”) led a March for Science in 2017 and publicly opined that “Science has always been political…but you don’t want it to be partisan.”

Nye’s comments reflect a difficult line that scientists often need to consider when they engage with public policy—a “Rubicon” line, if you will, that, once crossed, signals an important departure from existing norms. In the lead-up to the recent U.S. Presidential election, some parts of the scientific establishment crossed the Rubicon, taking clear and vociferous stances on the Presidential candidates.

In September 2020, the Editor-in-Chief of the journal Science wrote a scathing editorial entitled “Trump lied about science.” This was followed by strong critiques in both the New England Journal of Medicine and the cancer research journal Lancet Oncology.

Several other journals soon followed with editorial endorsements of presidential candidate Joe Biden. The journal Nature publicly endorsed Biden, arguing that “No president in recent history has tried to politicize government agencies and purge them of scientific expertise on the scale undertaken by this one. The Trump administration’s actions are accelerating climate change, razing wilderness, fouling air and killing more wildlife—as well as people.” Scientific American argued that “Trump's rejection of evidence and public health measures have been catastrophic in the U.S.”

One focus of these journals’ ire was the Trump administration’s handling of the COVID-19 pandemic. But they also extended their criticism to the administration’s failure to use scientific evidence in government decision-making more generally, stating, for example, that “...Trump's refusal to look at the evidence and act accordingly extends beyond the virus.” Although the focus was on Trump’s malfeasance, some of these journals heartily endorsed Biden; Scientific American, for example, stated, “It's time to move Trump out and elect Biden, who has a record of following the data and being guided by science.” This was the first time in the journal’s 175-year history that it had endorsed a Presidential nominee.

Crossing the partisan Rubicon in this way clearly reflects what was perceived to be at stake in the recent election. However, regardless of the validity or accuracy of these evaluations of the Trump administration, such partisan stances may also carry costs. They could, for example, erode public trust in science and, in doing so, undermine related outcomes such as scientists’ ability to convince the public to follow scientific recommendations.

To understand what is at stake when scientific authorities are perceived to be partisan, we conducted a large online survey experiment a week before the U.S. Presidential election. Participants were randomly assigned to read either a news article that generically described a scientific journal or a news article with the same description that also reported the journal’s actual statements regarding Biden and Trump.

To maximize external validity and generalizability, we drew from the five high-profile journals described above: Science, Nature, the New England Journal of Medicine, Lancet Oncology, and Scientific American. In each case, respondents did not see the original article (which would have been too long) but rather read a news report about it. This also reflects what most people actually experience in the media environment: they rarely read original sources, relying instead on summaries and reporting by journalists.

After reading one of these two articles, survey respondents were asked about their trust in scientists, scientific journals, and science in general. We also measured their planned compliance with scientific recommendations regarding safety behaviors related to COVID-19, such as wearing face masks.
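For readers who want a concrete picture of the design, the short Python sketch below simulates the random assignment step. The journal list comes from the study description above, but the function names, seed, and assignment scheme are illustrative assumptions, not the study’s actual survey software.

```python
# Illustrative sketch only -- not the authors' survey code. It shows how
# respondents could be randomly assigned to one of the two article
# conditions, crossed with the five source journals described above.
import random

JOURNALS = ["Science", "Nature", "New England Journal of Medicine",
            "Lancet Oncology", "Scientific American"]
CONDITIONS = ["generic_description", "partisan_statement"]

def assign(respondent_id: int, seed: int = 2020) -> dict:
    """Randomly assign a respondent to a journal and an article condition."""
    rng = random.Random(seed + respondent_id)  # reproducible per respondent
    return {
        "respondent": respondent_id,
        "journal": rng.choice(JOURNALS),
        "condition": rng.choice(CONDITIONS),
    }

# Example: assignments for the first three respondents
for i in range(3):
    print(assign(i))
```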

Before collecting and analyzing our data, we pre-registered this study, recording in advance what we were doing and what we expected to find, and made sure to recruit a sample that was sufficiently large and diverse in terms of political beliefs and demographics such as race, gender, age, and geography (n = 2,975).

We found that trust in science decreased among respondents who read that a scientific journal had taken a partisan position on the election compared to those who read about a journal that did not. This finding was most pronounced for political conservatives. In addition, reporting less trust in science was associated with lower compliance with scientific recommendations regarding COVID-19.
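The sketch below illustrates, with simulated data, how an effect like this could be estimated: an ordinary least squares model in which the article condition is interacted with a binary conservatism indicator. The data, effect sizes, variable names, and the use of statsmodels are hypothetical illustrations, not the study’s actual data or analysis code.

```python
# Illustrative analysis sketch with simulated data -- not the authors'
# dataset or code. It shows how one could test whether reading a partisan
# journal statement lowers trust in science, and whether the effect is
# larger among conservatives (a treatment-by-ideology interaction).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000                                   # roughly the reported sample size
condition = rng.integers(0, 2, n)          # 0 = generic article, 1 = partisan statement
conservative = rng.integers(0, 2, n)       # hypothetical ideology indicator

# Simulated trust scores: assume a small negative effect of the partisan
# statement that is stronger for conservatives (assumed numbers, for illustration).
trust = (5.0
         - 0.15 * condition
         - 0.25 * condition * conservative
         + rng.normal(0, 1, n))

df = pd.DataFrame({"trust": trust,
                   "condition": condition,
                   "conservative": conservative})

# OLS with a treatment-by-ideology interaction term
model = smf.ols("trust ~ condition * conservative", data=df).fit()
print(model.summary().tables[1])
```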

We then ran a second survey with a different sample and recruitment method, this time obtaining a representative sample of the U.S. population. The results replicated those of the first study. Thus, we found robust evidence that partisan stances by scientific publications can lower trust in science.

Because of the experimental design of our study, the effects we found cannot be attributed to the views about science that people held before taking the survey. Our findings, which have not yet been published, suggest that there are indeed costs when scientific publications take a partisan stance: being perceived as partisan may harm the perceived legitimacy of science. Scientific publications and institutions should weigh such effects on public trust when considering whether to take partisan positions. It is an open question whether these effects accumulate over time or dissipate after this recent, highly polarized and polarizing election.


For Further Reading

Krause, N. M., Brossard, D., Scheufele, D. A., Xenos, M. A., & Franke, K. (2019). Trends—Americans’ trust in science and scientists. Public Opinion Quarterly, 83(4), 817–836. https://doi.org/10.1093/poq/nfz041

Nisbet, E. C., Cooper, K. E., & Garrett, R. K. (2015). The partisan brain: How dissonant science messages lead conservatives and liberals to (dis)trust science. The ANNALS of the American Academy of Political and Social Science, 658(1), 36–66. https://doi.org/10.1177/0002716214555474

Young, K. L. (2020). Progress, pluralism and science: Moving from alienated to engaged pluralism. Review of International Political Economy. Advance online publication. https://doi.org/10.1080/09692290.2020.1830833

 

Bernhard Leidner is an Associate Professor of social, political, and peace psychology in the Department of Psychological and Brain Sciences at the University of Massachusetts Amherst.

Kevin L. Young is an Associate Professor in the Department of Economics at the University of Massachusetts Amherst.

Stylianos Syropoulos is a PhD student in the Psychology of Peace and Violence Program at the University of Massachusetts Amherst.