Killing an Animal in the Name of Science

The destructive way authority sometimes influences people was shown sixty years ago in groundbreaking experiments conducted by Stanley Milgram. The famous Yale psychologist showed that a large majority of people administered painful and potentially lethal electric shocks to an innocent human victim (who was actually unharmed, although participants did not know this) during a fake study on learning. While these studies have been reproduced in many countries and in various settings (including virtual reality), the reasons underlying this powerful and frightening phenomenon remain to be fully clarified.

According to Milgram, his participants transferred their own agency and responsibility to the experimenter and thus became “thoughtless agents of action.” However, many scholars consider that a participant’s willingness to administer electric shocks cannot be properly explained by blind obedience. Instead, it may be a function of their active identification with the scientific enterprise underlying the experiment. So far, however, strong evidence for this hypothesis has been lacking.

A New Experiment Involving An Animal

In order to fill this gap, we created a completely new experimental situation involving an animal victim. This situation inflicts less psychological stress on human participants, and it also addresses the genuine moral conflict created by the massive use of animals in experimentation. While the earlier view of animals as insensitive machines has been widely disproved by scientific studies revealing their complex mental lives, in laboratories animals are still treated as scientific tools. Worldwide, more than 115 million of them are killed every year for research purposes. This also creates moral dilemmas and distress for laboratory staff who perform invasive or painful experiments.

In our recent experiments, modeled on Milgram’s methods, participants were required to incrementally administer a noxious chemical substance to a large (20-inch) fish as part of a learning experiment, leading to the death of the animal. The fish was actually a biomimetic robot that swam in a tank across the room from the participant, who thought it was real.

You can see a short video of the setting here: https://www.youtube.com/watch?v=exNHKprKNwI

The administered substance was supposed to stimulate learning in the context of research on Alzheimer’s disease. However, the drug had an important side effect: at high dosages, it impaired the animal’s vital functions. Participants were informed that at higher doses the substance would be painful, and ultimately lethal, for the animal.

In order to perform the task, participants had to click successively on twelve buttons, each of which triggered a motorized syringe to inject an additional dose of the toxic pharmacological substance into the water. When they were reluctant to continue, a research assistant asked them to keep on pressing the buttons.

During the task, participants were asked to observe the behavior of the fish on a supposed learning task, and were told that the twelve-dose drug administration would influence the fish’s competence on the task. Below the buttons, the expected probability of the fish’s death was displayed: 0% (button 1); 33% (button 3); 50% (button 6); 75% (button 9); and 100% (button 12). Moreover, the animal’s heart rate was shown on a screen, which also produced auditory feedback to indicate cardiac distress.

As in Milgram’s studies, many participants (both males and females) stuck with the task until the end. More precisely, 28% of the participants refused to begin the task, between 1% and 6% stopped at each intermediate level, and a full 44% went all the way, injecting the twelve doses and killing the fish.

Killing An Animal For Science

In another experiment, we reasoned that if science represents a cultural authority, the mere suggestion of science would increase a participant’s willingness to go along. To test this, we repeated the same experiment with the fish, but this time we led our participants to think either positively or negatively about science:

  • Half the participants were assigned to a “science promotion” condition in which they wrote down three things that were important about science, what they liked about science, and what they felt they had in common with scientists.
  • The other half of the participants were assigned to a “science critical” condition where they had to list three things they believed to be problematic about science, what they disliked about science, and what differentiated them from scientists.
  • Then they all did the learning task with the fish.

As we hypothesized, those in a pro-scientific mindset were more willing to follow the experimenter’s instructions to keep going, thus inflicting more and more pain on the fish.

Also, based on other questions we asked our participants, people who placed more value on non-egalitarian and hierarchical relationships among social groups, and who believed more strongly that humans are more valuable than other species, injected more toxic doses into the fish. Non-vegetarians were also more likely to kill the animal.

The fact that just writing about the good aspects of science (regardless of one’s own prior attitudes) predicts people’s harmful behavior toward an animal suggests that obedience is probably not as blind as Milgram claimed—it is also influenced by explicit motives. Today, science represents the most influential cultural authority. In our experiment, we showed that ordinary citizens can agree to inflict pain and to kill an animal not merely to obey an authority figure, but in the name of science.


For Further Reading

Bègue, L., & Vezirian, K. (2021). Sacrificing animals in the name of scientific authority: The relationship between pro-scientific mindset and the lethal use of animals in biomedical experimentation. Personality and Social Psychology Bulletin. https://doi.org/10.1177/01461672211039413

Dolinski, D., & Grzyb, T. (2020). The social psychology of obedience towards authority. London: Routledge.
 

Laurent Bègue is a professor at University Grenoble Alpes. He is the author of The Psychology of Good and Evil (Oxford University Press, 2016). His research deals with human aggression.

Kevin Vezirian is a PhD student at University Grenoble Alpes and is interested in psychological processes underlying animal objectification.

 

Right Time, Right Place, Right Issue: When Research Matters to Legislators

How do legislators use science? It’s not an easy question for scientists to answer. Many are hard pressed to identify even one concrete example of an evidence-based legislative action. So, we sat down with policymakers to ask them the same question. What we heard will surprise those who are pessimistic that science is used at all in policymaking. We now can identify several ways that research flows, much like a river, through the policy landscape.

First, we reached out to 123 state legislators in Indiana and Wisconsin. These legislators then nominated an additional 32 colleagues who they felt were exemplary research users. We also supplemented our sample with 13 key policy players (such as governors, heads of lobbying firms, and former legislators). Confidence in our findings is bolstered by the high response rates in this hard-to-access population of policymakers (60%, 84%, and 100%, respectively).

When Research is a Hard Sell

We learned there are policies and people and places that can frustrate and facilitate research use. First, regarding policies, research was less likely to hold sway on polarized moral issues, such as reproductive rights. Research was generally less influential on issues driven by ideology, such as beliefs about whether government is the problem or the solution. There was little room for research on issues driven by passion, such as tragic personal stories. As one Republican relayed, “If a bill is named after somebody . . . like Sarah’s Bill, then you know research is screwed.”

Where Research Can Have an Impact      

Policymakers turned to research more frequently on emerging issues such as opioid use, concussions, and rural Internet availability. Research was also more likely to influence issues where policymakers did not have established positions or where consensus had been reached, such as the need for criminal justice reform. Research also appeared to flow more freely on the “million . . . technical issues . . . that’s really the majority of the [legislature’s] work.” One Republican, who worked on property tax assessment and land annexation, said technical issues don’t get a lot of media attention but still comprise about 80% of the policy agenda:

“There’s no quote ‘Republican or Democratic’ theory about them and there’s no big contributor to your campaign who cares. . . It’s just like you’re stripping it down to the essence of good government. . . I think you have . . . much more ability to govern in sort of an evidence-based way.”

Who Seeks Science

However, research use varies by people. Some legislators told us they rely on intuition or gut instinct, whereas others factor in research. As an example, legislators face hundreds of bills each session, which makes it literally impossible to read and study each one. So, legislators specialize and develop expertise on a particular issue. They become known as the “go-to” legislator, whom colleagues turn to for advice on what positions to take. To attain and maintain a reputation as an issue expert, they often use in-depth research. Also, members of the minority party more often turn to research evidence; the “minority party has to win more of its arguments based upon facts” because it lacks access to other levers of power.

Right Place, Right Time

Timing also mattered. Research is used more often early in the policy process, when the issue is still a work in progress and policymakers have not yet staked out a position. Where the research was introduced also mattered. The most expertise on specific issues lies in committees, where bills are developed before hitting the floor.

Regardless of the time or place, research is used in a political sphere where decisions are reached through negotiation. So, for policy purposes, the utility of research depends not only on its credibility to allies, but also to adversaries. Policymakers screen the credibility of research less by the methods and more by the source, particularly the source’s reputation as reliable and nonpartisan. Despite its value, policymakers believe that nonpartisan research is difficult to find.

To understand research use in policymaking, we must think like a river. Policy issues infused with morality, ideology, or passion flow through narrow, nonnegotiable routes that restrict research use. However, research can navigate the policy landscape on issues that are new, technical, or open to consensus. Research use is facilitated when it enters through the port of a committee and frustrated when it enters downstream where it hits the rapids of unrelenting time pressures. To guide next steps, policymakers provided practical advice to those interested in communicating research to them. Legislators also identified multiple ways that research contributes to their effectiveness as policymakers and to the policy process. See the works listed below for more about these recommendations.


For Further Reading

Bogenschneider, K., & Bogenschneider, B. N. (2020). Empirical evidence from state legislators: How, when, and who uses research. Psychology, Public Policy, and Law, 26(4), 413–424. https://doi.org/10.1037/law0000232

Bogenschneider, K., Day, E., & Parrott, E. (2019). Revisiting theory on research use: Turning to policymakers for fresh insights. American Psychologist, 74(7), 778–793. https://doi.org/10.1037/amp0000460
 

Karen Bogenschneider is a Rothermel-Bascom professor emeritus of Human Ecology at the University of Wisconsin-Madison. Her expertise is the study, teaching, and practice of evidence-based family policy. She is known for her work on the Family Impact Seminars, and is co-author of a forthcoming second edition of Evidence Based Policymaking: Envisioning a New Era of Theory, Research, and Practice.

Bret N. Bogenschneider is an assistant professor of business law in the Luter School of Business at Christopher Newport University. His expertise is in tax law and policy, and he is the author of the recently released How America was Tricked on Tax Policy.

 

The Scientific Establishment Crosses the Rubicon. What’s the Risk?

The authority of science lies not just in its methods but also in the idea that the conclusions scientists reach are not affected by their politics. Although scientific research can and does influence political discussion, popular attitudes, policies, and decision-making, its conclusions—always uncertain, always fragile—should be scientifically defensible regardless of one’s perspective, political or otherwise.

Thus, there is a line that scientists and scientific organizations must grapple with: while science may often be political, should science take sides in electoral politics?

Universities, funding agencies, research centers, and scientific journals that publish research are usually very careful to avoid taking overtly political, or at least partisan, stances. The Trump administration has arguably tested these waters in recent years, with jabs undermining the scientific establishment and basic scientific facts, jabs that have been unprecedented in modern times. Reflecting substantial outrage within many quarters of the scientific community, Bill Nye (“the Science Guy”) led a March for Science in 2017 and publicly opined that “Science has always been political…but you don’t want it to be partisan.”

Nye’s comments reflect a difficult line that scientists often need to consider when they engage with public policy—a “Rubicon” line, if you will, that, once crossed, signals an important departure from existing norms. In the lead-up to the recent U.S. Presidential election, some parts of the scientific establishment crossed the Rubicon, taking clear and vociferous stances on the Presidential candidates.

In September, the Editor-in-Chief of the journal Science wrote a scathing article entitled “Trump lied about science.” This was followed by other strong critiques from both the New England Journal of Medicine and the cancer research journal Lancet Oncology.

Several other journals soon followed, with editorial endorsements of presidential candidate Joe Biden. The journal Nature publicly endorsed Biden and argued that, in Trump’s case, “No president in recent history has tried to politicize government agencies and purge them of scientific expertise on the scale undertaken by this one. The Trump administration’s actions are accelerating climate change, razing wilderness, fouling air and killing more wildlife—as well as people.” Scientific American argued that “Trump's rejection of evidence and public health measures have been catastrophic in the U.S.”

One focus of these journals’ ire was the Trump administration’s handling of the COVID-19 pandemic. But they also extended their criticism to the failure to use scientific evidence in government decision-making in general, stating for example that “...Trump's refusal to look at the evidence and act accordingly extends beyond the virus.” Although the focus was on Trump’s malfeasance, some of these journals heartily endorsed Biden; for example, Scientific American stated “It's time to move Trump out and elect Biden, who has a record of following the data and being guided by science.” This was the first time in the journal’s 175-year history that it endorsed a Presidential nominee.

Crossing the partisan Rubicon in this way clearly reflects what was perceived to be at stake in the recent election. However, regardless of the validity or accuracy of these evaluations of the Trump administration, there also may be costs. The costs of such partisan stances could, for example, affect public trust in science and, in doing so, influence other related outcomes such as scientists’ ability to convince the public to follow scientific recommendations.

To understand what is at stake when scientific authorities are perceived to be partisan, we conducted a large online survey experiment a week before the U.S. Presidential election. Participants were randomly assigned to read either a news article that generically described a scientific journal or a news article with the same description of the scientific journal that also reported the journal’s actual statements regarding Biden and Trump.

To maximize external validity and generalizability, we drew from the five high-profile journals described above: Science, Nature, New England Journal of Medicine, Lancet Oncology, and Scientific American. In each case, respondents didn’t see the original article (which would have been too long) but rather read a news report about it. This also reflects what most people actually experience in the media environment – they don’t read actual sources but rather summaries and reporting by journalists.

After reading one of these two articles, survey respondents were asked about their trust in scientists, scientific journals, and science in general. We also measured their planned compliance with scientific recommendations regarding safety behaviors related to COVID-19, such as wearing face masks.

Before collecting and analyzing our data, we pre-registered this study, recording in advance what we were doing and what we expected to find, and made sure to recruit a sample that was sufficiently large and diverse in terms of political beliefs and demographics such as race, gender, age, and geography (n = 2,975).

We found that trust in science decreased among respondents who read that a scientific journal had taken a partisan position on the election compared to those who read about a journal that did not. This finding was most pronounced for political conservatives. In addition, reporting less trust in science was associated with lower compliance with scientific recommendations regarding COVID-19.

We then ran a second survey with a different sample and recruitment method, this time obtaining a representative sample of the U.S. population. The results of this second survey replicated the first study. Thus, we find robust evidence that partisan stances by scientific publications can lower trust in science.

Due to the experimental design of our study, the effects we found can’t be due to people’s initial views about science coming into the survey. Our findings, which have not been published yet, point to the fact that there are indeed costs when scientific publications take a partisan stance. Being perceived as partisan may harm the perceived legitimacy of science. Such effects on trust in science should be taken into account when considering how political partisanship may influence the public’s trust in science and scientific institutions. It is an open question whether such effects may accumulate over time or whether they may dissipate after this recent, very polarized and polarizing election.


For Further Reading

Krause, N. M., Brossard, D., Scheufele, D. A., Xenos, M. A., & Franke, K. (2019). Trends—Americans’ trust in science and scientists. Public Opinion Quarterly, 83(4), 817–836. https://doi.org/10.1093/poq/nfz041

Nisbet, E. C., Cooper, K. E., & Garrett, R. K. (2015). The partisan brain: How dissonant science messages lead conservatives and liberals to (dis)trust science. The ANNALS of the American Academy of Political and Social Science, 658(1), 36–66. https://doi.org/10.1177/0002716214555474

Young, K. L. (2020). Progress, pluralism and science: Moving from alienated to engaged pluralism. Review of International Political Economy. Advance online publication. https://doi.org/10.1080/09692290.2020.1830833

 

Bernhard Leidner is an Associate Professor of social, political, and peace psychology in the Department of Psychological and Brain Sciences at the University of Massachusetts Amherst.

Kevin L. Young is an Associate Professor in the Department of Economics at the University of Massachusetts Amherst.

Stylianos Syropoulos is a PhD student in the Psychology of Peace and Violence Program, at the University of Massachusetts Amherst.

 

Researchers Get Rejected Too

The website Character & Context exists so that researchers in social and personality psychology can share their interesting findings with the public.  Our blog authors summarize the results of research articles that have gone through peer review and subsequently been published in prestigious journals.

By publishing only the end result of our work, we may be minimizing how much goes into the process.  For every blog on Character & Context, researchers came up with an idea, conducted a study, analyzed the data, refined their idea, probably ran more studies, wrote up their findings, sent the manuscript to a journal, got feedback from the journal, and rewrote the article.  This process usually takes years.

And that is the best case scenario.  Frequently, we put enormous effort and time into studies that don’t work, leaving us with nothing to publish.  And even when our studies do work, journals often reject our papers, and we need to submit our research to a different journal and start over with the review process or, worse, go back and conduct more studies.   Quite often, we are told to go away and given long lists of reasons why our work isn’t good enough to publish.  That sort of negative feedback is a constant part of researchers’ jobs; academia is full of criticism and rejection. 

Recently, I had a week that was a triple whammy of rejection. I had two research manuscripts rejected by journals, and a grant application that wasn’t even discussed by the grant panel -- even though it was a revision that addressed the concerns the panel had raised the year before.  I was feeling really down, and so I posted on social media about my disappointment and insecurities. What I got back were amazing observations, reactions, and advice from some of the smartest and most productive social and personality psychologists out there on how to deal with professional rejection.  What could be more useful?!  So, with their permission, I am going to share some of their comments with you.

Michael Poulin: I feel all of this. I think part of the struggle (for me at least) is that being an academic provides a person with so many ways to fail: we can fail at getting funding, having good ideas, getting data, publishing, recruiting, mentoring, teaching, collaborating, etc... and chances are, at any given time we're all failing in at least one of those domains. In my saner moments, I remember that there are other domains where I'm doing better. And that things might turn around (eventually) in the domain that's causing my current anxiety. I’m working on making those moments more plentiful!

Mark Leary: As I was going through 43 years of research files in preparation for retirement, I was struck again and again by the low proportion of my studies that made it into print.  Many of them were studies that simply didn’t work -- pure failures that I made no effort to publish.  But there were plenty of rejections as well, including ones that went to 3 or 4 journals before I gave up.  And, some of those rejections were quite harsh, like the review that started “This is a perfect example of the worst kind of research in social psychology.”

In a little talk that I gave to our local social psychologists about these failures and rejections, I estimated that perhaps only 30% of the studies I had conducted over the past four decades ever saw the light of day.  And the researchers in attendance, all of them seasoned and well-known researchers, agreed that 30% was probably in the ballpark for them too.

Knowing that other people also have their share of failures and rejections doesn’t take away the frustration of failed goals and wasted time.  But it does show that failure and rejection are part of the research process and not unique to you.  In fact, one seasoned researcher at my talk suggested that, if your work doesn’t fail regularly, you’re probably not tackling sufficiently new and interesting topics.  But still -- it's very frustrating and deflating, and we've all been there.

Brian Nosek: For rejection, I don't think there is any way to remove the sting of receiving it. But, my recovery from it is much improved with practice and recognition that it is par for the course, even for the best of us.  Example on the grant funding front. In 2007/2008, I applied to NIH/NSF maybe 4-5 times with variations of a proposal to create an Open-Source Science Framework and other name iterations. Never funded, defeating rejections. Ultimately dropped it. Picked it back up in 2011 out of frustration and finding a collaborator to give it another go. Worked out better.

Elizabeth Dunn: In my experience, the work that's been the hardest to publish in psych journals has ended up having the biggest impact. But it's always hard to remember that when reading rejection letters!

Jay Van Bavel: I felt this same way as a new professor--my first 10 paper submissions were rejected along with all my grant proposals. Then I realized there is a ton of noise in the system (along with huge variation in access to resources). This means rejection is often arbitrary but also that it isn’t about you. For instance, with grants they completely rotate the panel and reviewers for the same proposal. Unfortunately, the only way to deal with so much random noise is just to submit a lot over a long period of time.

Roger Giner-Sorolla: No you are not alone! I have a great student who did great work and we keep rolling snake eyes on each of our 3-4 papers. There is feast or famine it seems. And some areas of research are tougher to crack than others.

Heather Mercer Claypool: I have felt like this before, many times. As I've gotten older, my perspective has changed a great deal in ways that I think are much healthier for my well-being regarding these sorts of insecurities. For me, it comes down to three things. First, I'm better at dealing with self-comparison pressures. Are there people who publish way more than me, give more talks, get more grants, have a "bigger" name? Yes, of course. I've not set the research world on fire. But, that's fine. If being the most successful at something is my only route to happiness, I'll never have it.

Second, it's easy to look forward at the next paper, the next application, the next class. These are things yet to do or yet to accomplish. That instills anxiety about incompleteness. But, why not also look back? Hamilton reference here: "Look at where you are; look at where you started." I sometimes think about "grad school Heather." The one practicing my MPA [Midwestern Psychological Association] talk in the Palmer House hotel room, nervous to talk to "big important people," wondering if I'd ever graduate, much less get a job. If I told “grad school Heather” that one day you'll be on the MPA program committee, you'll get into SESP, you'll get a tenure-track job (and tenure), you'll publish regularly, etc., I think “grad school Heather” would be super stoked about that. We move the goalposts on what we consider “successful.” We are all too hard on ourselves. We've accomplished a lot. It's easy to forget when we don't look backwards and only look at the to-do list.

Third, I'm not my job. Many aspects of my job bring me great joy. But, my identity is in being a friend, a spouse, an activist, a family member, etc. It's much easier for me to sweep away the "academic loser" feelings when I remind myself that many other things in life bring me meaning.

Chris Crandall: Stocking the file drawer is inevitable. It’s a fairy tale to believe that all research scouting expeditions come up with gold.

For me, I usually follow two or three lines of research. At least one of them is the kind of research that *always* bears fruit. In my case, I have two lines of research that always turn up usable data: (1) friendship pairs found “in the wild” where the only requirements are photocopies, RAs, and people interacting in public and (2) norms about prejudice, a perennial issue.

Just make sure some of your research is sure-fire, in which any result is interesting. (Easier said than done.)

Amanda Diekman: I get rejected all the time but it has gotten easier with practice. I have learned to discriminate between go-away rejections and maybe-with-a-lot-of-work rejections. A couple of years ago our lab read this piece about setting rejection goals and it was so helpful in reframing. Rejection is just part of getting the work out there.

Linda Skitka: I have definitely felt the same way-- grant failures somehow are especially hard. I like Chris Crandall's model of having several different things going at once, which makes the investment in any one of them less and spreads the risk. Back up plans help: These days if a paper doesn't "land" within the first couple of tries, I pretty much am going to PlosOne or Collabra to get it out there (note: they waive the publication fee upon request); makes it easier to shrug off a rejection at the A-level journals.  I don't really care about where I publish anymore: I just want the joy of FINISHING something!!

Tiffany Ito: I am a big fan of making the failure part (or whatever you want to call it) of our profession more explicit. Many years ago, a fabulous post-doc in my lab brought an article about having a vita of failures to my attention. I can’t recall the actual citation, but here are some similar posts: CV of Failures and Sharing the Failures.

It was substantively relevant to a project of ours, but also totally resonated with us professionally. Behind every visible success is a string of detours, misfires, and outright rejections, and we should make it more obvious that to get to success, you have to have these setbacks by creating a vita that lists not only the successful outcomes, but also the failures and rejections.

Shortly thereafter, I was cleaning out the file drawer where I store manuscript-relevant stuff. When I submit something, I clean up all the piles on my desk and stick the critical stuff in this drawer. I was amazed at how many outlets so many of the papers had gone to before being accepted somewhere. It is not that I thought all my work was magically accepted immediately, but I had certainly forgotten the long journey of each individual paper. 

More recently, I was teaching a grad seminar on writing and we were inspired by a writing blog by Linda Sarnecka to create a rejection collection. We’ve invited many others to join us, with the thinking that we can't produce good work unless we get it out there, and in our profession, getting it out there almost always involves rejection.

Within my program, we also celebrate all the small steps that go into the big successes like accepted papers and funded grants. If we only announce those big outcomes, it obscures all the work that goes into them, like getting IRB approval, writing a computer program to collect data, finishing data collection, etc.

To your specific situation, failure is lumpy. It happens to all of us, but it can be clumpy and come in chunks. New ideas can also be riskier. Thus, what seems like a lot of failures in close proximity could just be random and/or could reflect pursuing big new ideas that sometimes take longer to refine.

Lisa Jaremka:  I have definitely been there myself, and I would venture to say the large majority of academics have been as well. I think a huge piece of the puzzle is talking about this publicly so we can de-stigmatize feeling this way. Academia is structurally set up to create this type of experience - high achieving people with incredibly high standards and tons of productivity all around us - so it’s no wonder we all feel like an imposter at one time or another. But at least if we talk about it we won’t feel alone. Plus talking about these experiences helps highlight how there isn’t necessarily something wrong with us if we feel this way; there are larger and more structural forces that are contributing.

Kimberly Rios: I feel you! I was scored decently on a big grant application last year, adopted all the reviewers’ suggestions, and didn’t even get discussed this year. It was soul-crushing. (Oh, and I recently got yet another grant rejection in which a reviewer called my perspective “naive” and accused me of “cherry-picking” theories to discuss. Ouch.) I’m admittedly still not the best at dealing with these setbacks, but it helps a little bit to remind myself that I don’t have as many funding options as my colleagues who study health, neuroscience, clinical, etc. Also, some of the papers of which I’m proudest have originated from rejected grant applications.

Margo Monteith: In addition to many excellent suggestions already ... I really try not to compare myself to others. I do sometimes think, “Wow, how are they getting so much done and apparently frequently with successful outcomes?” But I try to follow up with, “Great for the field!” and not, “That means I’m less than.” I do my best to do good work and try to keep remembering, “That’s enough, I’m enough.” I am sure that this is much easier to do after moving through the ranks, but I think it’s a good philosophy at any career stage, and that it actually helps me to do stronger research.

Michael Olson: I have never overcome imposter syndrome. I feel like a loose, sloppy thinker most of the time. I’m good at some things (writing) and bad at others (stats, addressing alternatives, grants). The biggest lie is that somehow we’re supposed to be good at all of it, and the biggest sin is to express vulnerability. Both are destructive norms. But in my deepest states of self-doubt, I remind myself that I am a pretty capable teacher, and in the long run, that’s probably where I’ll have had the most positive impact.

Alison Ledgerwood: My own pet strategies for dealing with the lumpy failures we all experience and then airbrush out of our public conversations (which is one reason I love this thread so much -- thank you):

1. Close my laptop, say "SCREW THIS, who wants to do it anyway," and go be someone else for a while (chef, gardener, parent, furniture fixer,* or whatever your go-to bins of self-complexity happen to be).

2. Celebrate the successes, multiple times. Like, each stage: paper submitted, paper gets a reject & resubmit, revision submitted, paper gets an R&R, revision submitted, paper gets an R&R, revision submitted, revision accepted, proofs are here, proofs submitted, paper appears online ... all involve some real celebration (like a nice dinner or bottle of wine or SOMETHING that makes me really pause and appreciate it).

Brian Nosek:  I love Alison Ledgerwood's point about defining celebrations along the way. We have incorporated this as much as possible -- celebrate the achievements of the things that we control. Data collection started - YES! finished - YES! manuscript written - YES! manuscript posted as a preprint - YES! With enough milestones defined and celebrated, the publication becomes a smaller and smaller marker of the success of the whole process.

Laurie O'Brien:  When I feel the sting of rejection and failure (which is often), I use many of the strategies others have mentioned. I also like to focus on trying to be a good teacher and mentor. These things are more easily under my control, and I can make a difference in the everyday lives of my students.

Michele Gelfand:  Feel your pain! It's going to sound a bit cheesy but for each project I have both learning goals and performance goals. You can't predict what will happen with publishing or grants, but I love learning and you can't take away all of the learning that happens on a project, whether it's learning a new statistical skill or method, a new literature, empowering a student, etc. I talk to my students about this distinction too to keep them sane!

Amanda Diekman: We also had a super fun and cathartic rejection party where we printed out our most scathing rejections, read them out loud, and burned them. It was comforting to hear how mean reviewers were to all of these amazing scholars, across disciplines, and of course lovely to see the words go up in flame.

Shira Gabriel: Thanks everyone!  You made me feel much better and gave me great ideas.  You also reminded me that it is almost always the right decision to sacrifice ego for social support.  In other words, when you are feeling insecure, tell the people around you.  You’ll find that everyone struggles, hear kind words, and you might even get great ideas for getting through the tough times.


Shira Gabriel is an Associate Professor at SUNY University at Buffalo who studies the need to belong and the social self, as well as an Associate Editor of Character and Context.  Dr. Gabriel feels like an expert at professional failure and is lucky enough to have many friends who are willing to share their own experiences.

Bad Science Evolves. Stopping it Means Changing Institutional Selection Pressures

Science is awesome, but it ain’t perfect. If you’ve been paying attention to the so-called “crises of reproducibility” in the behavioral, biomedical, and social sciences, you know that false positives and overblown effect sizes appear to be rampant in the published literature.

This is a problem for building solid theories of how the world works. In The Descent of Man, Charles Darwin observed that false facts are much more insidious than false theories. New theories can dominate previous theories if their explanations better fit the facts, and scientists, being human, love proving each other wrong. But if our facts are wrong, theory building is stymied and misdirected, our efforts wasted. If scientific results are wrong, we should all be concerned.

How does science produce false facts? Here’s a non-exhaustive list:

  • Studies are underpowered, leading to false positives and ambiguous results [1].
  • Negative results aren’t published, lowering information content in published results [2,3].
  • Misunderstanding of statistical techniques (e.g., misunderstanding of the meaning of p-values [4], incautious multiple hypothesis testing [5,6]) is pervasive, leading to false positives and ambiguous results.
  • Surprising, easily understood results are easiest to publish, putting less emphasis on reliable, time-consuming research that is perceived as “boring.”

These problems are well understood, and, in general, have been understood for decades. For example, warnings about misuse of p-values and low statistical power date to at least the 1960s [7,8]. We know these practices hinder scientific knowledge and lead to ambiguous, overestimated, and flat-out false results. Why, then, do they persist? At least three explanations present themselves.

  1. Incompetence: Scientists just don’t understand how to use good methods. Some of this may be going on, but it can’t be the full story. Scientists are, in general, pretty smart people. Moreover, a field tends to be guided by certain normative standards, at least in theory.
  2. Malicious fraud: Scientists are deliberately obtaining positive results, with a disregard for the truth, for personal gain. There is undoubtedly some of this going on as well (see, for example, the Schön fraud in physics, the Stapel fraud in social psychology, and this fascinating case of peer review fraud in clinical pharmacology).  However, I choose to believe that most scientists are motivated to really learn about the world.
  3. Cultural evolution: Incentives for publication and novelty select for normative practices that work against truth and precision. This is the argument I am presenting here, which Richard McElreath and I fleshed out in a recently submitted paper.  

The Natural Selection of Bad Science

The argument is an evolutionary one [9], and works essentially like this: Research methods can spread either directly, through the production of graduate students who go on to start their own labs, or indirectly, through adoption by researchers in other labs looking to copy those who are prestigious and/or successful. Methods that are associated with greater success in academic careers will, all else equal, tend to spread.

Selection needs some way to operationalize success – or “fitness,” the ability to produce “progeny” with similar traits. This is where the devilishness of many incentives currently operating in scientific institutions (such as universities and funding agencies) comes into play. Publications, and particularly high-impact publications, are the currency used in decisions related to hiring, promotions, and funding, along with related metrics such as the h-index [1]. This sort of quantitative evaluation is troublesome, particularly when large, positive effects are overwhelmingly favored for acceptance in many journals. Any methods that boost false positives and overestimate effect sizes will therefore become associated with success, and spread. McElreath and I have dubbed this process the natural selection of bad science.

The argument can extend not only to norms of questionable research practices, but also to norms of misunderstandings (such as with p-values), if such misunderstandings lead to success. Misunderstandings that do not lead to success will rarely be selected for.

An important point is that the natural selection of bad science requires no conscious strategizing, cheating, or loafing on the part of individual researchers. There will always be researchers committed to rigorous methods and scientific integrity. However, as long as institutional incentives reward positive, novel results at the expense of rigor, the rate of bad science, on average, will increase.

A Case Study

Statistical power refers to the ability of a research design to correctly identify an effect. In the early 1960s, Jacob Cohen noticed that psychological studies were dreadfully underpowered, and warned that power needed to dramatically increase in order for the field to produce clear, reproducible results [8]. In the late 1980s, two meta-analyses indicated that, despite Cohen’s warnings, power had not increased [10,11]. We recently updated this meta-analysis [1], and showed that in the last 60 years, there has been no discernible increase in statistical power in the social and behavioral sciences. It remains quite low: the average power to detect a small effect is only 0.24.

This result is consistent with our argument: that incentives for novel, positive results work against individual desires to improve research methods. This is not to say that all studies are underpowered, but it does indicate that the most influential methods, in terms of which methods are adopted by new scientists, may be those that are.
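
To get a feel for what numbers like these mean, here is a minimal sketch of a power calculation for a simple two-group comparison. It is my own illustration rather than an analysis from the studies cited above, and the sample size (30 per group) and “small” effect size (Cohen’s d = 0.2) are assumed values chosen only for the example.

```python
# Illustrative power calculation; the parameters are assumptions, not data from [1].
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with n = 30 per group for a small effect (Cohen's d = 0.2)
power = analysis.solve_power(effect_size=0.2, nobs1=30, alpha=0.05)
print(f"Power with n = 30 per group: {power:.2f}")   # roughly 0.12

# Sample size per group needed for the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_needed:.0f}")  # roughly 394
```

Detecting small effects reliably requires far larger samples than are typical in the literature, which is part of why average power can stay so low.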

A Computational Model

Although the case study is suggestive, it was important to us to demonstrate the logic of the argument more forcefully. So we built a computational model in which a population of labs studied hypotheses, only some of which were true, and attempted to publish their results. We assumed the following:

  • Each lab has a characteristic methodological power – its ability to correctly identify true hypotheses. Note: this is distinct from statistical power, in that it is a Gestalt property of the entire research process, not only of a particular analysis. If we make the overly simplistic but convenient assumption that all hypotheses are either true or false and all results are either positive or negative, then power is defined as the probability of obtaining a positive result given that one’s hypothesis is true.
  • Increasing power also increases false positives, unless effort is exerted. This represents the idea that one can increase the likelihood of finding a positive result in a cost-free way by using “shortcuts” that allow weaker evidence to count as positive, but increasing the likelihood of finding a true result by doing more rigorous research—such as by collecting more data, preregistering analyses, and rooting hypotheses in formal theory—is costly.
  • Increasing effort lengthens the time between results.
  • Novel positive results are easier to publish than negative results.
  • Labs that publish more are more likely to have their methods “reproduced” in new labs.

We then allow the population to evolve. Over time, effort decreased to its minimum value, and the rate of false discoveries skyrocketed.
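
To make the selection logic concrete, here is a drastically simplified toy version of this kind of model in Python. It is not the model from our paper; the base rate of true hypotheses, the payoff scheme, and the mutation step are all illustrative assumptions, chosen only to show how fitness-proportional copying of methods can erode effort even when no individual lab is cheating.

```python
# Toy evolutionary simulation of labs; all parameter values are illustrative assumptions.
import random

N_LABS = 100        # population size
BASE_RATE = 0.1     # proportion of tested hypotheses that are actually true
POWER = 0.8         # P(positive result | hypothesis is true)
GENERATIONS = 200
MUTATION_SD = 0.05

def false_positive_rate(effort):
    """Low effort lets weak evidence count as 'positive', so false positives rise."""
    return 0.05 + 0.45 * (1.0 - effort)

def run_generation(efforts):
    """Each lab tests one hypothesis and 'publishes' any positive result.
    Labs are then copied (with small mutations of effort) in proportion to
    their payoff: publications minus a small cost for exerting effort."""
    payoffs = []
    for effort in efforts:
        hypothesis_true = random.random() < BASE_RATE
        p_positive = POWER if hypothesis_true else false_positive_rate(effort)
        published = random.random() < p_positive
        payoffs.append(max(0.01, (1.0 if published else 0.0) - 0.2 * effort))
    new_efforts = random.choices(efforts, weights=payoffs, k=N_LABS)
    return [min(1.0, max(0.0, e + random.gauss(0, MUTATION_SD))) for e in new_efforts]

efforts = [random.random() for _ in range(N_LABS)]
for _ in range(GENERATIONS):
    efforts = run_generation(efforts)

mean_effort = sum(efforts) / N_LABS
mean_fpr = sum(false_positive_rate(e) for e in efforts) / N_LABS
print(f"Mean effort after {GENERATIONS} generations: {mean_effort:.2f}")
print(f"Mean false positive rate: {mean_fpr:.2f}")
```

Run repeatedly, the sketch behaves the way the full model does qualitatively: mean effort drifts toward its minimum and the false positive rate climbs, simply because low-effort labs publish more and are copied more often.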

Replication Isn’t Enough

But wait, what about replication? In general, replication is not a sufficient measure to prevent rampant false discovery. For one thing, many hypotheses are wrong, and so many replications may be necessary to ascertain their veracity [3] (here’s an interactive game we made to illustrate this point). But let’s put that aside for now. Replication surely helps to identify faulty results. Might incentives to replicate, and punishment for producing non-reproducible results, curb the natural selection of bad science?
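
Before setting the base-rate point aside, a small worked example helps show why it matters; the base rate, power, and false positive rate below are assumed numbers for illustration, not figures from [3].

```python
# Bayesian updating on repeated positive results (all parameters are assumptions).
BASE_RATE = 0.1   # prior probability that a tested hypothesis is true
POWER = 0.8       # P(positive result | hypothesis is true)
ALPHA = 0.05      # P(positive result | hypothesis is false)

prob_true = BASE_RATE
for k in range(1, 4):
    # Bayes' rule after observing one more positive (i.e., "successful") result
    positive_if_true = POWER * prob_true
    positive_if_false = ALPHA * (1 - prob_true)
    prob_true = positive_if_true / (positive_if_true + positive_if_false)
    print(f"After {k} positive result(s): P(hypothesis is true) = {prob_true:.2f}")
# With these assumptions: about 0.64 after one, 0.97 after two, 1.00 after three.
```

Under these assumptions, a single positive finding still leaves roughly a one-in-three chance the hypothesis is false; it takes several successful replications before its truth is close to settled.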

Our model indicates that such incentives to replicate won’t.

We gave labs the opportunity to replicate previously published studies, and let all such efforts be publishable (and be worth half as much “fitness” as the publication of a novel result). For the lab that published the original study, a successful replication boosted its value, but a failed replication was extremely punitive. In other words, we created a situation that was very favorable to replication. We found that even when the rate of replication was extremely high – as high as 50% of all studies conducted – the decline of effort and the rise of false discoveries were slowed, but not stopped. The reason is that even though labs with low effort were more likely to have a failed replication, and hence less likely to “reproduce” their methods, not all studies by low-effort labs were false, and among those that were, not all of them were caught with a failed replication. Even when the average fitness of high-effort labs was higher than that of low-effort labs, the fittest labs were always those exerting low effort.  

Moving Forward

Science is hard. It’s messy and time-consuming and doesn’t always (or even often) yield major revelations about the secrets of the universe. That’s OK. We do it because it’s absolutely amazing to discover new truths about the world, and also because the knowledge we gain is occasionally quite useful. Being a professional scientist is a nice job if you can get it, and the competition is stiff. Unfortunately, that means that not everyone who wants to be a scientist can get a job doing so, and not every scientist can get funding to carry out the project of their dreams. Some will succeed, and others will fail.

Mechanisms to assess research quality are essential. Problems occur when those mechanisms are tied to simple quantitative metrics, because those are usually subject to exploitation. This is true whether we’re talking about the number of publications, journal impact factors, or other “alt metrics.” When a measure becomes a target, it ceases to be a good measure.

This idea is often understood in the sense that savvy operators will respond to incentives directly, by changing their behaviors to increase their performance on the relevant measures. This surely happens. But a cultural evolutionary perspective reveals that quantitative incentives are problematic even if individuals are motivated to disregard those incentives. If the system rewards those who maximize these metrics, whether they do so intentionally or not, the practices of those individuals will spread.

This means that it’s not enough to simply look at bad practices and say “Well, I don’t do that, so I’m fine.” We need to look at the institutional incentives – the factors that influence hiring, promotion, and funding decisions – and make sure they are rewarding the kinds of practices we want to spread.

Exactly what those practices are is open to debate. But they will involve rewarding quality research over flashy results. I think recent trends toward open science and reproducibility are good signs that there is widespread motivation to solve this problem, and progress is being made. I also suspect it will take time to fully effect the kind of changes we need.  Such changes need to come from early career scientists, who are in a position to set new standards for the generation and testing of hypotheses.

In his 1974 commencement address at Caltech, Richard Feynman characterized the problem quite clearly, illustrating that it was as persistent then as it is today:

It is very dangerous… to teach students only how to get certain results, rather than how to do an experiment with scientific integrity. … I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom. 

May we all someday have that freedom. 


Paul E. Smaldino is Assistant Professor of Cognitive and Information Sciences at the University of California, Merced. Website: http://www.smaldino.com/wp

References:

[1] Smaldino PE, McElreath R (2016) The natural selection of bad science. arXiv:1605.09511

[2] Franco A, Malhotra, N, Simonovits G (2014) Publication bias in the social sciences: Unlocking the drawer. Science 345: 1502–1505.

[3] McElreath R, Smaldino PE (2015) Replication, communication, and the population dynamics of scientific discovery. PLOS ONE 10(8): e0136088.

[4] Wasserstein RL, Lazar NA (2016) The ASA’s statement on p-values: Context, process, and purpose. American Statistician 70(2): 129–133.

[5] Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22: 1359–1366.

[6] Gelman A, Loken E (2014) The statistical crisis in science. American Scientist 102(6): 460–465.

[7] Meehl PE (1967)  Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science 34: 103–115.

[8] Cohen J (1962) The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology 65(3): 145–153.

[9] That is, it is based on ideas from well-supported theories of cultural evolution. For an introduction, see books by Robert Boyd & Peter Richerson, Alex Mesoudi, and Joe Henrich.

[10] Sedlmeier P, Gigerenzer G (1989) Do studies of statistical power have an effect on the power of studies? Psychological Bulletin 105(2): 309–316.

[11] Rossi JS (1990) Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology 58(5): 646–656.

Curiosity About Religion Is Viewed as Morally Virtuous, New Research Finds

People from diverse religious backgrounds in the United States view curiosity about religion as morally virtuous, according to new research published in Social Psychological and Personality Science. Atheists also view this curiosity as moral, although less moral than a lack of religious curiosity.

Previous research has examined what makes people curious and how curiosity helps people learn new information, but psychologists know less about how displaying curiosity is viewed by other people. The current research finds that people look favorably on those who show curiosity about religion and science.

"People who display curiosity—about religion or science—are viewed as possessing other moral character traits," says lead author Cindel White, of York University. "We found that observers perceive curious people as willing to put in effort to succeed in life, and observers perceive putting in effort to learn as morally virtuous."

Dr. White and her co-authors asked 1,891 participants to make moral judgments about people who exhibited curiosity, possessed relevant knowledge, or lacked both curiosity and knowledge about religion and science. Participants attributed greater moral goodness to those who displayed curiosity, a trend which was consistent across Jewish, Protestant, Catholic, and other Christian participants.

"Religious people in the United States can be perceived as, or associated with movements that are, anti-science and dogmatically unquestioning of religious doctrines," Dr. White says. "However, religious participants that we surveyed typically approved of asking questions about science, one's own religion, and other people's religions, indicating general approval of people who desire to learn more about religious and scientific questions."

Dr. White notes that the researchers measured observers' perceptions of people who are curious, not what predicts curiosity or how people's levels of curiosity are associated with their actual levels of effort or moral character. The current research also focuses on participants in the United States—White would like to see future studies involve people in a wider array of countries.

In other studies, Dr. White and her colleagues are testing how children between five and eight years old evaluate curiosity about religion and science. The team is finding that young children also positively evaluate and reward curiosity, but more research in this area is needed in order to understand the factors at play in this phenomenon.

 "There are likely to be certain questions of inquiry, cultural contexts, or settings of intergroup conflict where curiosity signals negative traits, such as disloyalty to one's ingroup," Dr. White explains.

--

Press may request an embargoed copy of this article by contacting [email protected].

Study: White, Cindel J.M.; Mosley, Ariel J.; Heiphetz Solomon, Larisa. Adults Show Positive Moral Evaluations of Curiosity About Religion. Social Psychological and Personality Science.

Welcoming the New Character & Context Editorial Team

As of July 1, a new editorial team has taken the reins of SPSP's flagship blog—Character & Context. SPSP is excited to welcome new Editor-in-Chief Jennifer Crocker and her Associate Editors, Andy Luttrell and Julie Garcia!

Character & Context explores the latest findings from research in personality and social psychology. The blog's topics span the full range of human experience, such as aggression, romantic attraction, prejudice, emotions, morality, persuasion, friendship, helping, conformity, decision-making, and group interaction, to name just a few. The editorial team will be reviewing and refining new submissions, guiding authors through the editing process.

We would also like to thank outgoing Editor-in-Chief Judith Hall and her Associate Editors H. Colleen Sinclair and Leah Dickens for dedicating their time and energy to ensuring the success of Character & Context. SPSP is excited to build on that momentum with the new editorial team.

Meet the new editorial team below and please reach out to [email protected] or Jennifer Crocker at [email protected]. If you are interested in submitting a post for Character & Context, please contact Dr. Crocker directly at the email address listed above.

Editor-in-Chief

Jennifer Crocker

Jennifer Crocker is a Professor and Ohio Eminent Scholar in Social Psychology, Emerita, at the Ohio State University. Dr. Crocker has made seminal contributions in two distinct research fields within social psychology: Social Stigma and work on Self and Identity. These areas are linked by Dr. Crocker's focus on how people strive to gain and maintain self-esteem and the ensuing consequences of these strivings. Currently, Dr. Crocker studies self-esteem, contingencies of self-worth, and the costs of pursuing self-esteem as a goal.

Dr. Crocker has served in a variety of leadership roles at SPSP, including her tenure on the Executive Committee and her term as president of the Society. She has also served as president of the Society for the Psychological Study of Social Issues (SPSSI), the International Society for Self and Identity, and Divisions 8 and 9 of APA. Dr. Crocker is also highlighted on SPSP's Heritage Wall of Fame, which honors those who have made a significant impact in personality and social psychology.

Associate Editors

Julie Garcia

Julie Garcia is a Professor in the Psychology and Child Development Department, and Faculty Director of Program Improvement in Academic Programs and Planning at California Polytechnic State University, San Luis Obispo. Her research focuses on the situational cues that inform social identity meanings, and how people cope when these cues suggest possible devaluation. As a whole, Dr. Garcia's research aims to improve the lives of others by finding solutions that could improve intergroup dynamics, enhance representation in STEM, and foster adaptive negotiation between multiple social identities.

In addition to her role on the Character & Context Editorial team, Dr. Garcia serves on SPSP's Board of Directors as Member at Large, Outreach and Advocacy.

Andy Luttrell

Andy Luttrell is an Associate Professor of Psychological Science at Ball State University. His research centers on people's opinions and attitudes, including when and how they change. Dr. Luttrell is especially interested in what happens when people moralize their attitudes and in how moral persuasive rhetoric can sometimes be compelling and sometimes backfire. He also studies the feeling of ambivalence and the stability of people's opinions over time. This research has examined many different opinions, including attitudes toward social, environmental, political, and consumer issues.

Dr. Luttrell is also the host of Opinion Science, a podcast that explores the science behind our opinions, where they come from, and how they change.

--

Please join SPSP in welcoming the new editorial team of Character & Context! SPSP looks forward to working with Drs. Crocker, Garcia, and Luttrell to raise awareness of new and compelling research in personality and social psychology.

 

Laura Van Berkel

Laura Van Berkel is a Behavioral Economist at the U.S. Agency for International Development (USAID). She advises and supports the agency on the use of behavioral science and experimental methods in USAID programming. She earned her PhD in social psychology from the University of Kansas and BA in psychology from Saint Louis University. Dr. Van Berkel's previous experience also includes the University of Cologne, the National Science Foundation, and Democracy International, Inc.
 

What do you appreciate about SPSP?

I appreciate that SPSP has been responsive to the growing demand for information about nonacademic career paths, including professional development panels with professionals in for-profit, nonprofit, and government sectors at the annual convention. These panels helped me explore potential career paths while I was a graduate student and have only improved over time!
 

Do you have any advice for individuals who wish to pursue a similar career path in social psychology?

Apply for the APA Executive Branch Science Fellowship or the AAAS Science and Technology Policy Fellowship! Both are fantastic opportunities to explore careers applying psychological science to federal policy while gaining access to professional development opportunities and a supportive network of fellow scientists turned policy wonks. A growing number of U.S. states offer science-policy fellowships as well (e.g., California, Missouri, and New Jersey).

Beyond fellowships, it is very important to make connections and network. It is advice nobody wants to hear, but networks really are so important for finding new opportunities and exploring your career options. Reach out to people with your "dream job" for informational interviews—you may be surprised at how many people are willing to share their experiences and advice!
 

How has your identity affected your career?

Gender and politics are inextricably linked for me. Some of my earliest political memories are of the Clinton impeachment trial and the (often hostile) media coverage of Hillary Clinton as a First Lady and political figure. At the same time, my mom highly valued and prioritized civic engagement while I was growing up. I went to the polls with her every Election Day, and she even allowed me to punch her choices for her (pre-"hanging chad" days). She instilled in me a great sense of political agency that contrasted with the backlash, (threats of) violence, gender stereotypes, and other barriers women face to full and equal political participation and representation. I am passionate about increasing women's political power, and that has become an important part of my career goals, first through my research in academia and now in my current work in international development.
 

What are you most proud of in your career?

I'm most proud of the small part I've played to advance women's rights. This has included leading the creation of a global survey on women's political participation and leadership (including perceived norms, gender roles, and behaviors) to inform future programming, and supporting local organizations in their work to prevent and respond to gender-based violence using insights from behavioral science.


What career path would you have chosen if you had decided not to pursue psychology?

I think I'm actually living my "alternative" career path in international development. I didn't have this career path in mind when I pursued psychology as an undergraduate or graduate student, and I really lucked into being in the right place at the right time to find a career that is ideal for my interests. If I had never pursued psychology at all, and thus never ended up on my current winding path, I might have stuck with my early interests in photojournalism or marketing.
 

Outside of psychology, how do you like to spend your free time?

I enjoy spending time with my son and husband, traveling, practicing embroidery, and exploring the many museums, parks, and restaurants around Washington, DC.

 

Science from a Distance

Ever wondered what it is that scientists actually do day in and day out? What impact science really has in the real world, right here and right now? For some people, science can certainly seem hopelessly abstract, unclear, and difficult to relate to everyday life. Our new research shows that perceiving science as distant in this way is not uncommon and can shape whether people accept scientific facts.

Even though 9 out of 10 people globally say they generally trust science, getting people to agree on the science is difficult when the science in question has implications that clash with someone's worldview or identity. Four in 10 people even explicitly state that personal beliefs take precedence over scientific facts by agreeing with the statement "I only believe science that aligns with my personal beliefs." This is evident in public divisions around the most important issues facing our societies, like fighting climate change and (future) pandemics.

Personal beliefs that hinder science acceptance differ a lot depending on the domain. Religious people find evolution difficult to accept because it implies that humans were not created by a deity. Political conservatives, especially in the United States, do not trust climate science because it clashes with messages coming from their political representatives. Spiritual people are more distrustful of vaccines. Overall, science rejection has so far been linked to diverse predictors, with little overlap between domains. Whether a common thread nonetheless runs through them is what we focused on in our studies.

Is There a Common Thread in Science Rejection Across Domains?

In our new research, we set out to find one. We predicted that people for whom science feels more psychologically distant (an unclear process with no direct relevance to their lives) would report attitudes and behaviors that are less in line with scientific evidence. Conversely, when people perceive science as closer, they would accept scientific findings to a greater extent.

We first developed a questionnaire to measure perceptions of distance to science, which we called the Psychological Distance to Science Scale. We asked people (1) to what extent they regard scientists as similar to themselves (social distance); (2) whether they perceive science as present in their local community (spatial distance); (3) whether they perceive science as present in the current time (temporal distance); and (4) whether they perceive science to be applicable and impactful in their everyday life (hypothetical distance).

Then, in several studies, we found that people who perceive science as distant from themselves and the real world reported less acceptance of climate change, vaccination, genetic modification, and evolution. This remained true even after we took into account other factors implicated in science rejection, such as political, religious, and conspiracy beliefs, as well as people's actual science knowledge and their general negative perceptions of science as corrupt or flawed. Thus, feeling personally distant from science stood out as a predictor of science rejection across domains.

It is worth noting that, among the different aspects of distance perceptions, perceiving science as having few practical implications and effects on the real world (that is, hypothetical distance to science) was consistently related to higher science rejection across all domains. This suggests that acknowledging and emphasizing the presence of science in many aspects of life could be key to science acceptance across a variety of topics.

Finally, we wanted to know whether Psychological Distance to Science matters for a person's actual behavior related to science, not just their opinions. We chose getting vaccinated against COVID-19 as a consequential behavior to examine. In November 2021, we asked people who had participated in our studies in early 2021 whether they were vaccinated. We again took into account worldviews and their actual science knowledge, and found that higher psychological distance to science, reported several months before it was possible to receive a vaccine, was related to a lower likelihood of being fully vaccinated.

We think that practical good can come from this research, because we have identified a common contributor to science rejection that goes beyond diverse personal beliefs, which are difficult to change. Getting people on the science side of the debate across polarizing topics, such as climate change and vaccination, may depend on changing how people see science relative to themselves and their own lives. We are now working to figure out how best to build these insights into science communication and bring science closer to the people!


For Further Reading

Većkalov, B., Zarzeczna, N., McPhetres, J., van Harreveld, F., & Rutjens, B. T. (2022). Psychological Distance to Science as a predictor of science skepticism across domains. Personality and Social Psychology Bulletin. https://doi.org/10.1177/01461672221118184

Rutjens, B. T., Sengupta, N., van der Lee, R., van Koningsbruggen, G. M., Martens, J. P., Rabelo, A., & Sutton, R. M. (2022). Science skepticism across 24 countries. Social Psychological and Personality Science, 13(1), 102–117. https://doi.org/10.1177/19485506211001329
 

Bojana Većkalov is a PhD Candidate at the Social Psychology group at the University of Amsterdam, where she is investigating science rejection and ways to counter it. She is broadly interested in the structure, antecedents, and consequences of belief systems.

Natalia Zarzeczna is a post-doctoral researcher at the Social Psychology group of the University of Amsterdam. She is interested in belief systems, stereotypes, and prejudice.

Bastiaan T. Rutjens is an Assistant Professor of Social Psychology at the University of Amsterdam. His research interests are in social and cultural psychology, within which he focuses on the psychology of belief systems and worldviews.

APA Advocacy Activities and Resources

SPSP asked Craig Fisher, the Senior Legislative and Federal Affairs Officer at the American Psychological Association, how APA has been responding to the current political administration and budget. He provided the resources and information below, which we thought would be of interest to SPSP members:


Resources to Advocate for Science:

Recent health care legislation and budget-related advocacy:

APA Science Directorate General Resources:

  • The APA Science Government Relations 2016 Annual Report highlights the activities of the government relations team from last year
  • The APA Psychological Science Agenda is the monthly e-newsletter of the Science Directorate
  • The APA Science Advocacy Blog covers the budget, appropriations, and other issues relevant to federal funding for psychological science
  • The APA Science Advocacy Toolkit provides information on advocacy for psychological science
  • The Federal Action Network provides federal updates and action alerts
  • The APA Science Directorate is on Twitter at @APAScience

We hope some of these resources will prove informative and helpful to you.