The Opposite of Scientific: Why People Sometimes Prefer Untestable Beliefs
By Justin Friesen
Science is built on testability, but our research suggests that for some personally important beliefs, such as belief in God's existence or in a favored politician's performance, it may be more psychologically useful to hold positions that cannot be tested. Our findings offer new insight into how people deal with facts, along with the intriguing, if unsettling, possibility that sometimes people do not want their belief systems to be accountable to facts at all.
Of course, sometimes people do want to hold accurate beliefs. However, a century of research shows that certain cherished beliefs are held in part because they make people feel good about themselves or their groups, or can even make people feel symbolically immortal. Past research on selective skepticism has found that people deny individual facts they don’t like and discredit information sources they don’t like. We propose and test for a new way by which people protect their beliefs—by diminishing the importance of facts altogether and making their beliefs less testable.
Most scientists probably hoped that as it became easier to scientifically test and empirically analyze issues from physics to psychology to policy, bias might decline in the face of facts. However, we find that human bias may have an ace up its sleeve: people may simply remove scientific testability from their belief systems, rendering facts useless. If scientists fail to understand how this untestability bias operates in the human mind, the power of science to change human knowledge may often go to waste.
We conducted four experiments (with 575 participants in total) on religion and politics, two domains with strong psychological significance. First, we found that when highly religious participants were told that belief in God would always be untestable, they subsequently reported stronger religious beliefs than religious participants who were told that science might one day be able to investigate religious ideas.
Next we turned to politics. Testability in politics is not something people reflexively think about, so we postulated that highlighting the testability of certain criteria for political success (e.g., job creation rates) might reduce the extremity of political beliefs. We told an online sample of American adults that President Obama's performance on certain issues, such as job creation, could be looked up online at any moment. When people were told this, they expressed less extreme opinions of the president's current performance: His opponents tempered their criticism and, to a lesser extent, his supporters toned down their praise.
Together, these two experiments suggest that unfalsifiable beliefs might be expressed in more extreme ways, or that testability reins people in.
Next, we tested whether people might actually shape their beliefs to be more unfalsifiable as a defensive strategy. First, we randomly assigned some religious participants to read a passage critical of religious belief (arguing that the discovery of the Higgs boson eliminates society's need for God) and other religious participants to read a non-critical passage. We then asked them to rate the importance of different reasons for religious belief. Participants whose beliefs had been criticized emphasized the more untestable reasons for believing in God (e.g., that God is necessary for living a moral life) over the more testable reasons (e.g., archeological or scientific evidence).
Finally, we recruited people who supported or opposed same-sex marriage and showed them a passage that threatened or affirmed their position. People on both sides of this debate reported that the rightness of same-sex marriage was more unfalsifiable—a question of moral opinion instead of testable facts—after reading the passage that was unsupportive of their beliefs. In other words, people might be removing the relevance of facts from public discourse altogether, in order to preserve their beliefs.
The motivated use of untestability may be at work in the debate around the Affordable Care Act. Writing for New York Magazine, Jonathan Chait argued that the initial arguments against the ACA rested on testable propositions: that it would only sign up people who already had insurance, that it would not reduce the number of uninsured, and that premiums would skyrocket. The data suggest that all three predictions were incorrect.
Have major opponents of the ACA changed their minds? It doesn't seem so. Instead, Chait suggests that "conservative objections to Obamacare are finally turning from the practical to the philosophical." These are arguments like, "We shouldn't use other people's money on health insurance." We would say their arguments have shifted toward being unfalsifiable.
In one sense, making an unfalsifiable statement is a safer political strategy, because no one can disprove "Obamacare is immoral." The shift to more untestable objections means that opponents of the ACA can continue to cling to their beliefs without worrying about any pesky facts cropping up again in the future.
Our studies don't try to say whether Obamacare is a beneficial law or not. However, as psychological researchers we can say that the discussion around Obamacare and many other issues (e.g., college admissions, vaccinations) often involves the selective use of untestability, and this may be a telling example of how, in part, polarization perpetuates itself. It is one thing to say that someone else's facts are wrong; it is quite another to say that facts are irrelevant altogether, or at least to imply as much through an argument free of testability.
Understanding how untestability plays out in politics, personal interactions, and within people’s heads is potentially an important task for future psychological research. It’s a great feeling to know that no one can prove you wrong about something you care about, but it might also lead to the stagnation of knowledge—if we’ve already marginalized the new facts that science can provide.
Justin Friesen, Ph.D., (University of Waterloo) is a postdoctoral associate at York University. He researches psychological mechanisms of why and how people support inequality, ideologies, and intergroup differences.
Troy Campbell is a Ph.D. candidate at Duke University. He focuses on how identity and beliefs are at the center of people's consumption experiences, social interactions, and reactions to marketing.