Instead of having to endure dinner conversations with cantankerous family members whose political views we detest, online we can choose to “follow,” “friend,” and “like” only those people with whom we agree. A great deal of research has shown that these kinds of choices can skew our online environments in ways that reinforce and entrench our existing beliefs. But what happens when that pesky uncle you try to avoid at Thanksgiving starts commenting on your Twitter posts? There is much less research on what happens when we do encounter information and people with whom we disagree online. That’s what our research sought to investigate.

Many social media platforms offer tools that allow people to “hide” and “delete” other people’s comments from their feeds. When that’s not enough, people can even choose to “unfriend,” “mute,” or “block” others, preventing unwanted interactions entirely. These tools can certainly be useful for dealing with harassment and trolls. (If only there were a similar mute button at Thanksgiving dinner!) However, such tools can also be used to insulate people from legitimate opposing viewpoints. In our study, we examined whether people disproportionately censor comments that go against their views on a political topic, even when the content of those comments is not offensive.

We asked people to indicate whether they were pro-choice or pro-life and the extent to which their position on abortion was an integral part of their identity. Two weeks later, in a seemingly unrelated task, we asked the same participants to help us moderate comments from an online blog run by the university. Their job was to flag inappropriate comments for removal from the blog. Unbeknownst to the participants, the blog was not real, and we wrote all of the comments, modeling them on real Facebook and Reddit comments.

We found that pro-choice people were more likely to recommend pro-life comments for deletion than pro-choice comments. Likewise, pro-life people were more likely to recommend pro-choice comments for deletion than pro-life comments. This selective censoring of comments contrary to participants’ own viewpoints occurred even when neutral third-party raters had judged the comments to be inoffensive.

Some people were more likely than others to selectively censor opposing comments, but not the people you might think! We did not find consistent evidence of more biased censoring among conservatives or among liberals. Instead, what amplified the likelihood that someone would censor opposing comments was the extent to which the issue was an integral part of their identity. Participants who felt their identity was “fused” with their position on abortion were the most likely to selectively censor opposing comments.

These effects were not limited to the topic of abortion. The same pattern emerged in a follow-up study in which we focused on gun rights. Gun control supporters were more likely to censor pro-gun comments, while gun rights supporters were more likely to censor anti-gun comments.

We still have much to learn about how these processes play out in real life. How frequently do people spontaneously censor content when they are not explicitly asked to moderate others’ comments? To what extent do social media algorithms learn and amplify the biases users express when they selectively censor opposition? And what are the downstream consequences of censoring, not just for the person doing the censoring but also for society as a whole?

What we do know from this research is that people are not passive participants who consume whatever they may encounter in their media diets. Rather, people actively curate their online environments by censoring information that opposes their viewpoints.


For Further Reading

Ashokkumar, A., Talaifar, S., Fraser, W. T., Landabur, R., Buhrmester, M., Gómez, Á., ... & Swann, W. B., Jr. (2020). Censoring political opposition online: Who does it and why. Journal of Experimental Social Psychology, 91, 104031. https://doi.org/10.1016/j.jesp.2020.104031 (full text: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7415017/)

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132. https://doi.org/10.1126/science.aaa1160

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science, 26(10), 1531-1542. https://doi.org/10.1177/0956797615594620

Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59(1), 19-39. https://doi.org/10.1111/j.1460-2466.2008.01402.x
 

Sanaz Talaifar is a postdoctoral scholar at Stanford University in the Graduate School of Business. Her research interests include self and identity, power and politics, and digital media.

Ashwini Ashokkumar is a doctoral student in social and personality psychology at the University of Texas at Austin. Her research interests include group processes, social dynamics, and language analysis.

Bill Swann is the William Howard Beasley Professor of Management and Professor of Psychology at the University of Texas at Austin. His research focuses on identity, relationships, and group processes, including questions such as why people engage in extreme behaviors like terrorism.