Hidden but Widespread Gender Biases Emerge in Millions of Words


Language pervades every aspect of our daily lives. From the books we read to the TV shows we watch to the conversations we strike up on the bus home, we rely on words to communicate and share information about the world around us. Not only do we use language to share simple facts and pleasantries, we also use language to communicate social stereotypes, that is, the associations between groups (for example, men/women) and their traits or attributes (such as competence/incompetence). As a result, studying patterns of language can provide the key to unlocking how social stereotypes become shared, widespread, and pervasive in society.

But the task of looking at stereotypes in language is not as straightforward as it might initially seem. Especially today, it is rare that we would hear or read an obviously and explicitly biased statement about a social group. And yet, even seemingly innocuous phrases such as “get mommy from the kitchen” or “daddy is late at work” connote stereotypes about the roles and traits that we expect of social groups. Thus, if we dig a little deeper into the relatively hidden patterns of language, we can uncover the ways that our culture may still represent groups in biased ways.

Using Computer Science to Uncover Hidden Biases

Recent advances in computer science methods (specifically, in the area of Natural Language Processing) have shown the promise of word embeddings as a tool to uncover hidden biases in language. Briefly, the idea behind word embeddings is that word meaning can be represented as a “cloud” in which every word is placed according to its meaning. We place a given word (let’s say “kitchen”) in that cloud by looking at the words it co-occurs with in similar contexts (in this case, perhaps “cook,” “pantry,” “mommy,” and so on). With millions to billions of words to analyze, we eventually arrive at an accurate picture of word meaning in which words that are close in meaning (like “kitchen” and “pantry”) sit close together in the cloud. Once we’ve achieved that, we can answer even more detailed questions, such as whether “mommy” is placed closer in meaning to “kitchen” or to “work.”
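The “closeness” described above is typically measured with cosine similarity between word vectors. Here is a minimal sketch using made-up three-dimensional vectors; real embeddings have hundreds of dimensions and are learned from co-occurrence statistics, so these numbers are purely illustrative:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: values near 1.0 mean 'close in meaning'."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy vectors; real embeddings are learned from millions of words.
vectors = {
    "kitchen": [0.9, 0.1, 0.3],
    "pantry":  [0.8, 0.2, 0.2],
    "work":    [0.1, 0.9, 0.4],
    "mommy":   [0.7, 0.3, 0.3],
}

# Words used in similar contexts end up close together in the "cloud"...
print(cosine_similarity(vectors["kitchen"], vectors["pantry"]))  # high similarity

# ...which lets us ask: is "mommy" placed closer to "kitchen" or to "work"?
to_kitchen = cosine_similarity(vectors["mommy"], vectors["kitchen"])
to_work = cosine_similarity(vectors["mommy"], vectors["work"])
print(to_kitchen > to_work)
```

With these invented vectors, “mommy” comes out closer to “kitchen” than to “work,” which is exactly the kind of relative-distance question the stereotype analyses below ask at scale.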

Using these and other tools, my colleagues and I saw the potential to provide some of the first systematic insights into a long-standing question of the social sciences: just how widespread are gender stereotypes really? Are these stereotypes truly “collective” in the sense of being present across all types of language, from conversations to books to TV shows and movies? Are stereotypes “collective” in pervading not only adults’ language but also sneaking into the very early language environments of children? Although evidence for such biases has long been documented by scholars, our computer science tools allowed us to quantify the biases at a larger scale than ever before.

To study stereotype pervasiveness, we first created word embeddings from texts across seven different sources produced for adults or children, including classic books (from the early 1900s), everyday conversations between parents and children or between two adults (recorded around the 1990s), and contemporary TV and movie transcripts (from the 2000s), ultimately totaling over 65 million words. Next, we examined the consistency and strength of gender stereotypes across these seven very different sources of language. In our first study, we tested a small set of four gender stereotypes that have been well studied in previous work and thus might reasonably be expected to emerge in our data. These were the stereotypes associating:

  • men-work/women-home
  • men-science/women-arts
  • men-math/women-reading
  • men-bad/women-good
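How might one quantify a stereotype such as men-work/women-home from embeddings? A common approach, in the spirit of the embedding-association tests this line of work builds on, compares a word’s average similarity to one attribute set against its average similarity to another. The sketch below uses invented two-dimensional vectors purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(word_vec, attribute_a, attribute_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive = the word leans toward A; negative = it leans toward B."""
    mean_a = sum(cosine(word_vec, v) for v in attribute_a) / len(attribute_a)
    mean_b = sum(cosine(word_vec, v) for v in attribute_b) / len(attribute_b)
    return mean_a - mean_b

# Invented toy vectors standing in for attribute words.
work = [[0.9, 0.1], [0.8, 0.3]]   # e.g., "office", "job"
home = [[0.1, 0.9], [0.2, 0.8]]   # e.g., "kitchen", "household"
man, woman = [0.7, 0.2], [0.3, 0.9]

print(association(man, work, home))    # positive: "man" leans toward work words
print(association(woman, work, home))  # negative: "woman" leans toward home words
```

Applied to real embeddings trained on each of the seven text sources, a score like this (averaged over many gendered words and attribute words) is what lets stereotype strength be compared across corpora.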

Stereotypes Really Are Everywhere In Our Language

Even though our seven kinds of texts differed in many ways, we found pervasive evidence of gender stereotypes. All four gender stereotypes were strong and significant. Moreover, there were no notable differences between child and adult language, across domains of stereotypes, or even between older and newer texts. To us, this consistency was especially remarkable because even speech produced by children (as young as 3 years old!) and speech from parents to those young children revealed gender stereotypes that had not previously been documented at such a large scale at such young ages.

Having shown the pervasiveness of these four well-studied stereotypes, we next turned to gender stereotypes for more than 600 traits and 300 occupation labels. Here, we found that 76% of traits and 79% of occupations revealed meaningful associations with one gender over the other, although not all were large in magnitude. Gender stereotypes of occupations were stronger in older texts than in newer texts, and gender stereotypes of traits were stronger in adult texts than in child texts. And yet, we also saw continued evidence of consistency. For instance, across most of our seven kinds of texts, the occupations “nurse,” “maid,” and “teacher” were stereotyped as female, while “pilot,” “guard,” and “excavator” were stereotyped as male.

By bringing together both the unprecedented availability of massive amounts of archived naturalistic texts, and the rapid advances in computer science algorithms to systematically analyze those texts, we have shown undeniable evidence that gender stereotypes are indeed truly “collective” representations. Stereotypes are widely expressed across different language formats, age groups, and time periods. More than any individual finding, however, this work stands as a signal of the vast possibilities that lie ahead for using language to uncover the ways that biases are widely embedded in our social world.


For Further Reading

Charlesworth, T. E. S., Yang, V., Mann, T. C., Kurdi, B., & Banaji, M. R. (2021). Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychological Science, 32(2), 218–240. https://doi.org/10.1177/0956797620963619

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230


Tessa Charlesworth is a Postdoctoral Research Fellow in the Department of Psychology at Harvard University where she studies the patterns of long-term change in social cognition.

 

4 Tips for Critically Evaluating Data in the Media

A quick Google search will yield millions of results on the importance of critical thinking for students, for individuals in the workplace, and, simply, for human beings existing in this world. News articles often present scientific data and make claims based on it (especially now in the wake of COVID-19), but how often do we pause and think critically about the data being presented?

Confirmation bias is the tendency to be more responsive and sensitive to information that aligns with our pre-existing beliefs (Nickerson, 1998). It can lead us to accept claims allegedly driven by data because they already fit how we see the world. The flip side is instantly rejecting claims, and reputable data, because the stated conclusion challenges or contradicts our beliefs. So, how can we safeguard ourselves (and our friends and families) from falling into this trap?

Here are some tips for you (and for your family and friends who may be less familiar with interacting with scientific data).

Be wary of causal language in headlines and read the full article

As researchers, we know the ability to make causal assertions in scientific studies is ultimately limited. However, we can still be swayed by flashy headlines; in reading the full article, we may notice that reporters are extrapolating beyond what the researchers found. So, let’s return to the basics: it’s hard to determine causality. Survey and observational studies provide important information about whether variables are related and lay the foundation for predicting the nature of those relationships, but they do not give us information about causality. To truly determine cause and effect, researchers need tight control over their study to rule out possible alternative explanations for the relationship they observe. This tends to involve random assignment of participants to experimental conditions and careful manipulation of one variable at a time. This information cannot be gleaned from a headline alone. By reading the full article, we get a better sense of how the data was collected and, therefore, how much leeway we have in talking about it.

Spend time looking at the graphs and figures presented

Figures can be misleading. Pay attention to the vertical scale (i.e., the Y-axis) and note whether it starts somewhere other than zero, skips numbers, or appears stretched out (i.e., too big) or shrunk down (i.e., too small). Changes in scaling can produce vastly different images. See the example images below (generated from fictitious data). It’s also important to note whether the data in graphs and figures represent raw data, averages, percentages, estimated means, or future predictions. Unless we slow down to think critically about the type of data we’re looking at, we risk drawing erroneous conclusions.

Two bar graphs of the same fictitious data, drawn with different Y-axis scales

Notice how in the first graph the differences between conservatives, moderates, and liberals seem quite drastic compared to the second graph. However, the only difference between the two graphs is the scaling along the Y-axis. Also, the second graph indicates that we’re looking at group averages for endorsement of Policy A and that response options ranged from 1 to 10.
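The distortion from a truncated Y-axis can be made concrete: the drawn height of a bar is proportional to (value − axis start), so the visual ratio between two bars depends on where the axis begins. A small sketch, with group means invented for illustration:

```python
def apparent_ratio(value_a, value_b, axis_start):
    """Ratio of drawn bar heights when the Y-axis starts at axis_start.
    Bar height on screen is proportional to (value - axis_start)."""
    return (value_a - axis_start) / (value_b - axis_start)

# Invented group means on a 1-10 endorsement scale.
liberals, conservatives = 6.0, 5.5

# Axis starting at the scale minimum: the bars look nearly equal.
print(apparent_ratio(liberals, conservatives, axis_start=1))   # ~1.1

# Axis truncated to start at 5: one bar is now drawn twice as tall.
print(apparent_ratio(liberals, conservatives, axis_start=5))   # 2.0
```

The underlying half-point difference never changes; only the axis start does, yet the second rendering makes the gap look twice as large as the other group’s bar rather than barely noticeable.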

Check your own biases

Social media news feeds are a key news source in today’s world. However, the news articles that appear in social media feeds are a combination of the actions of editors and journalists producing the content and site-driven algorithms that rely upon users’ previous interactions on the site and the activity of the members in their social network (Fletcher & Nielsen, 2019). As a result, we can be more likely to see articles that already align with our worldview. When news articles appear in our feeds, we should ask ourselves a few questions:

  • Who is sharing this information? Whether it’s someone who shares your worldview or someone whose worldview differs drastically, our feelings about that person and their views can lead us to rush to judgment about the news article they’ve shared. Slowing down to consider how you feel about the person may allow you to be more thoughtful about the article they’re sharing and its conclusions.
     
  • What website does this article come from? Not all news sites are created equally. Pay attention to the article’s host website and whether the piece is an investigative report or an opinion/editorial.
     
  • Does this evoke a strong emotional reaction in me? If so, why? Emotional reactions heavily influence our decision-making (e.g., Kahneman, 2003; Weber & Johnson, 2009). So, the emotions we experience as a result of reading or listening to a news story (positive ones like pride or negative ones like anger) may inhibit our ability to evaluate the credibility of the story and the data presented. Pausing to reflect on whether and why you’re reacting strongly to a piece may allow you to step back and critique the news and/or data separate from your emotions.

Be thoughtful in the sharing of the information

Before you share news articles or the data you’ve seen, pause and reflect on whether you’ve taken the time to fact-check the information to the best of your ability. If you stand by the conclusions drawn, consider what your intentions are for sharing the information. This can include, for example, trying to educate, attempting to prove a point, or creating an emotional reaction in others. Discerning your motivation for sharing the information may lead you to a different decision or at least prepare you for challenging encounters with family or friends.

Another helpful resource for evaluating the news:  https://guides.library.cornell.edu/evaluate_news


References

Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism: How people navigate news on social media. Information, Communication & Society, 22(12), 1751–1769. https://doi.org/10.1080/1369118X.2018.1450887

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720. https://doi.org/10.1037/0003-066X.58.9.697

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and decision making. Annual Review of Psychology, 60(1), 53–85. https://doi.org/10.1146/annurev.psych.60.110707.163633

Celebrity Fat Shaming Has Ripple Effects on Women’s Implicit Anti-Fat Attitudes

Washington, DC and Montreal, Quebec - Celebrities, particularly female celebrities, are routinely criticized about their appearance—indeed, celebrity “fat-shaming” is a fairly regular pop-cultural phenomenon. Although we might assume that these comments are trivial and inconsequential, their effects can extend well beyond the celebrity target and ripple through the population at large. Comparing 20 instances of celebrity fat-shaming with women’s implicit attitudes about weight before and after each event, psychologists from McGill University found that instances of celebrity fat-shaming were associated with an increase in women’s implicit negative weight-related attitudes. They also found that from 2004 to 2015, implicit weight bias was on the rise more generally.

Explicit attitudes are those that people consciously endorse and, based on other research, are often influenced by concerns about social desirability and presenting oneself in the most positive light. By contrast, implicit attitudes—which were the focus of this investigation—reflect people’s split-second gut-level reactions that something is inherently good or bad.

“These cultural messages appeared to augment women’s gut-level feeling that ‘thin’ is good and ‘fat’ is bad,” says Jennifer Bartz, one of the authors of the study. “These media messages can leave a private trace in peoples’ minds.”

The research is published in Personality and Social Psychology Bulletin, a journal of the Society for Personality and Social Psychology.

Bartz and her colleagues obtained data from Project Implicit of participants who completed the online Weight Implicit Association Test from 2004 to 2015. The team selected 20 celebrity fat-shaming events that were noted in the popular media, including Tyra Banks being shamed for her body in 2007 while wearing a bathing suit on vacation and Kourtney Kardashian being fat-shamed by her husband for not losing her post-pregnancy baby weight quickly enough in 2014.

They analyzed women’s implicit anti-fat attitudes 2 weeks before and 2 weeks after each celebrity fat-shaming event.

The results showed that fat-shaming events led to a spike in women’s (N = 93,239) implicit anti-fat attitudes, with more “notorious” events producing greater spikes.

 

Chart showing that implicit anti-fat bias jumped 2 weeks after a celebrity fat-shaming event, and that levels remain higher for another 4 weeks

While the researchers cannot definitively link an increase in implicit weight bias to specific negative incidents in the real world with their data, other research has shown culture’s emphasis on the thin ideal can contribute to eating disorders, which are particularly prevalent among young women.

“Weight bias is recognized as one of the last socially acceptable forms of discrimination; these instances of fat-shaming are fairly widespread not only in celebrity magazines but also on blogs and other forms of social media,” says Amanda Ravary, PhD student and lead author of the study.

The researchers’ next steps include lab research, where they can manipulate exposure to fat-shaming messages (vs. neutral messages) and assess the effect of these messages on women’s implicit anti-fat attitudes. This future research could provide more direct evidence for the causal role of these cultural messages on people’s implicit attitudes.


Citation: Amanda Ravary, Mark W. Baldwin, and Jennifer A. Bartz. Shaping the Body Politic: Mass Media Fat-Shaming Affects Implicit Anti-Fat Attitudes. Personality and Social Psychology Bulletin. Online before print April 15, 2019.

Open Access: The data reported in this paper are available in the Supplemental Materials and archived at the public database Open Science Framework (https://osf.io/iay3x). 

Funding This research was supported by a Fonds de recherche du Québec—Société et culture (FRQSC) Team Grant (FRQ-SC SE-#210323).

Personality and Social Psychology Bulletin (PSPB), published monthly, is an official journal of the Society for Personality and Social Psychology (SPSP). SPSP promotes scientific research that explores how people think, behave, feel, and interact. The Society is the largest organization of social and personality psychologists in the world. Follow us on Twitter, @SPSPnews and find us on facebook.com/SPSP.org.

An Open Letter to NPR's Invisibilia about "The Personality Myth"

A recent episode of NPR's Invisibilia garnered attention from many personality psychology researchers. Below is one of the many responses to the podcast's creators shared on Facebook and Twitter.

Dear Invisibilia,

I listened to your podcast, "The Personality Myth." The stories are really touching, and there are many great parts of the podcast. However, the way you describe the state of the science on personality is inaccurate.

I am worried that your listeners are going to walk away with misconceptions about personality. This matters, because people care about whether or not personality should factor into their life decisions.

"That idea [of personality traits] might just be a mirage."

Personality traits are not an illusion, and telling people that they are could make them wonder why their own perceptions and judgment are so untrustworthy. The scientific evidence shows that our perceptions of others - especially of people we know well - are quite accurate, and predict how they will behave and what they will do in life (Connelly & Ones, 2010; Vazire & Mehl, 2008). Personality doesn't "determine" anything - human behavior is way too noisy for it to be determined by anything. But personality predicts, probabilistically, what people will do, about as well as any other predictor (e.g., SES, IQ, etc.; Roberts et al., 2007).

"The point is that ultimately it's the situation, not the person, that determines things."

Again, the facts are inconsistent with this. When compared head to head, situational factors and personality traits are about equally predictive of behavior - both matter (Funder & Ozer, 1983). This is why, when many people are in the same situation, they don't all act the same way.

"To be clear, there are people whose horrible crimes really do emanate from their personalities - psychopaths."

This is going too far in the other direction. There is no behavior, and no person, that is completely determined by personality. Every behavior, and every person, is influenced to some degree by personality and by the situation. The strength of each influence varies from one behavior to the next, but it's never all one or the other. The idea that anything is "all personality" is just as wrong as the idea that it's "all situation."

At the end of your show, you say that it fills you with terror that there may be nothing to hold on to if you take away personality. The good news is, you shouldn't take away personality. There are real, stable, consequential differences between people. And people are more than just their personalities. Both are true. There is plenty to hold on to, and plenty we can change. No need for terror! (Well, not on this front, anyway…)

Best,
Simine Vazire
Associate Professor
Department of Psychology
UC Davis


This open letter is reprinted with permission from the author.

The original podcast may be heard here.

Psychological Profiles of Movie Monsters

What makes monsters truly frightening? Is it their appearance, their actions, or something deeper within their character?

Monsters found in movies, books, and popular folklore can be terrifying in part because they have dangerous physical characteristics like claws or teeth that cause harm. But beyond physical characteristics, there may be something inside the minds of monsters—their psychological characteristics—that makes them seem especially dangerous.

Psychological characteristics of people (or monsters) are generally perceived along two dimensions: one related to cognitive capacity (how much they can think, reason, and exert their own agency), and another related to emotional capacity (how much they can feel emotions, pleasure, and pain). 

Applying this idea to what makes monsters scary led our research team to an intriguing hypothesis: people fear imbalanced minds. That is, that people might find threatening beings especially frightening when they seem to be mismatched in cognitive and emotional capacities: either high in cognition but low in emotion, or high in emotion but low in cognition.

For example, two famous movie villains, Hannibal Lecter and the Terminator, are each high in cognitive abilities with very little emotion, making them especially chilling foes. On the other hand, Regan MacNeil (the demon-possessed child from The Exorcist) appears extremely high in emotion while lacking any cognitive control at all, making her seem chaotic.  

Could these imbalances between capacity for thinking and feeling help us understand what makes monsters scary?

Imbalanced Minds in Fictional Monsters

We ran four studies to explore the fear of imbalanced minds as an explanation for why some monsters appear scarier than others.

We selected examples of popular fictional monsters, such as zombies, vampires, and Frankenstein's monster. Participants rated each monster's capacities for cognition and emotion, as well as how scary they personally found it.

Among the most frightening entities were demons and the devil, who scored high in cognition but relatively low in emotion. As an interesting comparison, a person possessed by the devil was also rated highly scary, but tended to have an opposite imbalance (high in emotion but relatively low in cognition).   On the other hand, monsters with low scores in both emotional and cognitive capacities, like the Blob and Frankenstein's monster, were considered to be the least scary monsters.

Werewolves vs. Vampires

Other insights emerged when comparing two iconic monsters of classic films: werewolves and vampires. On average, werewolves were deemed scarier than vampires. Whereas werewolves were rated relatively high on emotion compared to cognition, vampires tended to be more balanced on the two dimensions and scored near the middle in scariness. Again, the difference in scariness reflected a difference in mental balance.

These findings highlight the importance of the balance between emotion and cognition.  Although greater cognitive and greater emotional capacities of monsters each individually contributed to how scary they were, the sum of these two dimensions (cognition + emotion) was still not as good a predictor of scariness as the imbalance between cognition and emotion.  
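The two candidate predictors can be expressed as simple scores: the sum of capacities (cognition + emotion) versus the imbalance between them, |cognition − emotion|. A sketch with illustrative ratings (the numbers are invented, not the study's data):

```python
def imbalance(cognition, emotion):
    """Mismatch between the two capacities: the key predictor of scariness."""
    return abs(cognition - emotion)

def total_capacity(cognition, emotion):
    """The sum of capacities, a weaker predictor according to these findings."""
    return cognition + emotion

# Invented ratings on a 0-10 scale, loosely matching the patterns described.
monsters = {
    "Hannibal Lecter": (9, 1),   # high cognition, low emotion
    "Possessed child": (1, 9),   # low cognition, high emotion
    "Vampire":         (6, 6),   # balanced, moderate on both
    "The Blob":        (1, 1),   # low on both
}

for name, (cog, emo) in monsters.items():
    print(name, "imbalance:", imbalance(cog, emo),
          "total:", total_capacity(cog, emo))
```

Note that in this toy example the two imbalanced villains get the same high imbalance score despite opposite profiles, while the vampire and the Blob both score zero imbalance despite very different totals, which is why the sum alone cannot capture the pattern the studies found.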

Slow vs. Fast Zombies

Another interesting comparison can be found between the "slow" zombies made famous in classic films like Night of the Living Dead, compared to the "fast" zombies popularized in recent films such as 28 Days Later.  Both kinds of zombies were rated as low in emotion compared to other monsters.  But the fast zombies were rated as a little higher in cognition than the slow zombies, and somewhat scarier.

Varying the Imbalanced Minds of Monsters

In a final study, we experimentally varied our descriptions of monsters and their imbalanced minds.  Participants read about a hypothetical virus that transformed normal humans into zombies driven to bite and infect other people. The transformations were described in one of four different ways, corresponding to four levels of either balanced mind (low cognition/ low emotion or high cognition/ high emotion) or imbalanced mind (high cognition/ low emotion or high emotion/ low cognition).

We asked people to report the "minimum tolerable distance," or how far people would want to stay away from these different types of zombies. Not surprisingly, participants wanted to maintain the greatest distance from zombies with imbalanced minds and deemed them to be the scariest zombies, compared to those with balanced minds.

Why Imbalanced Minds Are Scary

The final study also shed light on why imbalanced minds elicit greater fear. Observers rated imbalanced zombies as less predictable and controllable than the balanced zombies, and statistical analyses indicated this explained why they seemed scarier.  The imbalance between thinking and feeling makes these agents appear particularly chaotic and uncontrollable, amplifying observers' fear.

Summary

What makes a monster scary? This research reminds us that the fear of the unknown and unpredictable is a potent force in our perception of scariness.  An imbalance between cognition and emotion serves as a cue to a dangerous mind, and plays a crucial role in what makes some monsters truly terrifying.


For Further Reading

Hernandez, I., Ritter, R. S., & Preston, J. L. (2023). Minds of monsters: Scary imbalances between cognition and emotion. Personality and Social Psychology Bulletin. Advance online publication. https://doi.org/10.1177/01461672231160035


Jesse Preston is Associate Professor of Psychology at the University of Warwick who studies the psychology of belief and judgments of mind in the self and others.

Can the Media Bridge Political Divides?

Outrage over politics and politicians can incentivize media to report with political slants that increase political polarization. After Fox News called the 2020 U.S. presidential election for Joe Biden, Tucker Carlson wrote “I continue to think the company (Fox News) isn’t taking (this) seriously enough. We need to do something to reassure our core audience. They’re our whole business model.” Carlson was suggesting that Fox News should not call the election for the winner (Biden), but should instead play to their viewers’ beliefs that Trump won and that the election was not free and fair.

Many blame (social) media for polarizing people, spreading misinformation, and encouraging extremism, even political violence. But can the media also be a place for bridging political divides? We suggest the media can reduce political polarization by pairing personal experiences and the facts.

Personal Experiences Build Respect Between Political Opponents

Our previous research suggests that partisan respect can result from sharing personal experiences. An example would be “The reason I am pro-gun is due to my own experience of needing to use a gun in self-defense.” Importantly, these experiences are more effective in bridging divides than simply sharing facts, as in “The reason I disagree with you on gun policy is due to statistics in this gun policy report.” However, while experience sharing is a promising strategy, it’s still important to find ways to communicate the facts in a healthy democracy.

How the Media Can Bridge Divides

In our new research, we studied whether one can bridge divides through media coverage. Our goal was not only to explore whether the media can help reduce political polarization, but also whether one can communicate facts by combining them with experiences.

In one of our studies, American participants reported their stance on climate change, and based on these responses we determined who their political opponent was. For example, if someone reported supporting climate policies, their opponent would be someone who opposed them. Participants then read a news article, supposedly from USA Today, that discussed why a political opponent disagreed with them on climate change policies. Some participants learned the opponent disagreed based on personal experiences. For example, participants who supported looser coal mining regulations learned the opponent disagreed because, “due to looser coal regulations, a nearby coal mine polluted our water. My children drank the polluted water, got sick, and had to go to the hospital.” Other participants read about an opponent who disagreed based on facts; in this case, participants who supported looser coal mining regulations learned the opponent disagreed because they “read in the Environmental Policy Report that due to looser coal regulations, 45% of coal mines are polluting waterways.” A final group of participants had an opponent who disagreed based on both these experiences and the facts.

Reading about someone’s personal experiences did indeed make a difference. Participants were significantly more tolerant and less willing to dehumanize their political opponents when they read about an opponent’s experiences or a combination of these experiences and facts.

We also conducted a similar study focused on social media. American participants reported their stance on gun policy, and based on these answers, we showed them a fictitious Facebook post from someone (that is, an opponent) who disagreed based on experiences, facts, or a combination of both. Again, we found that experiences alone, or combined with facts, were powerful in bridging divides and did so better than facts alone. Our research shows that both news media and social media can bridge divides and that it is possible to communicate the facts while simultaneously reducing partisan animosity.

Cross-Cultural Evidence

We also explored whether persuading people in this way is helpful in a different country—Germany. We found that the most polarized Germans viewed opponents more positively after reading a news article about that person’s experiences (or a combination of experiences and facts), compared to facts alone. However, less polarized Germans were not influenced. In the American studies, we had found that our persuasion method was equally effective for people who were more and less polarized.

Why did these cross-cultural differences emerge? We think it is due to the different political systems. In the United States politics is tribal (“us” versus “them”). This can lead both more and less polarized people to similarly feel cold towards the opposing party–making our persuasion method effective. In Germany, however, there are multiple parties with overlapping affiliations and belief systems—making politics less tribal. Therefore, less polarized Germans have more favorable attitudes towards opponents, making our approach less helpful. But for more polarized Germans (who dislike opponents more), hearing the experiences (or the experiences and facts) of political opponents is beneficial. Importantly, across all studies—in both the United States and Germany—this persuasion method was similarly helpful for both liberals and conservatives.

Media as a Context for Change

Our research suggests one should rethink the role of media in politics. Presently, the media can be blamed for making polarization worse, but the media can reduce polarization—with the right strategies. Having journalists report on people’s actual experiences that connect to divisive issues, or sharing your own personal experiences on social media, reduces political division. Importantly, the media can combine experiences with the facts to foster constructive partisan relationships and a healthier democracy.


For Further Reading

Kubin, E., Gray, K., & von Sikorski, C. (2023). Reducing political dehumanization by pairing facts with personal experiences. Political Psychology. https://doi.org/10.1111/pops.12875

Kubin, E., & von Sikorski, C. (2021). The role of (social) media in political polarization: A systematic review. Annals of the International Communication Association, 45(3), 188-206. https://doi.org/10.1080/23808985.2021.1976070

Kubin, E., Puryear, C., Schein, C., & Gray, K. (2021). Personal experiences bridge moral and political divides better than facts. Proceedings of the National Academy of Sciences, 118(6), 1-9. https://doi.org/10.1073/pnas.2008389118

Emily Kubin is a post-doctoral researcher at RPTU Kaiserslautern-Landau and the University of North Carolina at Chapel Hill. She has published work on political polarization, bridging political divides, and the media.

Kurt Gray is a Professor of Psychology and Neuroscience at University of North Carolina at Chapel Hill where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He has published work on morality and bridging political divides.

Christian von Sikorski is an Assistant Professor of Political Psychology in the Institute for Communication Psychology & Media Education at RPTU Kaiserslautern-Landau. He has published work on how the media affects people’s attitudes and beliefs about society and one another.

Are Healthy and Unhealthy Foods Portrayed Differently in Top American Movies?

"No, I don't like green food. Have they got any chips?" "I did find bacon, which is about the most fantastic thing in history." These are just a few lines from characters in top-grossing Hollywood movies talking about food. Whether you've noticed it or not, food is everywhere in movies and past research suggests that movies tend to depict unhealthy foods more frequently than healthy foods.

But beyond just cataloging which foods appear in movies, my colleagues and I wondered: How healthy is the food that movie characters actually consume on screen? How do characters evaluate healthy and unhealthy foods? Do certain types of characters tend to be associated with healthy or unhealthy food? And are these foods shown in different types of settings?

To answer these questions, we analyzed over 9,000 food items that appeared in 244 top-grossing movies released between 1994 and 2018. For each food item, we recorded whether characters ever consumed or evaluated the food, the traits of these characters, and the setting the food was shown in—for example, in American versus foreign contexts or social versus nonsocial situations. Each food item was also assigned a Nutrient Profile Index score, a validated nutritional rating that awards or subtracts points based on the food's nutritional composition. This way, we could see whether the healthiness of food varied based on whether characters ate or evaluated the food and on the context in which the food appeared.

In these movies, foods that characters actually consumed on screen were less healthy than foods that characters didn't consume. Furthermore, foods that characters evaluated positively were less healthy than foods that characters evaluated negatively. A positive evaluation like, "Try one of these, they are divine!" was more likely to be used to describe unhealthy food whereas a negative evaluation like, "What are you serving here? This stuff is nasty," was more likely to be used to describe healthy food.

We also found that foods depicted in American settings and in social situations were less healthy than foods shown in non-American and nonsocial contexts. Finally, we found that child characters (compared to adult characters) consumed more unhealthy foods, evaluated unhealthy foods more positively, and evaluated healthy foods more negatively. There were no differences in the healthiness of food that was consumed or evaluated based on characters' gender or race and ethnicity. We did note, however, that relatively few ethnically diverse characters interacted with food in the movies. For example, only three Asian characters ever evaluated a food item across all the movies we analyzed.

The negative portrayal of healthy food in movies may seem unsurprising, given that healthy foods tend to have a bad reputation in American culture. But if Americans already hold negative attitudes toward healthy foods, movies may reinforce those beliefs, making them stronger and more normalized. Could movies instead help shape more positive norms and attitudes about healthy foods? Just as movies show less smoking today than they did several decades ago, perhaps these depictions of healthy foods as less enjoyable, less American, and less social could one day be seen as a relic of the past.


For Further Reading

Turnwald, B. P., Horii, R. I., Markus, H. R., & Crum, A. J. (2022). Psychosocial context and food healthiness in top-grossing American films. Health Psychology, 41(12), 928–937. https://doi.org/10.1037/hea0001215. A version of this piece was previously published in the APA Journals Article Spotlight.


Rina Horii is a graduate student studying social psychology at the University of Minnesota Twin Cities. She is interested in the psychology of food and eating, with a specific interest in culturally-based beliefs about food.  

Critical Thinking Protects Ukrainians from the Kremlin’s Disinformation War

In parallel with its military invasion of Ukraine, the Russian government is waging a disinformation war. For example, Russia claimed its bombing of a hospital in Mariupol was staged, and that the news stories about innocent civilians murdered in Bucha were a diversion by the West to distract from the U.S. establishing biolabs in Ukraine. The dissemination of falsehoods by the Kremlin in Ukraine is nothing new. So what factors can help protect Ukrainian citizens against such disinformation campaigns?

A large body of research conducted in the U.S. shows that greater analytic thinking—the tendency to stop and think rather than simply going with one's gut response—is linked with the ability to distinguish falsehoods from truth. In a recent article, we tested whether this relationship was observed in Ukraine.

Ukraine stands out from other countries because of the sheer volume of disinformation attacks from Russia and its allies, whereas misinformation in Western democracies tends to be less organized and comes from domestic sources. Further, under Soviet communism, Ukraine's media outlets were a part of state propaganda efforts. The government used the media to maintain control of information. As a result, post-communist societies like Ukraine tend to have lower trust levels in the state and media institutions.

Despite the fall of the Soviet Union, Ukraine's media market has remained weak. Journalistic standards are low, and it is common for people with power to pay for favorable news coverage. Media owners often use their outlets' coverage to signal support for their political patrons. That environment, combined with a growing disinformation campaign from the Kremlin and its supporters, has made it even more difficult for Ukrainians to tell the difference between truth and disinformation.

Given the deluge of disinformation Ukrainians have to contend with, we wanted to see whether analytic thinking still helped people withstand the influence of these lies or whether they were simply overwhelmed. In our study, we conducted online and face-to-face surveys (prior to the Russian invasion) of representative samples of Ukrainians. We assessed their analytic thinking using logic problems with intuitively compelling but incorrect answers (such as, "When you are running a race and you pass the person in second place, what place are you in?"), as well as their belief in a variety of true and false claims about Russia and Ukraine. We drew the false stories from a database constructed by EU vs. Disinfo and worked with local partners to select true stories. For example, we included true stories about Russia bombing hospitals in Syria, and falsehoods about Ukrainians supposedly infecting the Sea of Azov with cholera.

Our findings were hopeful. Despite low trust in government and media, weak journalistic standards, and years of exposure to Russian disinformation, Ukrainians are generally able to distinguish a wide range of Russian propaganda from factually based content. Importantly, Ukrainians who engaged in more analytic thinking were better able to distinguish disinformation from true statements, even if they were generally pro-Russia. In other words, people who engage in analytic thinking are more likely to rate false stories as false and true stories as true, regardless of their political stance.

Our findings provide compelling evidence for the role of analytic thinking in improving resistance to disinformation. The fact that Ukrainians who engage in more reasoning are better at distinguishing falsehoods from truth, despite being bombarded with disinformation, emphasizes the power of reason and the importance of developing critical thinking skills. Already Ukrainians have created websites that debunk disinformation and provide accurate information, like Stopfake and Texty among others. Ukrainians can build upon these impressive efforts to even further insulate themselves from Russian propaganda with methods that promote analytic thinking.

Ultimately, it is still possible to protect oneself from disinformation even in the face of organized disinformation campaigns. Teaching critical thinking at all ages is of vital importance.  We could all benefit from stopping to think about what we are hearing.


For Further Reading

Erlich, A., & Garner, C. (2021). Is pro-Kremlin disinformation effective? Evidence from Ukraine. The International Journal of Press/Politics, 194016122110452. https://doi.org/10.1177/19401612211045221

Erlich, A., Garner, C., Pennycook, G., & Rand, D. G. (2022). Does analytic thinking insulate against pro-Kremlin disinformation? Evidence from Ukraine. Political Psychology. https://doi.org/10.1111/pops.12819

Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007


Aaron Erlich is an assistant professor in the Department of Political Science at McGill University. His current research interests include the impact of information and misinformation in developing countries, measurement, democratization, and experimental design.

David Rand is the Erwin H. Schell Professor of Management Science and Brain and Cognitive Sciences at MIT. His work focuses on illuminating why people believe and share misinformation and "fake news," understanding political psychology and polarization, and promoting human cooperation.

How Stories Can Change the World

We are a storytelling species. Every culture has storytelling traditions, and many groups—be they cultural, religious, national, or otherwise—use stories, parables, and folktales to convey their messages, values, and beliefs, and to foster a sense of connection and identity across generations. Children love to hear a good story. Parents, grandparents, and teachers tell stories to children not only to entertain, but also to teach them about their culture and the world.

Of course, stories are also used to mobilize people to war, to oppress, and to exclude. Such stories argue one side of a conflict and often use dehumanizing language to refer to enemies (calling them vermin or lice, for example).

Can stories help address some of the most important societal problems, such as poverty, mass violence, extremism, gender-based violence, and climate change?

The answer is yes!

Civil society organizations increasingly use stories as a tool for social change, such as reducing gender-based violence, reducing HIV transmission, and promoting reconciliation in the aftermath of violence. In South Africa, for instance, a TV series and radio drama, Soul City, led to shifts in knowledge about domestic violence and what people can do to stop it; it also shifted the public perception that domestic violence was a private affair, facilitating more community action to prevent it.

In the aftermath of genocide in Rwanda, the radio drama Musekeweya depicts how two villages turned violent toward each other, and then how they reconciled. This drama increased cooperative behaviors, intergroup trust, and perceptions about the acceptability of interacting with members of the outgroup.

In my work, I have examined the impact of storytelling to address intergroup conflict and violent extremism. These stories are serial fictional dramas delivered through radio or TV, often depicting a conflict between two or more fictional groups or villages. They tackle the roots of the conflict, the role of different characters in the story, the factors and behaviors that influence escalations, and behaviors that contribute to reconciliation between groups.

How and why do stories change people's attitudes and behaviors? My research and that of others show a number of positive impacts.

  • Through the portrayal of different characters, stories provide different perspectives and understanding of people on both sides of the conflict.
  • People can identify with story role models and enact similar actions in their community.
  • Stories can help people make sense of their reality, or see their reality in a new light.
  • Stories can validate people's experiences, and give voice to their perspectives.
  • Stories raise awareness about important issues, starting community discussions that can lead to community action for social change.  

In my most recent research, I examined the effect of a six-month serial drama that aimed to change attitudes and behaviors related to violent extremism in the Sahel province of Burkina Faso, where extremist violence has risen significantly over the past few years. The drama's goals were to denounce and reduce support for violent extremism and to raise awareness about the importance of police–community collaboration in fighting violent extremism, as well as about issues that prevent such collaboration.

Working with the non-governmental organization Equal Access International, which produced the drama, and with funding from the United States Agency for International Development, we carried out an ambitious experimental study. The drama tells the story of a fictional town that has become the target of attacks by an armed group. The armed group takes advantage of poverty and unemployment to lure youth into its ranks. The show also addresses mistrust between the police and the community, and reveals how this mistrust can be detrimental to protecting the community from violence. Lastly, the drama highlights the actions of two brave characters who work tirelessly to fight corruption in their town, speak up and hold leaders accountable for their actions, and work with businesses and local government to create employment opportunities for the youth.

We randomly selected 132 villages in one of the Sahel provinces in Burkina Faso to take part in the study, with 22 people in each of 66 villages listening to recorded episodes of the radio drama and 66 villages not receiving the opportunity to listen to the drama. Participants in the listening group were invited to hear the drama together every week for 12 weeks. At the end, all participants from the 132 villages were interviewed individually.

We found some interesting results. Participants who listened to the drama showed lower justification of violent extremism, compared to the group who didn't. They were also more likely to report violent extremism as a priority issue needing to be addressed by the government. Importantly, they reported more willingness to collaborate with the police to curb violent extremism in their communities, and they also were more likely to believe that they had the ability to make a positive impact on their community.  

Our work shows that storytelling could be one possible avenue to promote reconciliation and address conflict peacefully. Using storytelling through mass media could be especially effective because mass media tends to reach a very high proportion of the population.

Research will continue, but it appears to be pretty simple: stories matter.


For Further Reading

Bilali, R. (2022). Fighting violent extremism with narrative intervention: Evidence from a field experiment in West Africa. Psychological Science, 33(2), 184-195. https://doi.org/10.1177/09567976211031895

Bilali, R., & Staub, E. (2016). Interventions in real world settings: Using media to overcome prejudice and promote intergroup reconciliation in Central Africa. In C. Sibley & F. Barlow (Eds.), Cambridge handbook of the psychology of prejudice (pp. 607-631). Cambridge University Press. https://doi.org/10.1017/9781316161579.027


Rezarta Bilali is Associate Professor of Psychology and Social Intervention in the Department of Applied Psychology at New York University, Steinhardt School of Culture, Human Development and Education. She studies the social psychology of intergroup conflict, historical narratives of violence, intergroup reconciliation, and media interventions in various conflict settings.

The Subtle Yet Persuasive Influence of the Nonverbal Behavior of TV Interviewers

Years ago, we ran a simple experiment—and repeated it successfully over and over—showing that the nonverbal behavior of a TV interviewer can change viewers' impressions of an interviewee, even when viewers cannot hear or understand the words being spoken.

Researchers (and viewers) know that broadcasters can deviate from fairness and objectivity, giving preferential or unfavorable treatment to particular individuals, such as a politician, or to groups. Targets of media bias intuitively view the bias as "harmful," but beyond its unethical nature, whether it actually causes psychological damage must be demonstrated empirically.

In our latest administration of the Media Bias Experiment, viewers watch one of two versions of a 4-minute political interview conducted by an unknown interviewer with an unknown politician in an incomprehensible language (Hebrew for English-speaking viewers; Israeli viewers watched without audio). Viewers then rate their impressions of the interviewed politician. The interviewer is friendly toward the politician in one condition and hostile to him in the other. All of the interviewer's clips were taken from actual broadcast interviews he conducted with the two candidates in a national election in Israel. The interviewee was an entirely different person (a confederate of ours), filmed in the same studio setting, with his video spliced in so that the original interviewer appeared to be talking to him. Importantly, the clips of the interviewee were identical in both experimental conditions. (All videos are accessible at http://www.youtube.com/user/NvStudy.)

In this design, therefore, any differences in viewers' impressions of the interviewee are due entirely to the biasing effect of the interviewer's nonverbal behavior. And indeed, numerous administrations of this experiment (with and without the audio channel) in four countries systematically demonstrated the bias effect.

Today, biased thinking is often explained by Kahneman's "fast and slow" theory. Fast, intuitive, careless thinking often leads to bias, whereas slow thinking is more effortful and can avoid bias. Researchers have sought for decades to find ways to reduce or prevent harmful biases, and one way might be to shift people from fast to slow thinking. In terms of our Media Bias Experiment, that would mean reducing the impact of the interviewer's nonverbal behavior, because impressions based on nonverbal behavior are formed quickly and intuitively.

We explored three distinct directions that might contribute to bias reduction or prevention:

  • The interviewer's credibility. In our most recent study, viewers rated their impressions of both the interviewee and the interviewer and were divided into two groups according to their perceptions of the interviewer's credibility—how much they put faith in what he said and did. Those who perceived the interviewer as higher in credibility were more susceptible to the influence of his nonverbal behavior and demonstrated the "typical" Media Bias Effect. But no bias effect was found for viewers who did not perceive the interviewer to be a credible source. Ironically, to bias your audience, you must appear to be a credible source!
  • Trying to be objective. In an earlier study, we tried to shift viewers from fast to slow thinking by instructing them "to try to be objective and to ignore the interviewer's behavior when rating the interviewed politician." The remedy was effective: it neutralized and even somewhat reversed the Media Bias Effect.
  • Learning about bias. In a different study, we asked whether taking a media literacy course could reduce media bias. This study involved American high school students: one group had graduated from a media literacy course, and a comparison group consisted of equivalent students who had just registered for the course. A typical Media Bias Effect was found for the second group, but the effect disappeared among the media literacy graduates, demonstrating the course's effectiveness in reducing media bias.

The important common lesson from these very different applications of our experiment is that media bias can be prevented or brought under control if people adopt a more critical, less naive frame of mind when consuming public media.


For Further Reading

Babad, E., & Peer, E. (2010). Media bias in interviewers' nonverbal behavior: Potential remedies, attitude similarity and meta-analysis. Journal of Nonverbal Behavior, 34(1), 57–78. https://doi.org/10.1007/s10919-009-0078-x

Babad, E., Peer, E., & Hobbs, R. (2012). Media literacy and media bias: Are media literacy students less susceptible to nonverbal judgment biases? Psychology of Popular Media Culture, 1(2), 97–107. https://doi.org/10.1037/a0028181

Tikochinski, R., & Babad, E. (2022). Perceived epistemic authority (source credibility) of a TV interviewer moderates the Media Bias Effect caused by his nonverbal behavior. Journal of Nonverbal Behavior, 46, 215-229. https://doi.org/10.1007/s10919-022-00397-3


Elisha Babad is Professor Emeritus in educational and social psychology at the Hebrew University of Jerusalem, Israel. His research includes teacher expectancy effects in classrooms and the teacher's pet phenomenon. More recent research is on the influence of TV interviewers' nonverbal behavior on viewers' perceptions of the interviewee, voters' wishful thinking, and the role of nonverbal behavior in student evaluations of teachers.

Refael Tikochinski is a PhD student at the Technion - Israel Institute of Technology. His research investigates high-level cognitive processes, such as language understanding and processing, focusing on both neurobiological measurements (e.g., fMRI and EEG) and big-data-driven computational models.