Hidden but Widespread Gender Biases Emerge in Millions of Words


Language pervades every aspect of our daily lives. From the books we read to the TV shows we watch to the conversations we strike up on the bus home, we rely on words to communicate and share information about the world around us. Not only do we use language to share simple facts and pleasantries, we also use language to communicate social stereotypes, that is, the associations between groups (for example, men/women) and their traits or attributes (such as competence/incompetence). As a result, studying patterns of language can provide the key to unlocking how social stereotypes become shared, widespread, and pervasive in society.

But the task of looking at stereotypes in language is not as straightforward as it might initially seem. Especially today, it is rare that we would hear or read an obviously and explicitly biased statement about a social group. And yet, even seemingly innocuous phrases such as “get mommy from the kitchen” or “daddy is late at work” connote stereotypes about the roles and traits that we expect of social groups. Thus, if we dig a little deeper into the relatively hidden patterns of language, we can uncover the ways that our culture may still represent groups in biased ways.

Using Computer Science to Uncover Hidden Biases

Recent advances in computer science (specifically, in the area of Natural Language Processing) have shown the promise of word embeddings as a tool for uncovering hidden biases in language. Briefly, the idea behind word embeddings is that word meaning can be represented as a “cloud” in which every word is placed according to its meaning. We place a given word (let’s say “kitchen”) in that cloud by looking at the words it co-occurs with in similar contexts (in this case, perhaps “cook,” “pantry,” “mommy,” and so on). Given millions to billions of words to analyze, we eventually arrive at an accurate picture in which words that are close in meaning (like “kitchen” and “pantry”) sit close together in the cloud. Once we’ve achieved that, we can answer even more detailed questions, such as whether “mommy” is placed closer in meaning to “kitchen” or to “work.”
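
To make this concrete, here is a minimal sketch in Python of how closeness in the cloud is typically measured, namely as the cosine similarity between word vectors. The four-dimensional vectors below are made-up toy values; real embeddings have hundreds of dimensions and are learned from co-occurrence statistics in large corpora.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two word vectors (near 1 = very similar, near 0 = unrelated)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up toy vectors; real embeddings are learned from co-occurrence statistics.
embeddings = {
    "kitchen": np.array([0.9, 0.1, 0.3, 0.0]),
    "pantry":  np.array([0.8, 0.2, 0.4, 0.1]),
    "work":    np.array([0.1, 0.9, 0.2, 0.3]),
    "mommy":   np.array([0.7, 0.3, 0.5, 0.1]),
}

# Words used in similar contexts end up close together in the cloud...
print(cosine_similarity(embeddings["kitchen"], embeddings["pantry"]))  # high

# ...which lets us ask: is "mommy" placed closer to "kitchen" or to "work"?
print(cosine_similarity(embeddings["mommy"], embeddings["kitchen"]))
print(cosine_similarity(embeddings["mommy"], embeddings["work"]))
```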

Using these and other tools, my colleagues and I saw the potential to provide some of the first systematic insights into a long-standing question of the social sciences: just how widespread are gender stereotypes, really? Are these stereotypes truly “collective” in the sense of being present across all types of language, from conversations to books to TV shows and movies? Are stereotypes “collective” in pervading not only adults’ language but also the very early language environments of children? Although scholars have long documented evidence of such biases, our computer science tools allowed us to quantify them at a larger scale than ever before.

To study stereotype pervasiveness, we first created word embeddings from texts across seven different sources produced for adults or children, including classic books (from the early 1900s), everyday conversations between parents and children or between two adults (recorded around the 1990s), and contemporary TV and movie transcripts (from the 2000s), ultimately totaling over 65 million words. Next, we examined the consistency and strength of gender stereotypes across these seven very different sources of language. In our first study, we tested a small set of four gender stereotypes that have been well studied in previous work and thus might reasonably be expected to emerge in our data (a sketch of how such associations can be quantified follows the list below). These were the stereotypes associating:

  • men-work/women-home
  • men-science/women-arts
  • men-math/women-reading
  • men-bad/women-good
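
Here is a simplified sketch of how such an association can be quantified, in the spirit of the word-embedding association tests this line of work builds on. The word lists are illustrative and the vectors are random stand-ins, not the study’s actual materials; embeddings trained on real corpora are what reveal the biases.

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_association(word, male_words, female_words, emb):
    """Mean similarity to male attribute words minus mean similarity to female
    attribute words: positive = male-leaning, negative = female-leaning."""
    male_sim = np.mean([cosine(emb[word], emb[m]) for m in male_words])
    female_sim = np.mean([cosine(emb[word], emb[f]) for f in female_words])
    return male_sim - female_sim

# Illustrative word lists; the study used larger, validated lists.
male_words = ["man", "he", "boy", "father"]
female_words = ["woman", "she", "girl", "mother"]

# Random stand-in vectors so the sketch runs end to end.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in male_words + female_words + ["office", "home"]}

# Averaging this score over domain words (e.g., "office", "salary" vs.
# "home", "children") gives the strength of a stereotype like men-work/women-home.
print(gender_association("office", male_words, female_words, emb))
print(gender_association("home", male_words, female_words, emb))
```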

Stereotypes Really Are Everywhere in Our Language

Even though our seven kinds of texts differed in many ways, we found pervasive evidence for the presence of gender stereotypes. All four gender stereotypes were strong and significant. Moreover, there were no notable differences across child versus adult language, across domains of stereotypes, or even across older texts versus newer texts. To us, this consistency was especially remarkable in showing that even speech produced by children (as young as 3 years old!) and speech from parents to those young children contained gender stereotypes, biases that had never before been documented at such a large scale at such young ages.

Having shown pervasiveness for these four well-studied stereotype topics, we next turned to gender stereotypes for more than 600 traits and 300 occupation labels. Here, we found that 76% of traits and 79% of occupations revealed meaningful associations with one gender over another, although not all were large in magnitude. Gender stereotypes of occupations were stronger in older texts than in newer texts, and gender stereotypes of traits were stronger in adult texts than in child texts. And yet, we also saw continued evidence of consistency. For instance, across most of our seven kinds of texts, the occupations “nurse,” “maid,” and “teacher” were stereotyped as female, while “pilot,” “guard,” and “excavator” were stereotyped as male.

By bringing together the unprecedented availability of massive archives of naturalistic text and rapid advances in the computer science algorithms used to analyze those texts, we have provided undeniable evidence that gender stereotypes are indeed truly “collective” representations. Stereotypes are widely expressed across different language formats, age groups, and time periods. More than any individual finding, however, this work stands as a signal of the vast possibilities that lie ahead for using language to uncover the ways that biases are widely embedded in our social world.


For Further Reading

Charlesworth, T. E. S., Yang, V., Mann, T. C., Kurdi, B., & Banaji, M. R. (2021). Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychological Science, 32(2), 218–240. https://doi.org/10.1177/0956797620963619

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230


Tessa Charlesworth is a Postdoctoral Research Fellow in the Department of Psychology at Harvard University where she studies the patterns of long-term change in social cognition.


Beyond First Appearances

We’ve often been told not to judge a book by its cover, and, psychologically speaking, it’s good advice. A book’s cover does not provide much information about what’s inside. It might give clues to the book’s genre and maybe the name of a familiar author, things that may lead you to pick it up or leave it alone. But most book-buyers want more information. So we pick up the book, read the blurb on the back, or maybe flip through the pages. We interact with the book. We go beyond judging the book by its cover to actively seeking more information about it.

Our perceptions of other people follow the same pattern. Yes, we do judge other people at first sight. In a way, we have to. To manage everyday life, we often must form quick impressions of other people. Is this person going to hassle me? Is this person going to be nice? What are the risks and rewards of interacting with this person?

These judgments are often based on generic beliefs that allow us to quickly categorize other people by easy-to-see characteristics such as gender, skin tone, fashion choices, and so on. Given that these judgments rest on our general stereotypes about groups of people, it is no surprise that our judgments about particular individuals can be inaccurate and even prejudicial.

A long history of research in psychology has studied first impressions based on photographs of people’s faces. This research has consistently shown that initial judgments of others are based on stereotypes linked to physical appearance. And because so much of our social life now takes place on social media and the web more generally, forming first impressions from photographs is more common than ever.

My research focuses on what happens next. How stable are the first impressions we make from a photograph of a face? Some interesting research shows that seeing photographs of people smiling decreases the strength of people’s initial prejudicial judgments. When people display a sociable, friendly expression, our perceptions of them change. What was interesting to me, though, was that psychologists had not studied whether the impressions we form from someone’s photograph survive a short interaction with the person.

We conducted a study in which we introduced pairs of strangers to each other. Before they met, each participant saw a photograph of the other person’s face and rated their initial impression of the person on characteristics such as friendliness, confidence, likeability, energy, aggressiveness, and threat. The two participants then met and interacted for up to 5 minutes, during which they could talk about anything they wanted.

The results of the study showed that, after interacting, the participants’ perceptions of each other tended to become more positive than their ratings had been after seeing only the photograph of the person. After interacting, participants perceived each other as more friendly, likable, confident, and energetic, as well as less aggressive and threatening.

We also found that judgments of the other person’s personality were more accurate after the interaction. In particular, participants were better at recognizing how anxious, energetic, creative, and confident the other person was in general. It took only 5 minutes (or less) of general conversation to ‘soften’ first impressions based on the photograph, making them more positive, less negative, and more accurate.

First impressions are an important topic for many areas of psychology, such as police interviewing (which I study), job interviews, and dating, but we do not know much about how first impressions are affected by short social interactions. Because much of the research in this area focuses on impressions formed from photographs, we need to study what happens when people actually meet.

Other people offer a world of information and experiences that you might learn from or enjoy. So you disadvantage yourself by restricting your interactions with them based on a bad—and often inaccurate—first impression formed from appearance alone. Just as a book cover provides only a hint about its content, first impressions provide only a hint about what others are like, and we should take the time to interact, however briefly, with other people to understand them better.


For Further Reading:

Satchell, L. P. (2019). From photograph to face-to-face: Brief interactions change person and personality judgments. Journal of Experimental Social Psychology, 82, 266–276. https://doi.org/10.1016/j.jesp.2019.02.010 (preprint: https://psyarxiv.com/f9xzy)

Funder, D. C. (2012). Accurate personality judgment. Current Directions in Psychological Science, 21(3), 177–182.

Liam Satchell is a lecturer in psychology at the University of Winchester, specializing in personality, methodology, and everyday uses of psychological research.

The Secret to Easy Theory

By Kurt Gray

We all know Kurt Lewin’s aphorism “there is nothing so practical as a good theory.” Unfortunately, there is a divide between knowing that theory is important and knowing exactly how to do it. How should one represent the structure of science—the nomological net of ideas? This post explores a new and simple way to depict theory: theory mapping.

Typically, we present theory through words in introductions and general discussions, but this is less than ideal for three reasons. First, words can be slippery: it’s not always clear what authors mean. Second, it takes a lot of time to sift through the theory sections of papers. Third, even after you have read a lot of theory, it is not always clear how it all fits together, because authors typically focus on their own sub-field. Is there an easier way to represent theory?

We also all know the aphorism “a picture is worth a thousand words,” and it applies to theory too. Theory mapping is a technique that visually maps out the connections between concepts, allowing you to understand the contours of a field at a glance. It was developed to help bring the same kind of rigor to theory as we bring to methodology. You can read about it in full in an upcoming issue of Perspectives on Psychological Science (and at www.theorymaps.org), but I’ll provide a sneak peek here.

Theory mapping uses five elements to display links between ideas, which I’ll illustrate here with cars—not because they are psychologically interesting, but because they are simple to understand. In the actual paper, I use both cars and my research in moral psychology to illustrate theory mapping. The website also has a set of theory maps provided by eminent scholars in the field (listed in Table 1 below). I have included Jonah Berger’s theory map of word of mouth at the end of this article as an example.

Table 1. Theory maps provided on www.theorymaps.org

Topic | Map Authors
Word of Mouth | Berger
Empathy | Cameron, Scheffer, Spring & Hadjiandreou
Motivation | Etkin
Cultural Tightness | Gelfand & Jackson
Revenge | Gollwitzer & Stouten
Facial Expressions | Jack
Emotion | Lindquist
Social Power | Magee, Galinsky & Rucker
Endowment Effect | Morewedge
Stress | Muscatell
Priming | Payne
Health Behavior | Sheeran & Rothman
Ideology | Stern & Ondish
Emotion Regulation | Tamir & Vishkin
Mind Perception | Waytz & Gray

The Elements of Theory Maps:

1. Positive and negative associations. The most basic element of theory is whether concepts are correlated, positively or negatively. In psychology, we often discuss whether concepts are connected (revealing convergent validity) or not connected (revealing divergent validity), so these relationships are important to map. Positive associations are shown as a line between concepts, and negative associations as a line with a dash through it. With cars, we can see that the size of a car is tied to greater safety in an accident but to less fuel efficiency.


2. Moderation. In psychology, many phenomena are moderated by situational and individual differences. For example, the enjoyability of traveling through Asia is moderated by openness to experience. Moderators are shown as a concept in italics enclosed within « ». With cars, the price of a car is moderated by the amount of horsepower, with more powerful cars costing more.


3. Fundamental elements. Psychological phenomena are constructed through the combination of basic elements, whether cognitive or neural. For example, face recognition is constructed through basic perceptual, cognitive, and social processes. These fundamental elements are represented within an upward-pointing { symbol. We can see that a car is constructed from a combination of a chassis, an engine, and a body.


4. Varieties or examples. Psychological constructs vary across context and culture. For example, cruel behaviors can involve physical violence, verbal vitriol, or social ostracism. Theory mapping displays different varieties or examples using a dashed line connected to grey text. With cars, there are many brands, which vary by country of production.


5. Numbers and notes. A picture may be worth a thousand words, but even pictures cannot capture everything. Theory maps can be supplemented with notes that are tied to numbers throughout the map. (For the computationally minded, a compact way to encode these five elements is sketched below.)
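
Taken together, the five elements amount to a labeled graph. Here is one minimal way the car example could be encoded as a data structure; this is an illustration only, and the field names are invented for this sketch rather than part of theory mapping itself.

```python
# Illustrative encoding of the car examples above as a labeled graph.
# Field names are invented for this sketch, not part of the published method.
car_theory_map = {
    "positive":   [("size", "safety")],            # line between concepts
    "negative":   [("size", "fuel efficiency")],   # line with a dash through it
    "moderators": [("horsepower", "price")],       # «horsepower» moderates price
    "fundamental_elements": {"car": ["chassis", "engine", "body"]},
    "varieties":  {"brands": ["by country of production"]},
    "notes":      {1: "supplementary detail tied to a number on the map"},
}

print(car_theory_map["positive"])
```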

Putting it all together. Below you can see a complete theory map of cars, and below that two other theory maps: one for word of mouth and one for mind perception. See www.theorymaps.org for the moral psychology example and maps from other researchers.

By tying theory to a visual map, theory mapping allows people to see theory at a glance. It provides specificity by allowing researchers to concretely specify interconnections between constructs. It also provides synthesis by allowing researchers to evaluate the coherence of their ideas and see how they connect with other ideas. Theory mapping is a new technique, but one that may help to improve the rigor of psychological science. Consistent with open science, it is also open to all. If you—or your methods class—would like to submit a theory map, just contact me through the website. Happy theorizing!

[Figure: theory map of cars]

[Figure: theory map of word of mouth (Jonah Berger)]

[Figure: theory map of mind perception (Waytz & Gray)]


Onward and Upward with Psychology

The first-ever meeting of the Society for Improving Psychological Science (SIPS)—even that name is uncertain—was radically different from a typical psychology conference. Attendees didn’t just learn about new research on how the scientific process can be improved; we worked for three days to try to immediately and tangibly improve psychological science.

The feel of the meeting—like the feel of the Center for Open Science (COS), which hosted it—was that of a tech start-up. Brief talks were given by researchers on improving science, but the bulk of the agenda was devoted to completing group projects intended to improve scientific practice and norms.

How can we improve teaching and training of psychology? Given how quickly the field is moving with regard to methods and proper interpretation of statistics, what can instructors, graduate students, and active researchers do to keep up?

How can journals and societies improve their practices to encourage open science?

How can we improve hiring and promotion—and better acknowledge contributions to science (like providing materials, code, or expertise) that don’t always show up in publication counts?

How can we make replication and data sharing normal parts of psychology?

Proposals were generated, explored, and either pursued or discarded in favor of more fruitful-seeming ideas. Brian Nosek, describing the approach used at the COS, encouraged participants to consider the “80/20 rule”: often, 20% of the work on an aspirational goal yields 80% of the benefits. We focused our efforts on small changes that could have big impacts.

The group was large, international, and inclusive—anyone who expressed interest in attending was invited, and everyone who requested assistance with housing or travel received it. Given that founder Simine Vazire and the organizing committee planned the meeting in about six months, the turnout was impressive.

The core of the membership was attracted by the idea of helping to improve psychological science—although even the name Society for Improving Psychological Science was debated, because some members worried that it implied psychology is deficient and in need of improvement.

Of course, the idea that psychology needs improvement is something many in social and personality psychology have encountered, after several high-profile failures to replicate studies (RPP[1], ego depletion[2], power poses[3], etc.). There is a growing consensus that psychology—and science in general—often reports unreliable results, because the pressures that publication places on hiring and promotion create incentives to manipulate data and statistics inappropriately.

Yet what inspired me most about this meeting was seeing concrete action being taken. Articles from the ’60s[4], ’70s[5], ’80s, ’90s[6], and 2000s[7] by prominent researchers all describe how a series of common methodological problems prevents psychology as a field from accumulating an accurate and reliable body of knowledge. (For a disheartening but well-argued description of why, see the new paper “The Natural Selection of Bad Science.”[8]) Every decade, however, researchers seem to shake off these criticisms not by addressing them but by reinforcing the status quo—until now.

A critical mass of researchers has finally coalesced to address these issues and consider how we can do better psychology research. There isn’t just one solution, but many. And the solution can’t come from just one research group, but needs to be part of a larger discussion in the scientific community—a community of researchers that wants to proactively tackle problems with the way psychology is done.

To that end, I point readers to the SIPS page on the Open Science Framework (OSF), which contains open materials regarding all proposed changes. From collecting centralized repositories of materials, to encouraging journals to adopt open science badges, to creating a “Study Swap” where researchers can agree to replicate each other’s studies before publication—to name a few—there are many opportunities for interested people to get involved. Open science refers not just to methods, but to the inclusion of the entire scientific community. Psychologists, let’s all help each other improve.

Visit the SIPS page!

https://osf.io/jtcu9/


References:

[1] http://science.sciencemag.org/content/349/6251/aac4716

[2] http://www.psychologicalscience.org/redesign/wp-content/uploads/2016/03/Sripada_Ego_RRR_Hagger_FINAL_MANUSCRIPT_Mar19_2016-002.pdf

[3] http://www.ncbi.nlm.nih.gov/pubmed/25810452

[4] http://meehl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf


[5] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.8918&rep=rep1&type=pdf

https://www.researchgate.net/profile/Gerd_Gigerenzer/publication/232481541_Do_Studies_of_Statistical_Power_Have_an_Effect_on_the_Power_of_Studies/links/55c3598c08aeb975673ea348.pdf

[6] http://meehl.umn.edu/sites/g/files/pua1696/f/144whysummaries.pdf

http://psych.colorado.edu/~willcutt/pdfs/Cohen_1990.pdf

https://www.mpib-berlin.mpg.de/volltexte/institut/dok/full/gg/ggstehfda/ggstehfda.html

http://meehl.umn.edu/sites/g/files/pua1696/f/169problemisepistemology.pdf


[7] http://pubman.mpdl.mpg.de/pubman/item/escidoc:2101336/component/escidoc:2101335/GG_Mindless_2004.pdf

http://www.ejwagenmakers.com/2007/pValueProblems.pdf

[8] https://doi.org/10.1098/rsos.160384

Who Is Included When We Study Romantic Relationships?

"Does this apply to LGBTQ+ people?" "How might this play out in interracial couples?" "Would we expect the same relationship dynamics in other parts of the world?" These are common questions psychology students raise in courses about romantic relationships when they notice how frequently relationship studies sample college students or White married couples. They wonder whether the relationship principles they're learning about apply to other groups of people as well.

As researchers, this is a question we're eager to answer too. Relationships are key to well-being, and a deep understanding of relationships can help us to identify what makes people thrive. But if researchers only study the relationships of one specific group of people, we may be missing out on important information about what helps everybody else sustain positive relationships. 

A Review of Relationship Research Study Demographics

We conducted a review of relationship studies to understand whose experiences romantic relationships research addresses. We asked three main questions: (1) How diverse are the samples used in these studies? (2) Are researchers writing about their samples in ways that are attentive to participant diversity? (3) Has the field improved on sample diversity and inclusive reporting over time?  

To answer these questions, we searched eight major journals that publish social psychological research about relationships. We focused on two timeframes (1996-2000 and 2016-2020) and identified 1,762 studies about romantic relationships published in those journals during those timeframes.

Next, our research team collected information about each sample, including region, race/ethnicity, gender, sexual orientation, and socioeconomic status. We also tracked how researchers wrote about their samples—examining if demographics were reported, and the terminology used when they were. This attention to terminology gave us a sense of whether the typical study used inclusive language.

For example, we noted how frequently researchers used language like "heterosexual couples" to describe their samples. This language is unclear, since it's hard to tell if researchers mean the individuals in the couples are heterosexual and would self-identify as such, or that the couples are made up of men and women. The problem is that this term runs the risk of erasing bisexual people—if all relationships between men and women are described as "heterosexual," this language can erase the unique identities and relationship experiences of bisexual people.

Are Relationship Research Samples Diverse?

What did we find? Information about participants' gender was commonly reported, though acknowledging the existence of transgender people and including nonbinary participants was uncommon—the median percentage of nonbinary participants in research samples was 0%, and inclusion of transgender participants was mentioned in less than 2% of all studies.

Encouragingly, reporting of sexual orientation rose from 7.9% of studies in the 1996-2000 timeframe to 20.9% in the 2016-2020 timeframe, suggesting researchers may be paying more attention to sexual orientation when studying relationships. However, around 20% of studies in the later time period still used language like "heterosexual couples," and representation of lesbian, gay, and bisexual people was extremely low in both timeframes—the median percentage of LGB participants was 0%.

For studies from the U.S., reporting of race increased from 51% to 68% between time periods. However, representation of many minoritized racial groups remained low. For example, the median percentage of Black participants in a sample from 2016-2020 was just 7.1%. Additionally, about 20% of studies reported only the percentage of White people in their sample, without mentioning any other racial group—a reporting approach that can center White people as the default and treat people of color as a homogeneous group.

Across both time periods, the U.S. was the most common region sampled (62% of samples were from the U.S.; 83% of samples came from the U.S., Europe, or Canada), suggesting more needs to be done to conduct relationship research that incorporates a truly global perspective.

Given that research samples are limited in their diversity, people reading about relationship science research should ask "Who do these findings describe?" when thinking through the research's implications. Ideally, as research progresses, "White, heterosexual Americans" will become a less common answer to that question. 

How Can We Improve the Way We Do Relationship Research?

So where can we go from here? First, individual researchers can prioritize writing clearly and comprehensively about the demographics of their samples, using language that avoids centering White, heterosexual, U.S. samples and "othering" groups that do not share these characteristics. For example, researchers can report the representation of each gender and racial group (e.g., "the sample was 60% men, 38% women, and 2% nonbinary"), rather than highlighting only the percentage of societally advantaged groups like men or White people (e.g., "the sample was 60% men").

More broadly, academic journals and professional societies can update their norms and policies, such as requiring reporting of basic demographic information. Researchers' own identities and experiences can also influence what gets researched and whose perspectives are valued, so ensuring the field welcomes researchers from all backgrounds may also invite greater diversity in what research is explored.

Relationship research informs our understanding of what makes relationships thrive, but more work needs to be done to ensure that our findings apply to a truly diverse group of people. By making changes to our research practices, we can take a step toward that future—and provide more satisfying answers to people eager to understand everyone's relationships.    


For Further Reading

McGorray, E. L., Emery, L. F., Garr-Schultz, A., & Finkel, E. J. (2023). "Mostly White, heterosexual couples": Examining demographic diversity and reporting practices in relationship science research samples. Journal of Personality and Social Psychology, 125(2), 316–344. https://doi.org/10.1037/pspi0000417

Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D., & Mortenson, E. (2020). Racial inequality in psychological research: Trends of the past and recommendations for the future. Perspectives on Psychological Science, 15(6), 1295–1309. https://doi.org/10.1177/1745691620927709

Garay, M. M., & Remedios, J. D. (2021). A review of White-centering practices in multiracial research in social psychology. Social and Personality Psychology Compass, 15(10), e12642. https://doi.org/10.1111/spc3.12642


Emma McGorray is a PhD candidate in social psychology at Northwestern University. She studies the identities, experiences, and relationships of LGBTQ+ people and how to make research more diverse and inclusive.

Lydia Emery is an Assistant Professor in Psychology at the University of Chicago. Her research examines romantic relationships—how social class contexts influence relationships, and how relationships shape people's identities as individuals and as couples.

Understanding America’s Political Divide: New Methods Using Twitter and Self-Report Data

Political polarization—the growing ideological divide between liberals and conservatives—continues to engulf the United States, further inflaming the ongoing culture wars. Over the last several decades, differences in views between Democrats and Republicans have been on the rise, with 45% of Republicans and 41% of Democrats now seeing the other party as a threat to the health of the country. Alongside this increasing cultural antagonism, rising polarization has been found to contribute to political gridlock and unresponsive policy*.

What is leading to this polarization? Research by Matt Motyl (University of Illinois at Chicago) suggests that one cause might be where people choose to live. Motyl’s research has found that people who perceived themselves to be politically out of place in their communities (e.g., Republicans in Democratic-majority states) reported more difficulty forming close relationships and were more likely to migrate away. This segregation seems to be reflected in voting patterns: the percentage of people living in landslide districts, where one party dominates the other, rose from 30% in 1992 to 75% in the 2016 election. As people self-segregate into distinct geographies, polarization may increase due to a lack of face-to-face encounters with those across party lines.

New research by Motyl and Zachary Melton (also at the University of Illinois at Chicago) tested whether different political and moral values are strongly reflected in different U.S. states. In particular, Motyl and Melton looked for similar patterns across more than 200,000 self-report measures and more than 754,000 Twitter profiles. Would moral values like purity and authority (which have been found to be more valued by conservatives) be more widely expressed in Twitter and self-report measures within Republican states, for example? With the Twitter profiles, Motyl and Melton used a textual analysis program that scanned for words associated with different moral values.
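
Conceptually, this kind of dictionary-based scan is simple. The sketch below illustrates the idea with a made-up miniature dictionary; actual analyses rely on validated word lists with hundreds of entries per value.

```python
import re
from collections import Counter

# Miniature, made-up dictionary; real analyses use validated word lists.
moral_dict = {
    "purity":    {"pure", "clean", "sacred", "disgust"},
    "authority": {"obey", "duty", "law", "respect"},
    "loyalty":   {"loyal", "family", "nation", "together"},
}

def moral_scores(text):
    """Fraction of words in `text` matching each moral value's word list."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values()) or 1
    return {value: sum(counts[w] for w in wordlist) / total
            for value, wordlist in moral_dict.items()}

profile = "Proud to respect the law and keep our family together."
print(moral_scores(profile))  # authority and loyalty words both appear here
```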

They found some evidence that the moral values of authority and purity were more strongly represented in Republican states across both self-report and Twitter data. Other results were less consistent, however: for loyalty—another value rated more highly by conservatives than liberals—the Twitter data didn’t line up with the self-report measures.

As scholars try to understand the sources of polarization using new sources of data, it’s important to be cautious about the sort of conclusions that can be drawn. Motyl noted that only 35% of the American population is on Twitter and that certain contextual factors (e.g., current events in each state) can play a big role in what sort of language is used on Twitter. Future work will aim to better understand this political self-segregation and to make better use of new sources of data, which can lend evidence toward different conclusions about how the country is divided.


Written By: Abdo Elnakouri, PhD Candidate at the University of Waterloo

Session: "Mapping Moral Subcultures via Social Media," a talk presented at the Novel Methods for Analyzing Moral Meaning on Social Media symposium, held on February 8, 2019

Speaker: Matt Motyl, University of Illinois at Chicago

Co-Author: Zachary Melton, University of Illinois at Chicago

*Reference: Hetherington, M. J., & Rudolph, T. J. (2015). Why Washington won't work: Polarization, political trust, and the governing crisis. University of Chicago Press.

Inside the World of jamovi

With the amount of statistical software out there, it’s easy for grad students to be overwhelmed and simply use whichever software their advisor uses or whichever happens to be taught in their first-year statistics courses. Very rarely do we hear about software directly from the developer, who may be better able to explain the details and answer questions we might not have thought of yet.

Jonathon Love is a co-founder and developer of jamovi, a statistics software available for all platforms that is not only intuitive and user-friendly, but also free.

What is jamovi? What can it do?

jamovi is a user-friendly, open statistical spreadsheet, designed to be as simple to use as possible while still allowing very sophisticated analyses. You can see it in the screenshot below: the data are on the left, the results on the right, and, when you’re running an analysis (such as the ANOVA below), the analysis options in the middle.

[Screenshot of jamovi software]

jamovi provides all the usual analyses necessary for undergraduate statistics programs (t-tests, ANOVAs, regression, contingency tables, etc.), and it’s been exciting to see a number of universities adopting it into their programs! But it’s more than an educational tool – it contains many useful features for advanced researchers too, including linear mixed effects models, generalized linear models, and a sophisticated data transform and recoding system.

What is the difference between jamovi and SPSS?

The biggest difference between jamovi and SPSS is that jamovi is much simpler and easier to use. In jamovi, when running an analysis, the results update as you change the options. So you can specify your variables in, say, an ANOVA, and you’ll receive an ANOVA table before you ever click ‘OK’. This lets you go on to make interactive changes to the analysis. If you want to see an effect size, you click that checkbox, and the existing ANOVA table updates to include it. Contrast this with SPSS, where you have to specify your options, press OK, and then wade through a torrent of output. If you want to change something, you have to go back and re-run the whole analysis, receiving another torrent of output. The ‘direct feedback’ model dramatically simplifies this whole process, and makes the learning and practice of statistics simpler and less overwhelming. There are a lot of other differences, but this is the one that people love most.


What is the difference between jamovi and R?

jamovi is different to R in that it’s a graphical statistical spreadsheet rather than a programming language, but jamovi and R are actually great friends. All the analyses in jamovi are written in R, and you can put jamovi in ‘syntax mode’ where it displays the equivalent R code to recreate the analyses in an R session. In fact, there’s even the Rj editor for jamovi, which lets you type R code and run it directly inside the spreadsheet itself! We wanted to make it as easy as possible for people to transition from a spreadsheet to R if that’s the right next step for them. So if you like the comfort of a graphical spreadsheet, but would like to take some tentative steps toward learning R, jamovi is a great place to begin.

Can you tell us about the jamovi team?

So there’s myself, based at the University of Newcastle, Australia. I mostly work on the underlying architecture for jamovi, the “under-the-hood” stuff. Then there’s Damian Dropmann, based in Sydney, who’s our user interface designer and developer; a lot of our best user interface concepts were designed by him. Then there’s Ravi Selker, based in Amsterdam, who’s primarily responsible for the analyses in jamovi. Ravi has a rare thoughtfulness and careful attention to detail, which have made for an extraordinarily well-refined set of analyses.

The three of us are the core developers, but jamovi is much bigger than us. jamovi is a community project, with dozens of other people working on additional materials: online textbooks, video tutorials, new advanced analyses, and so on. Science works best when we can build on each other’s work, so we’ve made community a central aspect of jamovi development.

How can someone get started with jamovi?

jamovi is pretty straightforward to use. I’d recommend downloading and installing jamovi and just playing around with the example datasets. For people who prefer a more guided experience, I can recommend the jamovi tutorial series by Barton Poulson of datalab.cc (https://datalab.cc/tools/jamovi), or the learning statistics with jamovi textbook (https://sites.google.com/brookes.ac.uk/learning-stats-with-jamovi). But if you’ve ever used SPSS before, you’ll feel right at home using jamovi – and probably a lot happier.

We hope you enjoy using jamovi as much as we enjoyed creating it.

Download the jamovi software and get started!

The SAGE Model of Social Psychological Research

Following the 2008 global economic crash, the Irish accepted harsh austerity as the national economy collapsed, only to protest in 2014 and 2015 during a stark economic recovery. This paradox raises many pertinent questions for social and cultural psychologists: why do some people not protest when others riot in the streets; how are culturally salient narratives taken up by individuals in times of social change; under what conditions do people tolerate economic inequality, and when does this tolerance give way? The Irish example and the questions it raises represent a common and important challenge for social psychologists: how can we best study the complexities of human behavior in real-world settings?

Alone, the experimental paradigm that currently predominates in the field is insufficient for understanding such dynamic, unfolding social, cultural, and economic trends, because ecologically valid and meaningful hypotheses need to be generated before they can be tested experimentally. Field methods in social psychology – including writing field notes, conducting participant observation and interviews, and analyzing social and mainstream media – can therefore be used either to augment findings from lab social psychology in ecologically valid contexts or to generate relevant hypotheses about the dynamics, patterns, and causal mechanisms behind observed social phenomena. Field methods can also be used to study experiences that cannot be studied in an enlightening way within an experimental paradigm.

In our paper, The SAGE Model of Social Psychological Research, we argue that a synthetic combination of field and lab methods is best suited to conducting ecologically valid social psychological research and to understanding the complexities of human thoughts, feelings, and behavior, such as the protest dynamics in Ireland during the economic recession and recovery. To this end, we developed the SAGE model of social psychological research.

Our SAGE acronym refers to the ways in which qualitative and quantitative methods can be meaningfully used in conjunction to holistically understand social psychological phenomena. We propose a Synthetic model in which qualitative methods are Augmentative to quantitative methods, Generative of new experimental hypotheses, and used to comprehend Experiences that evade experimental reductionism. Currently in social psychology, there still exists a strong tension, separation, and imbalance between methodologies. In a review of flagship psychology journals, we observed that research mixing qualitative and quantitative methods was extremely rare, and purely qualitative work was nonexistent. In an effort to push the discipline forward, we developed a new model to provide a guiding framework for integrative research methods in social psychological research.

In our article, we outline the ontological differences between qualitative and quantitative methods. However, we argue that no differences ought to exist on a practical level. An integrative, mixed-method model can therefore overcome the limits of each method and further a holistic psychological science. This holism is vital if the field is to address the power of socio-cultural context in relation to psychological universals.

The SAGE model (synthetic, augmentative, generative, experiential) is first outlined as an integrative whole. Next, we discuss historical and contemporary uses to highlight the augmentative, generative, and experiential aspects of the model. We return to the historical foundations of social psychology to highlight the emphasis placed on multi-level, integrative, mixed-method research from the beginnings of our discipline. Moreover, we illustrate the scope and limits of our SAGE model in relation to our own work. We apply the model to research concerning the dynamics of protest during an economic recession and recovery in Ireland; adolescent educational achievement in the United States; and the moral foundations of atheists and evangelical Christians in the United States. We demonstrate how mixed methods can operate together with regard to each aspect of the model and as a whole. The challenges and benefits of the model are discussed throughout. We also outline how to put the model into practice in different research programs.

We conclude the article by arguing that multi-method approaches are necessary to develop our field, both by producing ecologically valid and reproducible psychological science and by breaking new ground in expanding the scope of what can be investigated and meaningfully comprehended.


The SAGE Model of Social Psychological Research is published in Perspectives on Psychological Science. The article is available, open access, here: http://journals.sagepub.com/doi/full/10.1177/1745691617734863

Séamus A. Power: University of Chicago

Gabriel Velez: University of Chicago

Ahmad Qadafi: University of Chicago

Joseph Tennant: University of Cambridge

Variability is the Future: Modeling Change in Social Psychology

The sheep are loose, and the sheepdogs—two players in a psychology experiment developed by researchers Michael Richardson and Patrick Nalepka—must get them back into the herd! How players solve this problem appears to be governed by a relatively simple mathematical model built around a few state variables.

When people start playing the game, Richardson explained in a talk on dynamical methods this morning at the Dynamic Systems and Computational Modeling Preconference at the SPSP Annual Convention, they tend to begin with a search-and-recovery strategy: when a sheep gets too far from the center, they chase it back in.

But as they get more experienced, players tend to hit upon an optimal strategy: cycling back and forth around the herd in the center, like a pair of oscillators encircling the herd evenly.

Richardson then deftly stepped through a series of simple mathematical representations of first the search-and-recovery strategy—the distance of the farthest sheep, the angle of the sheep, the radius of the “home base”—and then the oscillating containment strategy. Finally, he included a parameter that governs how people switch between strategies.
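
To give a flavor of what such a model can look like (a loose caricature only, not Richardson and Nalepka’s actual equations or parameter values), the two strategies and a threshold-based switch between them might be sketched as:

```python
import numpy as np

def herder_target(sheep_positions, t, radius=1.0, threshold=2.0, freq=2.0):
    """Caricature of the two herding strategies: chase the farthest sheep when
    it strays past a threshold; otherwise trace an oscillating containment
    path around the herd's center."""
    center = sheep_positions.mean(axis=0)
    dists = np.linalg.norm(sheep_positions - center, axis=1)
    if dists.max() > threshold:
        # Search and recovery: head for the farthest sheep.
        return sheep_positions[dists.argmax()]
    # Containment: oscillate around the herd on a circle of the given radius.
    return center + radius * np.array([np.cos(freq * t), np.sin(freq * t)])

sheep = np.array([[0.2, 0.1], [-0.3, 0.4], [4.0, 0.0]])  # one stray sheep
print(herder_target(sheep, t=0.0))  # heads for the stray at (4.0, 0.0)
```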

When he demonstrated his model, there were a few spontaneous bursts of surprised laughter from the audience. The behavior of the mathematical representation closely mimicked the play of humans, chasing sheep until they had them rounded up and then running oscillating containment routes.

But this was just one of the many approaches Richardson demoed for examining how behavior unfolds over time. His research group at the University of Cincinnati offers a week-long workshop on Nonlinear Methods for Psychological Science every summer, and his presentation today touched on many of these—from cross-recurrence quantification analysis, to explicit mathematical models of coupled oscillators, to extracting summary measures of complexity like the fractal dimension.
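
As one concrete taste of this family of methods, recurrence-based analyses start from a recurrence matrix marking when a system revisits similar states. Here is a bare-bones sketch for a single one-dimensional series; cross-recurrence compares two different series, and real analyses add state-space embedding and line-based summary measures.

```python
import numpy as np

def recurrence_matrix(x, radius=0.1):
    """Binary matrix: entry (i, j) is 1 when states x[i] and x[j] fall
    within `radius` of each other (i.e., the system has 'returned')."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < radius).astype(int)

# A noisy oscillation revisits its states periodically, so its recurrence
# matrix shows diagonal bands; measures like the recurrence rate summarize this.
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
R = recurrence_matrix(x, radius=0.1)
print("recurrence rate:", R.mean())
```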

These models have allowed him to capture patterns of variability across a variety of situations—such as individuals trying to avoid collision following crossed paths or jazz pianists improvising with each other.

In work with Ashley Walton, he has found, for example, that when jazz pianists riff off a simple “drone” rather than a standard swing track, they tend to exhibit more coordinated patterns of playing. The researchers believe the drone’s simplicity doesn’t create enough regularity to support more diverse improvisatory moves, while the regular pattern of the swing track does. This finding came naturally from an approach focused on variability over time.

Underlying this dynamical perspective is a conviction that psychologists don’t need to model only minds, or attend only to summary statistics from experiments. Instead, we should think more about specific task dynamics and how behavior changes over time. When we explore dynamics, we open up a whole new frontier for description and explanation.


Alex Danvers is a PhD student in social psychology studying emotions in social interactions. He uses dynamical systems and evolutionary perspectives, and is interested in new methods for exploring psychological phenomena.