Morteza Dehghani

Morteza Dehghani is an Associate Professor of Psychology, Computer Science, and the Brain and Creativity Institute (BCI) at the University of Southern California. His research primarily focuses on the intersection of psychology and artificial intelligence. His other research interests include theory-based natural language processing with direct applications to moral decision-making, group dynamics, and culture; the role of sacred values in intergroup conflict and negotiation; and computational cognitive modeling.

Do you have any advice for individuals who wish to pursue a similar career path in social psychology?

Take as many stats and CS classes as you can.

If skill or talent were not an issue, what would be your dream ambition or pursuit in life?

Musician.

If you weren’t in your current job, what would you be doing?

I’d probably be in a research lab in industry doing the same type of work.

What do you enjoy most about teaching?

Getting my assumptions questioned by students. Reading up on new research.

What led to your interest in artificial intelligence?

In grad school, I was in a symbolic AI lab, trying to incorporate psychological findings into AI models. Soon I realized that I wasn’t a fan of that approach to research, and I even seriously considered quitting grad school. While doing my postdoc in psych, I recognized that I could do the opposite: use AI in psychology.

Outside of psychology, how do you like to spend your free time?

I like to spend my free time continuing my training as a Persian classical musician. I have been studying the Radif for the past 8 years, and I also play the setar (not to be confused with the Indian sitar).

What’s the best advice you have ever received?

On my last day as a postdoc, I went to my advisor’s office to say goodbye. During our meeting, I expressed that I would be required to teach and train students while I did not have expertise in anything. He thought for a bit, and then asked me, “If you have plants in your garden, would you need to teach them how to grow?”

If you had an extra hour of free time in the day, how would you use it?

Grab my setar, find a quiet room, and practice!

 

The SAGE Model of Social Psychological Research

Following the 2008 global economic crash, the Irish accepted harsh austerity as the national economy collapsed, only to protest in 2014 and 2015 during a stark economic recovery. This paradox raises pertinent questions for social and cultural psychologists: Why do some people not protest when others riot in the streets? How are culturally salient narratives taken up by individuals in times of social change? Under what conditions do people tolerate economic inequality, and when does this tolerance give way? The Irish example and the questions it raises represent a common and important challenge for social psychologists: how can we best study the complexities of human behavior in real-world settings?

Alone, the experimental paradigm, which currently predominates in the field, is insufficient for understanding this and similar dynamic, unfolding social, cultural, and economic trends. This is because ecologically valid and meaningful hypotheses need to be generated before they can be tested experimentally. Field methods in social psychology – including writing field notes, conducting participant observation and interviews, and analyzing social and mainstream media – can therefore be used either to augment findings from lab social psychology in ecologically valid contexts or to generate relevant hypotheses about the dynamics, patterns, and causal mechanisms behind observed social phenomena. Field social psychological methods can also be used to study experiences that cannot be studied in an enlightening way within an experimental paradigm.

In our paper, The SAGE Model of Social Psychological Research, we argue that a synthetic combination of field and lab methods is best suited to conducting ecologically valid social psychological research and to understanding the complexities of human thoughts, feelings, and behavior, such as the protest dynamics in Ireland during the economic recession and recovery. To this end, we developed the SAGE model of social psychological research.

Our SAGE acronym refers to the ways in which qualitative and quantitative methods can be meaningfully used in conjunction to holistically understand social psychological phenomena. We propose a Synthetic model in which qualitative methods are Augmentative to quantitative methods, Generative of new experimental hypotheses, and used to comprehend Experiences that evade experimental reductionism. In social psychology today, there remains a strong tension, separation, and imbalance between methodologies. In a review of flagship psychology journals, we observed that mixed-methods research combining qualitative and quantitative approaches was extremely rare, and purely qualitative work was nonexistent. In an effort to push the discipline forward, we developed a new model to provide a guiding framework for integrative research methods in social psychological research.

In our article, we outline the ontological differences between qualitative and quantitative methods. However, we argue that these differences need not persist at a practical level. An integrative, mixed-method model can therefore overcome the limits of each method and further a holistic psychological science. This holism is vital if the field is to address the power of socio-cultural context in relation to psychological universals.

The SAGE model (synthetic, augmentative, generative, experiential) is first outlined as an integrative whole. Next, we discuss historical and contemporary uses of mixed methods to highlight the augmentative, generative, and experiential aspects of the model. We return to the historical foundations of social psychology to highlight the emphasis placed on multi-level, integrative, mixed-method research from the very beginnings of our discipline. We then illustrate the scope and limits of the SAGE model in relation to our own work, applying it to research on the dynamics of protest during an economic recession and recovery in Ireland; adolescent educational achievement in the United States; and the moral foundations of atheists and evangelical Christians in the United States. We demonstrate how mixed methods can operate together with regard to each aspect of the model and as a whole, and we discuss the challenges and benefits of the model throughout. We also outline how the model can be put into practice in different research programs.

We conclude the article by arguing that mixed methods are necessary to develop our field: they produce ecologically valid and reproducible psychological science, and they break new ground by expanding the scope of what can be investigated and meaningfully comprehended.


The SAGE Model of Social Psychological Research is published in Perspectives on Psychological Science. The article is available, open access, here: http://journals.sagepub.com/doi/full/10.1177/1745691617734863

Séamus A. Power: University of Chicago

Gabriel Velez: University of Chicago

Ahmad Qadafi: University of Chicago

Joseph Tennant: University of Cambridge

Embrace the Data

By Alex Danvers

What words can classify a movie review as positive? What words classify it as negative?

In the symposium Big Data: Vast Opportunities for Psychological Insight from Mining Enormous Datasets at the SPSP Annual Convention, Harvard economist Sendhil Mullainathan threw out some obvious candidates, like “dazzling” or “gripping”—words that researchers brainstormed as likely to do a good job. Using these “theory-grounded” words, a team of computer scientists was able to classify reviews with 60% accuracy—not much of an improvement over 50/50 guessing.

But when the computer scientists let the model empirically determine what was most predictive, some surprising candidates jumped out. For example, the word “still”—as in, “I didn’t like the acting, but still I felt compelled to watch the cinematography”—was highly predictive of a positive review. Using the empirically selected words, a machine learning algorithm was able to classify movie reviews with 95% accuracy.
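To make the contrast concrete, here is a minimal sketch, not the actual analysis from the talk, of the two approaches: counting hits from a hand-picked lexicon versus letting a model weight every word empirically. The toy reviews, labels, and lexicon are invented for illustration, and the scikit-learn pipeline is just one reasonable way to fit such a model.

```python
# Minimal sketch: hand-picked "theory-grounded" words vs. empirically weighted
# words for classifying review sentiment. Toy data invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "a dazzling and gripping film, still thinking about it days later",
    "the plot dragged, but still the cinematography pulled me in",
    "dull and forgettable from start to finish",
    "not gripping at all, I walked out halfway through",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Theory-grounded baseline: count hits from a small hand-picked lexicon.
lexicon = {"dazzling", "gripping"}
hits = [sum(word.strip(".,!") in lexicon for word in r.split()) for r in reviews]
print("lexicon hits per review:", hits)  # note the hit in a negative review

# Data-driven alternative: let a model weight every word in the vocabulary.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
model = LogisticRegression().fit(X, labels)

# Inspect which words the model found predictive of a positive review.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("dazzling", "still", "dull"):
    print(word, round(weights.get(word, 0.0), 3))
```

The point of the sketch is the workflow, not the numbers: with a real corpus, it is the empirically learned weights that surface unintuitive cues like “still.”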

These contrasting models, according to Mullainathan, represent a shift in ways to approach studying intelligence. Early pioneers in psychology, like Herb Simon, began trying to create artificial intelligence through introspection—reasoning through what processes they followed in order to solve a problem. Once they figured that out, they assumed it would be a simple matter to train up a computer to mimic human processes.

But human intelligence and machine intelligence turn out to be very different. We lack self-knowledge, and there are many tasks—like statistical inference—that humans are known to perform poorly at. What the era of machine learning and big data can offer us is a way of flipping the problem of intelligence from one of introspection to one of empiricism. Ignore intuition. Embrace the data.

Mullainathan was one of four speakers exploring this bottom-up approach, finding surprising results extracted from a huge stack of observations.

In the first talk, Emily Oster showed a reliable change in household food consumption after one member of the household was diagnosed with diabetes. Using only “scanner data”—a record of barcodes from household purchases spanning several years and over 100,000 people—she was able to detect a statistically significant decrease in consumption of “bad” or unhealthy food after an individual began purchasing diabetes-related products. There was no corresponding increase in “good” foods.
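As a rough illustration of this kind of before-and-after comparison, and not Oster's actual analysis, one could pair each household's purchases before and after the first diabetes-related product appears in its scanner record; the purchase counts below are simulated stand-ins.

```python
# Toy sketch of a within-household before/after comparison; the purchase
# counts are simulated stand-ins, not scanner data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_households = 1000

# Monthly units of "unhealthy" food per household, before and after the first
# diabetes-related purchase shows up in that household's record.
before = rng.poisson(lam=20, size=n_households)
after = rng.poisson(lam=18, size=n_households)

t_stat, p_value = stats.ttest_rel(before, after)
print(f"mean change: {(after - before).mean():+.2f} units per month, p = {p_value:.4f}")
```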

In the second talk, Michal Kosinski was able to use Facebook profile pictures to predict Big Five personality traits with significant accuracy: just over 20% for extraversion, agreeableness, and neuroticism, and over 10% for openness and conscientiousness.

Although the deep learning algorithm he used to make these predictions is in some ways opaque, one finding that emerged across different genders and ethnic groups was an association between a broader face and introversion. Earlier research on people’s intuitive judgments of faces had suggested just the opposite: we tend to believe wide faces mean extraversion.

Johannes Eichstaedt used word frequencies from people’s tweets to predict personality, finding that his algorithms matched the accuracy of a close friend’s ratings—and significantly exceeded it in predicting openness.

Some entertaining findings: introversion is predicted by use of the words “manga,” “anime,” and “pokemon;” extraversion is related to “party” and “!!!”

Eichstaedt also used tweets to categorize counties according to their prevalence of heart disease. He compared his predictions to actual CDC incidence rates and found that Twitter alone was a better predictor than all of the most common demographic risk factors combined.

Tweets expressing hostility, aggression, and boredom marked counties where heart disease would be high. Tweets about skilled occupations, positive experiences, and optimism indicated a lower incidence of heart disease.
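A hedged sketch of the county-level comparison might look like the following; the feature names, simulated data, and ridge-regression cross-validation are all invented for illustration and are not Eichstaedt's actual pipeline.

```python
# Illustrative sketch only: compare how well (simulated) tweet-language
# features vs. demographic factors predict a county heart-disease rate.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_counties = 300

# Invented per-county stand-ins for language categories and risk factors.
hostility = rng.normal(size=n_counties)
optimism = rng.normal(size=n_counties)
income = rng.normal(size=n_counties)
smoking = rng.normal(size=n_counties)

# Simulated outcome: a heart-disease rate loosely tied to all four predictors.
heart_disease = (0.4 * hostility - 0.3 * optimism
                 - 0.2 * income + 0.3 * smoking
                 + rng.normal(scale=0.5, size=n_counties))

predictor_sets = {
    "tweet language": np.column_stack([hostility, optimism]),
    "demographics": np.column_stack([income, smoking]),
}
for name, X in predictor_sets.items():
    r2 = cross_val_score(Ridge(), X, heart_disease, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```

The striking part of the real study is the finding that the language features outperformed the demographic ones; the sketch only shows the shape of the comparison.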

Finally, Mullainathan used a machine learning algorithm to classify which criminal defendants should be kept in jail while awaiting trial. He found that, if we are comfortable with the current 18.7% crime rate, we could be releasing 78% of people—compared to the 61% that judges currently release. Using these algorithms to determine risk of flight or further crime would significantly decrease the burden on U.S. jails and could lead to huge savings.

When we leverage the particular strengths of machine intelligence, we can find surprising and effective new ways of solving problems. This does not mean ceding ground to computers; instead we should use our uniquely human ability to extend our cognitive capacities through tools to explore behavior and mind in ways we never have before.


Alex Danvers is a PhD student in social psychology studying emotions in social interactions. He uses dynamical systems and evolutionary perspectives, and is interested in new methods for exploring psychological phenomena.

Variability is the Future: Modeling Change in Social Psychology

The sheep are loose, and the sheepdogs—two players in a psychology experiment developed by researchers Michael Richardson and Patrick Nalepka—must get them back into the herd! How they solve this problem appears to be governed by a relatively simple mathematical model representing a few different state variables.

When people start playing the game, Richardson explained at a talk on dynamical methods this morning at the Dynamic Systems and Computational Modeling Preconference at the SPSP Annual Convention, they tend to begin with a search and recovery strategy. When a sheep gets too far away from the center, they chase it back in.

But as they get more experienced, they tend to hit upon an optimal strategy: cycling back and forth around the herd in the center, like a pair of oscillators encircling the herd evenly.

Richardson then deftly stepped through a series of simple mathematical representations of first the search and recovery strategy—the distance of the furthest sheep, the angle of that sheep, the radius of the “home base”—and then of the oscillating containment strategy. Finally, he included a parameter that governs how players switch between the two strategies.
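The presenters' actual equations were not spelled out here, but a toy version of this kind of switching model, with invented dynamics and parameter values, might look like this:

```python
# Toy sketch (not Richardson and Nalepka's actual model): two herder agents
# switch between chasing the furthest sheep and circling the herd, based on a
# single threshold parameter. All dynamics and constants here are invented.
import numpy as np

rng = np.random.default_rng(0)
sheep = rng.normal(0.0, 1.0, size=(7, 2))       # sheep positions around origin
herders = np.array([[2.0, 0.0], [-2.0, 0.0]])   # two herder positions
containment_radius = 1.5                        # radius of the "home base"
switch_threshold = 2.0                          # governs strategy switching
phase = 0.0

for t in range(200):
    sheep += rng.normal(0.0, 0.05, size=sheep.shape)  # sheep drift randomly
    distances = np.linalg.norm(sheep, axis=1)
    if distances.max() > switch_threshold:
        # Search and recovery: both herders move toward the furthest sheep,
        # which gets nudged back toward the center as they approach.
        runaway = distances.argmax()
        herders += 0.2 * (sheep[runaway] - herders)
        sheep[runaway] *= 0.9
    else:
        # Oscillatory containment: the herders circle the herd out of phase.
        phase += 0.3
        herders = containment_radius * np.array(
            [[np.cos(phase), np.sin(phase)],
             [np.cos(phase + np.pi), np.sin(phase + np.pi)]])

print("final max sheep distance:",
      round(float(np.linalg.norm(sheep, axis=1).max()), 2))
```

The point is the structure: a distance variable, a radius, an oscillation phase, and one parameter that flips the system between regimes.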

When he demonstrated his model, there were a few spontaneous bursts of surprised laughter from the audience. The behavior of the mathematical representation closely mimicked the play of humans, chasing sheep until they had them rounded up and then running oscillating containment routes.

But this was just one of the many approaches Richardson demoed for examining how behavior unfolds over time. His research group at the University of Cincinnati offers a week-long workshop on Nonlinear Methods for Psychological Science every summer, and his presentation today touched on many of these methods—from cross-recurrence quantification analysis, to explicit mathematical models of coupled oscillators, to extracting summary measures of complexity such as the fractal dimension.

These models have allowed him to capture patterns of variability across a variety of situations—such as individuals trying to avoid collision following crossed paths or jazz pianists improvising with each other.

In work with Ashley Walton, he has found, for example, that when jazz pianists are riffing off of a simple “drone” rather than a standard swing track, they tend to exhibit more coordinated patterns of playing. The researchers believe the drone background is too simple to provide the regularity that permits more diverse improvisatory moves, whereas the regular pattern of the swing track does permit them. This finding came naturally from an approach focused on variability over time.

Underlying this dynamical perspective is a conviction that psychologists don’t need to just model minds, and they don’t need to just pay attention to summary statistics from experiments. Instead, we should be thinking more about specific task dynamics and how behavior changes over time. When we explore dynamics, we open up a whole new frontier for description and explanation.


Alex Danvers is a PhD student in social psychology studying emotions in social interactions. He uses dynamical systems and evolutionary perspectives, and is interested in new methods for exploring psychological phenomena.