There is less ‘I’ in teams

By Mina Cikara

Mina Cikara, Anna Jenkins, and Rebecca Saxe discuss their new research about how moral behavior changes when we’re part of a group. 

In the right circumstances – or perhaps we should call them the wrong circumstances – ordinary “good” people do extraordinarily “bad” things. Particularly potent examples occur when people are put into groups. A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into “mobs” that commit looting, vandalism, even physical brutality. Mina had a first-hand brush with this phenomenon when she wore a Red Sox hat to a Sox/Yankees game at Yankee Stadium.

There are many reasons why grouping people can promote bad behavior: for example, being in a group can make an individual feel more anonymous and less responsible for his or her actions, both of which promote aggression and selfishness. In a recent study (Cikara, Jenkins, Dufour, & Saxe, 2014), we tested a third mechanism by which being in a group could promote intergroup aggression: by diminishing the salience of one’s own personal moral standards.

For individuals acting alone, the current, on-line salience of personal moral standards is one predictor of moral behavior: for example, explicitly reflecting on their personal moral standards makes people subsequently less likely to cheat on a test, or even on their taxes (Mazar et al., 2008; Shu et al., 2012). Social psychologists have theorized that acting in a group could distract from reflection about personal moral standards and personal identity, by focusing attention on the group’s norms and the group identity, which could facilitate aggression (e.g., Diener, 1979; Van Hiel et al., 2007).

However, this hypothesis is difficult to test, because it’s difficult to measure the immediate accessibility of personal standards of morality. In a recent experiment, we therefore tested this hypothesis using an online, unobtrusive measure of ongoing psychological processes: functional magnetic resonance imaging (fMRI). We capitalized on the observation that a particular region of medial prefrontal cortex (mPFC) has been associated consistently with thinking about the self. Across a variety of tasks, this mPFC region responds more when participants reflect on their own characteristics (Jenkins & Mitchell, 2011; Kelley et al., 2002) or process self-relevant information (Moran et al., 2009) than when they think about, or encounter information relevant to, others. Using an independent localizer to identify this region of mPFC, we then tested whether participants would exhibit less mPFC response to morally-relevant information about themselves when competing as part of a group than when competing as an individual, and, in turn, whether this reduction would predict harming members of the other group.

Our design had three key ingredients:

First, we constructed stimuli that evoked each participant’s own personal morality. Prior to their scanning sessions, participants rated the personal applicability of a large set of statements about moral behaviors (e.g. “I have never cheated on an exam”), enabling us to pre-select subsets that did, and did not, apply to each participant. As a non-moral control condition, we also included sentences that did, and did not, describe each participant’s social-communicative behaviors (e.g. “I check my email more than four times a day.”)

Second, we developed a competitive “game” that participants could play in the fMRI scanner, both alone (i.e., competing against other individuals for an individual bonus of $10) and as part of a group of ten players (i.e., competing against another 10-person group for a shared bonus of $100). We told our participants that they had been assigned to their group based on personality characteristics, and that the other members of their group were present, playing simultaneously, and seeing their mutual outcome during the game. To reinforce this impression, at the beginning of the “group” component, participants saw a video of images of the 9 other members of their group, ostensibly “logging into” the game. During the “game”, participants responded as quickly as possible to sentences about social-communicative behaviors (i.e., the control sentences), and did not respond to any other sentences they saw (i.e., the moral sentences). As a result, we were able to measure the response in each individual’s mPFC while they were reading, but not responding to, sentences that did (versus those that did not) describe their own moral behaviors. In light of previous findings, we reasoned that the magnitude of activity in mPFC for self-related moral sentences could serve as an index of participants’ incidental encoding of the self-relevance of these moral behaviors, and therefore as a measure of the salience of personal moral standards.

Third, we assessed people’s willingness to harm individuals from the competing group, even when the harm does not instrumentally serve the participant or their group. After the scanning session, we told participants that we were developing materials for subsequent publicity, and asked them to choose one photograph, from each of four arrays, for publication; two arrays depicted “players” from the participant’s own group, and two depicted players from the competing group. The photographs in each array varied in how flattering they were (as rated by an independent set of participants), allowing us to quantify the degree to which participants were willing to harm others by publicizing their less flattering moments. Perhaps unsurprisingly, participants harmed competitors more than group-mates, by choosing less flattering photographs.
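
To make the harm measure concrete, here is a minimal sketch, in Python, of how such a score could be computed. Everything in it – the array names, the ratings, the choices, and the scoring rule itself – is a hypothetical illustration, not the actual data or scoring used in the paper.

```python
import numpy as np

# Hypothetical flattering-ness ratings (higher = more flattering) for each photo
# in each array, as judged by independent raters, plus the index of the photo
# the participant chose for "publication" from each array.
flattering_ratings = {
    "ingroup_array_1":  np.array([4.8, 3.9, 2.1, 1.5]),
    "ingroup_array_2":  np.array([4.5, 3.6, 2.4, 1.8]),
    "outgroup_array_1": np.array([4.7, 3.8, 2.2, 1.6]),
    "outgroup_array_2": np.array([4.6, 3.7, 2.3, 1.7]),
}
chosen = {"ingroup_array_1": 0, "ingroup_array_2": 1,
          "outgroup_array_1": 3, "outgroup_array_2": 2}

# One way to score harm: how unflattering the chosen competitor photos were,
# relative to the chosen group-mate photos (lower rating = more harm).
ingroup_choice = np.mean([flattering_ratings[k][chosen[k]]
                          for k in chosen if k.startswith("ingroup")])
outgroup_choice = np.mean([flattering_ratings[k][chosen[k]]
                           for k in chosen if k.startswith("outgroup")])
harm_score = ingroup_choice - outgroup_choice  # positive = harmed competitors more
print(f"Competitor-harm score: {harm_score:.2f}")
```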

Consistent with our hypothesis, we found that participants who exhibited lower mPFC response to self-relevant (versus self-irrelevant) moral items while competing as part of a group selected less flattering photographs of competitors (versus group-mates). That is, reduced mPFC response to moral items was associated with greater willingness to harm competitors. Notably, the relationship between mPFC activation and harm was specific to moral items; there was no relationship between competitor harm and mPFC response to the control sentences (self versus other social-communicative behaviors).
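
The brain–behavior test here is, at its core, a correlation across participants. A minimal sketch with made-up numbers (these are not the study’s data, and the variable names are only illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values (illustrative numbers only):
# mpfc_drop: reduction in mPFC response to self-relevant moral items when
#            competing in a group (vs. alone)
# harm:      competitor-harm score from the photo task, as sketched above
mpfc_drop = np.array([0.10, 0.45, 0.22, 0.60, 0.05, 0.38])
harm      = np.array([0.20, 1.10, 0.40, 1.30, 0.10, 0.90])

r, p = stats.pearsonr(mpfc_drop, harm)
print(f"r = {r:.2f}, p = {p:.3f}")  # the prediction: the larger the drop, the more harm
```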

Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an “us” and a “them.” It will be the task of future research to understand why certain individuals are more prone than others to “lose themselves” in intergroup competition. Also, as noted above, this process alone does not account for intergroup conflict: groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as “necessary for the greater good.” Still, these results suggest that, at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of “mob mentality.”

Parts of this blog post have been drawn from the following paper:

Cikara, M., Jenkins, A. C., Dufour, N., & Saxe, R. (2014). Reduced self-referential neural response during intergroup competition predicts competitor harm. NeuroImage, 96, 36-43.


Starting this summer, Mina Cikara will be an Assistant Professor in the Psychology Department at Harvard University. Her primary line of research examines the conditions under which people and social groups are denied social value, agency, and empathy. She uses social psychological and cognitive neuroscience approaches to study how misunderstanding, failures of empathy, and pleasure at others’ misfortunes—Schadenfreude—unfold in the mind and brain. She tweets about psychology and neuroscience @profcikara and can be reached at [email protected].

Adrianna (Anna) Jenkins received her Ph.D. in Psychology from Harvard University and is currently a postdoctoral scholar at UC Berkeley. She studies the processes carried out by the set of brain regions known collectively as the default network, with a particular emphasis on the function of the medial prefrontal cortex (mPFC). One of her lines of research investigates the relationship between self-reflection and reflection on the minds of others. You can find her papers here and reach her at [email protected].

Rebecca Saxe is an Associate Professor of Cognitive Neuroscience in the department of Brain and Cognitive Sciences at MIT. She studies how the brain constructs abstract thoughts, including thoughts about thoughts. To learn more, check out her TED talk, or reach her at [email protected].

This is My Brain on Social Cognition

By Bob Spunt

I’m an odd breed of Social Psychologist known as a Social Cognitive Neuroscientist. I am this way because of two basic beliefs about people like you and me. The first is that our brains make our minds, and the second is that our minds are made to live in a social world. The former belief is at the heart of the discipline of Cognitive Neuroscience, while the latter is at the heart of the discipline of Social Psychology. When you put these disciplines together, you not only get a mouthful of syllables, you also gain the power to use neuroimaging methods such as functional magnetic resonance imaging (fMRI) to view what is happening inside a person’s head while they do the work of social cognition. My own studies using fMRI have allowed me to photograph hundreds of other people’s brains on social cognition. Recently, I got the opportunity to photograph my own brain on social cognition. I’m here to tell you the story of how I was able to do this. By the end of my story, you may find that you’ve learned a thing or two about how fMRI can be used to measure the inner workings of not just my social mind, but everyone’s social mind, including you, too.

How to Photograph the Social Mind


Figure 1. Two forms of proof that I have a brain. Panel a shows three photographs taken from a structural MRI scan of my head that took 5 minutes to complete. Panel b shows three photographs from a functional MRI scan of the same areas of my head. This scan took just 1 second to complete and contains information about the regions of my brain that were most active during that 1-second period. Hence, by collecting many of these scans back-to-back, I can see how my brain activity was affected by second-to-second changes in what I was doing, thinking, or feeling during the scan. This is the basic logic of photographing the mind with an MRI machine.

Three basic ingredients are needed to photograph the inner workings of the social mind. The first is a living brain to view. Fortunately, I not only have a brain, but I’m also alive. The second ingredient we’ll need is a camera for taking photographs of the inside of my head. The camera I used is an MRI machine, namely, a Siemens Trio 3.0 Tesla scanner located at the Caltech Brain Imaging Center. The top panel in Figure 1 shows three sections (or slices) from one of the 3D pictures taken of me during my MRI session. This is from an anatomical scan that took about 5 minutes to complete. Although it proves that I have a brain inside of my head, it does not provide any information about what my brain was doing while the scanner was busy taking the picture. To get this kind of functional information, I underwent another kind of MRI called echo-planar imaging (EPI). You can see one of my EPIs in the bottom panel of Figure 1. The pixel intensity in these images provides a measure of what is known as Blood Oxygenation Level Dependent (BOLD) signal. To make a long story short: In order to do their work, brain regions need oxygen, which is delivered via the bloodstream. Importantly, oxygen is routed to different brain areas on the basis of need: When activity in a brain region increases, the body responds by delivering more oxygen to that region; when brain activity decreases, the body responds by delivering less oxygen to that region. These changes in “blood oxygenation” are the basis of the BOLD signal. Critically, whereas my anatomical image took 5 minutes to produce, each one of my EPIs took just 1 second to produce. Hence, by acquiring EPIs consecutively while I use my brain, I can detect changes in BOLD signal that are caused by momentary changes in what is on my mind.
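
If you’d like to see what EPI data look like from a programmer’s point of view, here is a minimal sketch in Python using the nibabel library. The filename is hypothetical and the voxel coordinates are arbitrary; the point is simply that a functional scan is a 4D array whose fourth dimension is time.

```python
import nibabel as nib
import numpy as np

# Load a 4D EPI timeseries (x, y, z, time). "epi.nii.gz" is a hypothetical filename.
img = nib.load("epi.nii.gz")
data = img.get_fdata()            # 4D array: (x, y, z, volumes)
print(data.shape)                 # e.g. (80, 80, 56, 300) for ~300 one-second EPIs

# Pull out the BOLD signal of a single (arbitrarily chosen) voxel,
# then express it as percent change from its own mean.
x, y, z = 40, 40, 28
timeseries = data[x, y, z, :]
pct_change = 100 * (timeseries - timeseries.mean()) / timeseries.mean()
```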

Our third and final ingredient is a method for experimentally controlling when social cognition is on my mind, and when it isn’t. This is the independent variable (IV) that I will use to cause changes in the dependent variable (DV), that is, BOLD signal. Of the three ingredients, this is the most important, because without it I’d have no way to understand what I was looking at when I finally examined my brain images. To illustrate the problem here, imagine you’re about to undergo an fMRI scan for 5 minutes. I give you the following instructions: “Just close your eyes and relax. Do not fall asleep.” Now, ask yourself: What would you be doing, thinking, and feeling during that 5-minute period? Maybe you’d focus on your breathing, or the sound of the running scanner, or the visual images that occasionally pop into your head. Maybe you’d reflect on a recent conversation with a friend, or worry about an upcoming deadline. Maybe you’d fall asleep even though I told you not to. Maybe you’d do all of those things, yet at different times during your scan. Who knows? Certainly not me. Thus, when I get to looking at your brain activity, I’ll no doubt be seeing your brain doing something, but I’ll have no idea what for. That is, I won’t know what was on your mind at the time each image was being acquired.

Fortunately, experimentally controlling social cognition is one of the many things I learned to do in the Social Psychology doctoral program at UCLA. At this point, I should stop saying “social cognition” – which is too broad and abstract to be scientifically tractable – and start speaking in terms of the specific part of social cognition my research focuses on, namely, the ability to make attributions about human behavior (Malle, 2007; Heider, 1958). To make sure we’re all on the same page regarding what I mean by attribution, take a quick moment to answer the following question about me: “Why did this Social Cognitive Neuroscientist guy write this post?” If you’re like most people, you’ll have no problem rattling off a variety of plausible answers to this question, such as “He’s trying to teach others about his profession“, “He thought he’d acquire some kind of fame“, and the uninformative (yet still attributional) “He’s doing it because he wants to do it”. Even if you didn’t come up with an answer of your own, you can probably appreciate the plausibility of the answers I provided, especially when they are contrasted with an implausible one: “He wants to fall in love”. Our ability to construct and evaluate attributions about behavior is a fundamental part of how we understand the social world. What is happening inside our heads when we do this?

To photograph my own ability to make attributions, I need to be able to control when I am (and am not) using that ability. In my first fMRI studies of attribution, published with Matt Lieberman while I was at UCLA, I used a natural method for putting attribution on other people’s brains: I presented them with either verbal descriptions, photographs, or videos of human behavior, and I asked them to answer the attributional question: “Why is the person doing it?” While this is clearly a method for putting attribution on the brain, it no doubt puts quite a lot of other things on the brain too, and many of these things have little to do with attribution per se. For example, consider how much your mind had to do to answer the attributional question I asked you to answer above, including: (1) moving your eyes while you read the sentence; (2) recognizing the individual words and comprehending the overall meaning of the sentence; and (3) retrieving the knowledge and words necessary to verbalize your answer. None of these cognitive processes are specific to answering why-questions, since they’re also involved in answering basically any question I might pose to you now. This fact is particularly problematic in fMRI studies, since the EPIs contain information about basically everything that is on a person’s mind, regardless of whether it is or isn’t essential to the explicit task the person is performing. Hence, to get a clear photograph of attribution (and only attribution), I needed a control question that did not require attributional processing to answer, but which required all of the other cognitive processes involved in the attribution condition. Fortunately, the question “Why is the person doing it?” has a natural opposite: “How is the person doing it?” For example, the control question for the one I posed to you earlier would simply be: “How did this Social Cognitive Neuroscientist guy write this post?” Even though you don’t know exactly how I wrote this post, you can probably appreciate the plausibility of an answer like “He used a computer” and the implausibility of an answer like “He rode a bicycle“. Hence, to separate brain activity that is specific to attribution from brain activity that is involved in answering questions in general, I can measure brain activity associated with answering both why- and how-questions, and then simply subtract the latter from the former (for a critical discussion of this idea of “cognitive subtraction”, see Friston et al., 1996).
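
For readers who like code, here is a toy illustration of that subtraction logic in Python. In real analyses the subtraction is implemented inside a regression model rather than by simple averaging, and the data and volume indices below are stand-ins, but the core idea is the same: average per voxel within each condition, then subtract.

```python
import numpy as np

# Stand-in for real EPI data: a 4D array of (x, y, z, time).
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 4, 4, 300))

# Hypothetical lists of volume indices acquired while each question type was on screen.
why_volumes = list(range(0, 150))
how_volumes = list(range(150, 300))

why_map = data[..., why_volumes].mean(axis=-1)  # mean activity per voxel, why trials
how_map = data[..., how_volumes].mean(axis=-1)  # mean activity per voxel, how trials
contrast = why_map - how_map                    # Why > How: ideally, what remains is
                                                # attribution-specific activity
```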

Why-questions to put attribution on the brain, and how-questions to take it (and only it) off the brain. In Social Psychology, the comparison of why and how questions is known as the Why/How Task (Fujita, Trope, Liberman, & Levin-Sagi, 2006; Freitas, Gollwitzer, & Trope, 2004; Strack, Schwarz, & Gschneidinger, 1985). In the version of the task I performed, I answered a series of questions about a set of photographs showing familiar human behaviors. Every photograph appears twice, once for a how-question and once for a why-question. In cognitive psychology, this is known as an attentional manipulation; in social psychology, I imagine this would also be called a construal manipulation. These are similar ways of saying that what differs across the two question types is not the stimulus I am looking at (i.e., the photograph), but rather the manner in which I attend to, interpret, or construe that stimulus.

I had the necessary ingredients: a brain (me), a camera to view my brain in use (fMRI), and an experimental method to put attribution on and off my brain (Why/How Task). My social mind was ready to have its photo taken. Here’s a brief description of my experience. Upon entering the room housing the MRI scanner, I put on earplugs to protect my ears from the loud noises the scanner produces while it is running. Then I lay down on a reasonably comfy table attached to the front of the scanner, and my head was snugly positioned inside a cage-like 32-channel head coil. A mirror was placed on top of the head coil so that I could see a monitor positioned at the back of the scanner, which would be used to visually present the Why/How Task to me. My left hand held a squeeze ball that I could use to sound an alarm in case of an emergency, and my right hand held a button box that I would use to make my responses. Once I was set up with all of this equipment, I was moved deep into a 60 cm wide tube (the scanner bore). Even though I’ve been inside an MRI machine more than 10 times in my lifetime, I still felt a little claustrophobic when I first entered that tight little space. The feeling quickly dissipated, and in no time I was using my brain to answer Why and How questions about people in photographs. The Why/How Task took just over 5 minutes to complete. Altogether, my MRI session took about 30 minutes.

Getting My Social Mind’s Photos Developed

Figure 2. Graphs relating what I was seeing and doing during the scan to what my brain was doing.

To develop my mind’s photographs, I’d have to deal with the data first. Immediately after I finished performing the Why/How Task, a single data file was saved to my computer that contained important information about my performance of the task, including when the different types of questions occurred, how long it took me to give my answers, and what answers I actually gave. Although that data file weighs in at a meager 7 KB (i.e., 0.007 MB), it is critical for developing my photos because it tells me what my mind was up to at the time each image was being collected. This is illustrated in the first two panels of Figure 2, which show when I was and wasn’t answering the two types of questions for each of the 300+ images that were collected, one per second, over the course of the scan.
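
Here is a rough sketch of what reading such a behavioral log might look like. The file name and column names are hypothetical, and the actual file format will differ, but the idea is simply to recover, for each question type, when the trials happened relative to the start of the scan.

```python
import csv
from collections import defaultdict

# Hypothetical log: one row per trial, with the question type, the trial onset
# in seconds from the start of the scan, my answer, and my reaction time.
onsets = defaultdict(list)
with open("whyhow_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        onsets[row["question_type"]].append(float(row["onset_sec"]))

# With ~1-second EPIs, an onset in seconds tells me (approximately) which
# image was being collected when each trial began.
print(onsets["why"][:5], onsets["how"][:5])
```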

At 7 KB, the behavioral data is easy to process. Unfortunately, the MRI data files are another story entirely. Each whole-brain image file weighs in at just under 1 MB. This is already a lot more data than my behavioral data file, and for good reason: Each image contains 358,400 different volumetric pixels (called voxels), each containing information about a different part of the 3D space that my head was placed within. Yet, this is just one of over 300 images of the same size. The result is about 300 MB worth of raw data for a task that took just over 5 minutes to complete.
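
If you like back-of-the-envelope arithmetic, the mismatch between the two data streams looks roughly like this (the per-image size is approximate and depends on the file format the scanner writes):

```python
# Rough arithmetic for the numbers above; all values are approximate.
voxels_per_image = 358_400
mb_per_image = 1.0        # "just under 1 MB" per whole-brain EPI
n_images = 300            # 300+ one-second images over the ~5-minute task
behavioral_mb = 0.007     # the 7 KB log file

total_raw_mb = n_images * mb_per_image   # ~300 MB of imaging data
print(f"{total_raw_mb:.0f} MB of fMRI data vs {behavioral_mb} MB of behavior "
      f"(a ~{total_raw_mb / behavioral_mb:,.0f}x difference)")
```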

Before I could even begin to analyze my fMRI data, I had to preprocess my images. To minimize the risk of losing you to some other blog that doesn’t contain the words “realignment”, “segmentation”, and “diffeomorphic”, I will keep this short. Preprocessing is as important to fMRI data analysis as it is frustrating, tedious, and distracting to most fMRI researchers. It’s important because the raw data contains many sources of noise that must be minimized in order to improve your chances of seeing the signal you really care about. In my case, I care about changes in BOLD signal that are caused by changes in the kind of question I was answering. The image timeseries contains these changes, no doubt, but it also contains changes caused by a host of other factors, for instance, head motion. Figure 3 is a 30-second video of functional data from the front of my face (the two circles at the bottom are my beautiful blue eyes) for the entire duration of the Why/How Task, shown at 10 times the actual speed. Here, pixel brightness corresponds to signal strength. It’d be great if this also meant that pixel brightness corresponds to strength of brain activity. Yet, clearly it doesn’t. Just take a look at how much signal change occurs in and around my eyes whenever I blink. If simply moving my eyelids can cause that much signal change, just think how much would be caused by moving my entire head. Because of this, most fMRI studies will literally throw out a participant’s data if they moved their head more than a tenth of an inch! Fortunately, I was virtually a cadaver in there.

Figure 3. Bob’s Brain, Hard at Work. If you look close enough, you can see the mind-body problem.
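
For the curious, here is one common way head motion gets quantified, sketched in Python: framewise displacement computed from the six realignment parameters that motion correction estimates for every volume. The file name and the exact cutoff are illustrative, not the specific procedure used here; a tenth of an inch is roughly 2.5 mm.

```python
import numpy as np

# Hypothetical motion-parameter file: one row per volume, six columns
# (3 translations in mm, 3 rotations in radians).
params = np.loadtxt("motion_params.txt")          # shape: (n_volumes, 6)
rotations_mm = params[:, 3:] * 50.0               # rotations -> mm on a 50 mm sphere
motion = np.hstack([params[:, :3], rotations_mm])

# Framewise displacement: summed absolute volume-to-volume change in all six parameters.
fd = np.abs(np.diff(motion, axis=0)).sum(axis=1)

if fd.max() > 2.5:   # illustrative cutoff, roughly a tenth of an inch
    print("Too much head motion: this run might get thrown out.")
else:
    print("Virtually a cadaver in there.")
```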

To detect brain activity that is specifically associated with the attribution process, I needed to know when my brain was and wasn’t on attribution. Fortunately, my behavioral data file lets me know what kind of question I was answering at the time each of my 300+ images was being collected. This, in turn, allows me to predict brain activity changes that are caused by changes in the task I was performing. As you can see in panel c of Figure 2, the timeseries of the two question types would be expected to produce different patterns of brain activity across time. Panel d shows actual timeseries data extracted from my visual cortex, which was active regardless of the kind of question I was answering. This shouldn’t be surprising, since every question required visually attending to color photographs of complex social scenes.
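
Here is a rough sketch of how a predicted timeseries like the one in panel c can be built: mark when a given question type was on screen, then smooth that “boxcar” with a model of the sluggish hemodynamic response. The onsets, trial duration, and the simple difference-of-gammas HRF below are illustrative simplifications, not the exact model I used.

```python
import numpy as np
from scipy.stats import gamma

n_volumes, tr = 300, 1.0                        # ~300 one-second EPIs
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # common canonical-HRF approximation
hrf /= hrf.max()

why_onsets = [10, 40, 70, 100, 130]             # hypothetical onsets, in seconds
boxcar = np.zeros(n_volumes)
for onset in why_onsets:
    boxcar[int(onset / tr): int((onset + 6) / tr)] = 1   # assumed 6-second trials

predicted = np.convolve(boxcar, hrf)[:n_volumes]  # what a "why" voxel should look like
```

Comparing a predicted timeseries like this against the measured signal at every voxel is, in essence, what the regression model does.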


Figure 4. Brain activity maps of my left hemisphere while answering attributional (why) and factual (how) questions. The rightmost map is what is known as a contrast map, and is produced by subtracting the activity map for factual questions from the activity map for attributional questions. In all maps, yellow/red indicates a positive relationship while blue indicates a negative relationship. The top row shows the lateral (side) surface of the hemisphere (imagine looking at me while I’m facing to my right). The bottom row shows the medial (inside) surface of the hemisphere (imagine looking at me while I’m facing to my left, and suddenly the right half of my head is removed).

Although both why and how questions demand the same kinds of visual processing, they differ critically in their demands on attributional processing. By determining which regions are more strongly associated with answering why questions than with answering how questions, I can see what’s happening inside my head that is specific to attribution. Figure 4 visualizes how this is done. The two images on the left show the strength of the relationship between each question type and every region of the left hemisphere of my brain. Notice that the two question types result in brain activity patterns that are generally pretty similar. This reflects the fact that the two question types are, by design, very similar in the demands they placed on my mind: Not only can you see strong activity in my visual cortex for both question types, but you can also see strong activity in my left motor cortex, due to the right-hand finger presses I made when responding to the task.

To see how why and how questions differ, I simply subtract the latter from the former. This is the Why > How contrast shown in the rightmost map in Figure 4. This map is unthresholded, so it also shows regions with similar levels of activity across the two question types, as well as regions that actually responded more strongly to how than to why questions. To get those pretty lava blobs you see in publications and in the press, I can statistically threshold the map so that I see only the regions that were positively and reliably associated with answering attributional (relative to factual) questions. This map is shown in the top panel of Figure 5. Yup, that’s me. Or rather, that’s me using my ability to make attributions about human behavior. Now that’s a selfie I can get behind.
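
For the curious, the thresholding step looks roughly like this in code. The contrast values, standard errors, and the cutoff below are all stand-ins; real analyses estimate them with a regression model and usually also correct for the huge number of voxels being tested.

```python
import numpy as np

# Stand-ins for a real Why > How contrast map and its per-voxel standard errors.
rng = np.random.default_rng(1)
contrast = rng.normal(size=(4, 4, 4))
contrast_se = np.full_like(contrast, 0.5)

z_map = contrast / contrast_se
thresholded = np.where(z_map > 3.1, z_map, np.nan)  # the "lava blobs": only voxels
                                                    # reliably why > how survive
```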

This is Our Brain on Social Cognition

Let’s see how I compare to a photo of the typical person’s brain on attribution, shown in the bottom panel of Figure 5. This map is essentially an average of the Why > How contrast from a separate sample of 59 healthy adults, all of whom performed the same task while undergoing fMRI. It shows that I used most of the same brain regions that the typical person uses when doing the task. I honestly did not expect my results to look as typical as this. After all, I wrote the test. I had taken the test many times before. I was familiar with all of the photographs, and wrote all of the questions. Yet, I used largely the same parts of my intellect to perform the task as a person who is taking the test for the first time.


Figure 5. This is Our Brain on Social Cognition. Panel a shows the regions of my brain’s left hemisphere that were specifically activated when I was answering attributional questions about people, as opposed to when I was answering factual questions about the same people. Panel b displays the same comparison measured in a group of over 50 healthy adults who performed the same task.

Here’s how I explain this strange observation: Making attributions is so natural for the typical person – me included – that the best way for me to approach the task was to just use this ability in the same way I use it to make sense of the people I interact with and think about outside of the scanner. Given that such social sense-making is a near-constant part of living in human society, I’d say that most of us spend a large part of our entire lives taking – and passing – attribution tests. And when we do, the studies I’ve conducted so far suggest that nearly everyone uses the same parts of their brain to answer all kinds of different attributional questions (Spunt & Adolphs, in press; Spunt & Lieberman, 2012b; Spunt & Lieberman, 2012a; Spunt, Satpute, & Lieberman, 2011; Spunt, Falk, & Lieberman, 2010). To give just a few examples, my work suggests we use the same cognitive ability to make attributions for both a person’s intentional actions and their emotional facial expressions, and regardless of whether we’re watching their behavior or simply reading about it. In ongoing work, I am examining how attributional processing breaks down in autism spectrum disorders, as well as in people with lesions to parts of the brain thought to be important for social cognition.

But, all that is another story. For now, I’ll conclude by confessing that, stereotypically, I did go into psychology partly because I thought it would help me understand myself. Early on in my research career, I did some research on indecisiveness, and that was definitely motivated by my own difficulties with handling decisions. But I never really saw my research on attribution as a case of “re-search is me-search”. That is, until now.

References

Freitas, A. L., Gollwitzer, P., & Trope, Y. (2004). The influence of abstract and concrete mindsets on anticipating and guiding others’ self-regulatory efforts. Journal of Experimental Social Psychology, 40(6), 739-752.

Friston, K. J., Price, C. J., Fletcher, P., Moore, C., Frackowiak, R. S. J., & Dolan, R. J. (1996). The trouble with cognitive subtraction. NeuroImage, 4(2), 97-104.

Fujita, K., Trope, Y., Liberman, N., & Levin-Sagi, M. (2006). Construal levels and self-control. Journal of Personality and Social Psychology, 90(3), 351-367.

Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

Malle, B. (2007). Attributions as behavior explanations: Toward a new theory. In D. Chadee & J. Hunter (Eds.), Current Themes and Perspectives in Social Psychology (pp. 3-26). St. Augustine, Trinidad: The University of the West Indies.

Spunt, R. P., & Adolphs, R. (in press). Validating the why/how contrast for functional MRI studies of theory of mind. NeuroImage.

Spunt, R., Falk, E., & Lieberman, M. (2010). Dissociable neural systems support retrieval of how and why action knowledge. Psychological Science, 21(11), 1593-1598.

Spunt, R., & Lieberman, M. (2012a). An integrative model of the neural systems supporting the comprehension of observed emotional behavior. NeuroImage, 59(3), 3050-3059.

Spunt, R., & Lieberman, M. (2012b). Dissociating modality-specific and supramodal neural systems for action understanding. Journal of Neuroscience, 32(10), 3575-3583.

Spunt, R., Satpute, A., & Lieberman, M. (2011). Identifying the what, why, and how of an observed action: An fMRI study of mentalizing and mechanizing during action observation. Journal of Cognitive Neuroscience, 23(1), 63-74.

Strack, F., Schwarz, N., & Gschneidinger, E. (1985). Happiness and reminiscing: The role of time perspective, affect, and mode of thinking. Journal of Personality and Social Psychology, 49(6), 1460-1469.