Strength of One’s Mexican Identity Is Linked to Political Ideology

New research suggests a significant relationship between Latino identity and political ideology. The study, led by a Nevada State College (NSC) psychologist, found that U.S.-born Mexican Americans who identified less strongly with their Mexican heritage were less liberal in their political ideology. Mexican Americans, and more broadly Latinos, are the fastest-growing ethnic group in the U.S. and an important voting demographic.

“Our research may explain why some Latinos are less put off by Trump’s xenophobic, anti-immigrant rhetoric,” said lead author Laura P. Naumann, NSC assistant professor of psychology. “Although they might be forced to check the ‘Hispanic/Latino’ box on a census form, some Latinos do not feel a strong affiliation to their ethnic culture and would prefer to adopt the label ‘American.’ This strong allegiance to an American identity, even over one’s own ethnic heritage, is directly in line with Trump’s message.”

The study surveyed 323 U.S.-born Mexican Americans about their political ideology, socioeconomic status, the strength of their identification with Mexican and American cultures, and their attitudes towards acculturating to American culture. The study is published online in the journal Social Psychological and Personality Science and is co-authored with Verónica Benet-Martínez (ICREA Professor at Universitat Pompeu Fabra; Barcelona, Spain) and Penelope Espinoza (The University of Texas at El Paso).

Those who strongly identified with Mexican culture were also more likely to support attitudes that encouraged the integration of both Mexican and American values and practices into one unified identity (e.g., “I feel that Mexicans should maintain their own cultural tradition, but also adapt to Anglo-American customs”). In contrast, those who held weak Mexican identification were more likely to support full assimilation to American culture (e.g., “I feel that Mexicans should adapt to Anglo-American cultural traditions and not maintain their own”). The study’s results show that these differences in acculturation attitudes could partially explain why some participants were more or less liberal in their political ideology.

“It is not surprising that biculturally-identified Mexican Americans are more liberal and supportive of socially progressive values,” says Benet-Martínez, senior author of the study and professor of psychology. “The Democratic Party touts the country’s history as a ‘nation of immigrants.’ This branding, implicitly and explicitly, communicates that being American means being proud of all of the cultures and ethnicities that our nation is comprised of.”

One’s level of socioeconomic status also mattered for holding more liberal or conservative ideologies. In line with prior research, those in higher social classes were significantly less liberal, but this was most true for those participants who simultaneously belonged to higher social classes and had the weakest identification with Mexican culture.

On the eve of the 2016 general election, both political parties want to maximize their appeal to Latino voters. Naumann cautions that cultural appeals may backfire with conservative Latinos because they make salient a cultural identity that is unimportant to these voters, or lump them into a cultural group these individuals have actively sought to minimize. Conservative candidates would have more success discussing issues that appeal to these Latinos’ more salient American or upper-class identities.

On the other hand, write the researchers, liberal Latinos gravitate towards candidates who value multiculturalism and who can demonstrate cultural awareness when referencing their group’s cultural traditions, practices, or language.


Study: Laura P. Naumann, Verónica Benet-Martínez, and Penelope Espinoza, “Correlates of Political Ideology Among U.S.-Born Mexican Americans: Cultural Identification, Acculturation Attitudes, and Socioeconomic Status,” Social Psychological and Personality Science, DOI: 10.1177/1948550616662124, first published online August 11, 2016.

Social Psychological and Personality Science (SPPS) is an official journal of the Society for Personality and Social Psychology (SPSP), the Association for Research in Personality (ARP), the European Association of Social Psychology (EASP), and the Society for Experimental Social Psychology (SESP). Social Psychological and Personality Science publishes innovative and rigorous short reports of empirical research on the latest advances in personality and social psychology.

All the cool kids aren't doing it: Teens stink at judging peers' behavior

By Sarah W. Helms

When you think about teenage peer pressure, plenty of images likely come to mind. Perhaps a classic after-school TV special, or a dramatic D.A.R.E. program skit with a dimly lit basement and one friend saying “Come on, everybody’s doing it.” Indeed, a good deal of prior research has focused on direct forms of pressure between friends. But if these images don’t fully resonate with your own memories of high school, you may be onto something. New research suggests that although these direct forms of pressure may exist, teens likely are influenced in other, more indirect ways too.

Think back to high school. You probably had a pretty good sense of who the cool kids were, as well as who was getting high or having sex or who was studying all day long. Everyone knew what was going on, right?

But what if we were all wrong?

According to this new research, teens think they know how much their peers engage in a variety of potentially risky behaviors such as substance use, theft, vandalism and sex. They also think they know how much their peers engage in healthier behaviors, such as studying and exercising. The only problem is, they’re wrong. And not only that, but the more wrong they are, the more likely they’ll be to increase their own substance use over the next few years.

It’s a classic high school version of “Keeping Up With The Joneses” that may place some teens at risk for unhealthy or even dangerous outcomes.

Overestimating the bad and underestimating the good

So how did we find this out? We examined the perceptions of over 400 high school students at two different schools over the course of several years in two separate studies on peer influence.

The first phase of the study examined 235 10th-graders at a middle-income, suburban high school in the Northeast. First, the students identified which of their peers belonged in which social crowds – the jocks, popular kids, burnouts and nerds – using a validated system of peer nominations. Through this process, students could nominate which of their peers belonged to which crowd, and researchers could create standardized scores based on how often individuals were nominated.
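The scoring step described above amounts to counting nominations and standardizing the counts. Here is a minimal sketch in Python, with invented nomination counts for illustration (the study's actual scoring procedure may differ in its details):

```python
from statistics import mean, stdev

# Hypothetical counts: how many classmates nominated each student
# into the "jocks" crowd (invented data for illustration).
jock_nominations = {"A": 12, "B": 3, "C": 0, "D": 9}

counts = list(jock_nominations.values())
m, s = mean(counts), stdev(counts)

# Standardized (z) score: how far each student's nomination count
# falls from the class average, in standard-deviation units.
z_scores = {name: (n - m) / s for name, n in jock_nominations.items()}
```

Standardizing puts every crowd on the same scale, so a student's score reflects how strongly the class associates them with a crowd relative to their peers.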

Next, students reported their perceptions of how frequently those crowds engaged in behaviors including smoking, drinking, marijuana use, sexual intercourse, oral sex, vandalism, theft, studying and exercising. They also reported on how frequently they actually engaged in these same behaviors, enabling researchers to make direct comparisons between real behaviors and perceptions of behaviors.

It may come as no surprise that the jocks and popular teens consistently emerged as the highest status groups in the school. These teens were well-liked, respected and at the top of the social hierarchy.

What is surprising, though, is how much teens consistently overestimated the risk behaviors of their peers. In virtually every case, the jocks and popular teens did not use more substances, have more sexual partners, or break the rules any more than all the other kids at school. But they were perceived as doing all these risky behaviors a lot.

And these misperceptions also ran in the opposite direction. Take studying, for example – a decidedly less “cool” thing to do during adolescence. Teens over-attributed studying to the nerds by far, even though the nerds really didn’t study more than anyone else in the school.

In short, the teens saw their peers through a caricatured lens. Jocks must be exercising and having sex all the time, then partying all weekend with the popular kids, right? Because that’s what the cool kids do. And nerds must be studying every waking moment…because they’re nerds. But these caricatures were simply wrong.

Why misperceptions matter

Perhaps if the research stopped here, you’d have an interesting tidbit to share at your next high school reunion. “See, we really weren’t all that different after all!” But there’s more. And this is where the misperceptions become concerning from a public health perspective.

In a second phase of the study, a separate group of 166 ninth-graders from a rural, low-income high school in the Southeast provided the same information – who are the cool kids, how much do you think those kids smoke cigarettes, drink alcohol, and smoke pot, and how much do you actually use those same substances? Only this time, the students were followed until the end of their junior year.

Not only were the cool kids misperceived all over again, essentially replicating the findings of the first study, but the misperceptions mattered. Thinking that the cool kids engaged in more substance use as a ninth-grader predicted a faster rate of growth in your own substance use over the high school years. Of course, many youth may increase their substance use over those years anyway. But these findings showed that the rate of increase was much steeper among those who misperceived the social norms the most.

As you can see, this type of indirect “pressure” to keep up with the social norm is quite different from the “Come on, everybody’s doing it” pressure we often warn teens about. It is also difficult to address. Some prior work has attempted to explicitly teach young adults about the “real” social norms. For example, fliers around a college campus might tell students exactly how often other students at their school drink alcohol.

Unfortunately, these campaigns typically are unsuccessful – either because they are easily dismissed as outright lies or because students think those “average” numbers don’t apply to their specific fraternity, sports team or social group. Additionally, there is always a risk that these campaigns inadvertently could suggest to infrequent substance users that they are “underperforming” compared to their peers – certainly not a winning public health message to spread.

At the end of the day, more research is needed to understand how best to intervene with teens. But the current work does show one clear message: all the cool kids are not doing it. Or at least not as often as you may think. Whether you’re a high school freshman or an adult surveying your own social landscape, this is probably an important message to keep in mind. Because striving to meet what you think is supposed to be the social norm seems to be a losing battle.


Sarah W. Helms, Post-doctoral fellow, University of North Carolina – Chapel Hill

This article was originally published on The Conversation. Read the original article.

Trusting Groups: Size Matters

By Stephen La Macchia

How do you decide whether to approach a group of strangers for help, whether to sign a contract with one company or another, or whether to be fully honest about your abilities and interests when answering questions from a job interview panel?

There are a range of everyday interactions in which an individual must make decisions about how much to trust a group of people. These decisions are sometimes based on limited information and made with little or no previous contact with the group. So how do we decide whether the group is trustworthy?

Despite a vast research literature on the psychology of trust, relatively little is known about how people generally assess the trustworthiness of groups. To date, research has mostly focused on how people judge the trustworthiness of individuals.

In fact, the nonverbal cues of individuals’ trustworthiness are so well established that researchers can easily generate an image of a trustworthy face, and have been able to successfully program a robot to look and act trustworthy [i]! When it comes to the subtle cues or attributes that make a group look trustworthy, however, the picture is a lot less clear.

In a recent research article published in Personality and Social Psychology Bulletin [ii], my co-authors and I shed some light on how people judge groups’ trustworthiness, examining how this is influenced by one basic attribute of groups: their numerical size.

Every type of group can vary in size; there are small and large families, work teams, audiences, organizations, towns, countries. The basic or relative size of any particular group is one of the most readily perceived and easily defined attributes, so it makes sense that a group’s size would be one of the cues people use to judge its trustworthiness. A number of psychological theories indirectly suggest this possibility, but previous research has not directly and thoroughly tested it.

In our article, we present seven studies showing that all else being equal, people trust smaller groups more than larger groups.

Most of these studies were experiments in which we manipulated whether a hypothetical group was relatively small or large given the context, and had participants answer questions regarding their trust, expectations, and approach intentions towards the group. We found a subtle but consistent smaller-group trust preference across a range of contexts. This preference emerged in both abstract judgments and specific interaction scenarios involving large-scale groups (e.g., organizations and towns) and small-scale groups (e.g., decision panels), and both positive and negative potential outcomes.

For example, one study had participants contemplate a trust-sensitive financial decision involving a group (e.g., signing a contract with a company, waiting for people in a town to hand in your lost wallet). Participants trusted the group significantly more if they were told it was relatively small (e.g., one third the size of similar companies or neighboring towns) than if they were told it was relatively large (e.g., three times the size of others), and this positively influenced their willingness to take the corresponding risk towards the group.

In two other studies, participants imagined facing a disciplinary panel (e.g., for having committed academic plagiarism). Participants indicated significantly more trust of the panel and expected significantly less severe punishment when the panel consisted of three people compared to when it consisted of ten people.

So why is it that people take smaller group size as a cue to group trustworthiness?

Well, in two of our studies we measured perceptions of the groups’ warmth and competence (two basic dimensions of group perceptions and stereotypes). In these studies, we found that warmth perceptions (but not competence perceptions) positively mediated the smaller-group trust preference.
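Mediation of this kind is commonly tested with a series of regressions: the indirect effect is the product of the path from group size to warmth and the path from warmth to trust, holding size constant. The sketch below illustrates that logic with invented data and plain least squares; it is a toy demonstration of the general technique, not the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented data: smaller groups (size_small = 1) are seen as warmer,
# and warmth in turn predicts trust.
size_small = rng.integers(0, 2, n)            # 1 = small group, 0 = large
warmth = 0.5 * size_small + rng.normal(0, 1, n)
trust = 0.6 * warmth + 0.1 * size_small + rng.normal(0, 1, n)

def slope(y, *xs):
    """OLS slope of y on the first predictor, controlling for the rest."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(warmth, size_small)         # path: size -> warmth
b = slope(trust, warmth, size_small)  # path: warmth -> trust, size held fixed
indirect = a * b                      # mediated (indirect) effect
total = slope(trust, size_small)      # total size -> trust effect
```

A nonzero indirect effect, alongside a shrunken direct effect, is the regression signature of mediation.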

In another study, we asked people why they trusted smaller groups more than larger groups (once they had already indicated that they did), and they indicated that small groups are more close-knit, accountable, easy to influence, and evocative of intimacy groups (i.e., family or friends).

Altogether, these results point to the apparent “small = trustworthy” heuristic arising from small group size reminding people of their own intimacy groups and thus evoking perceptions of communal traits such as warmth and accountability. More pragmatically, the tentative finding that people see small groups as easier to influence than large groups—consistent with social impact theory—may partly explain the expectation of more favorable outcomes from a smaller group in a trust interaction.

Further research is needed to replicate this smaller-group trust preference and establish its boundary conditions, as well as to investigate possible applied implications. For example, can companies increase their brand trust by making themselves look smaller than their competitors? Can politicians make themselves appear more trustworthy by emphasizing the small towns or organizations they have belonged to? These remain open questions, but our research shows that when it comes to group size and trustworthiness, smaller is generally better.


Stephen La Macchia recently completed his PhD in social psychology at The University of Queensland in Brisbane, Australia. His research interests include trust, group perception, social norms, collective action, and political attitudes and behaviour.

References:

[i] DeSteno, D., Breazeal, C., Frank, R. H., Pizarro, D., Baumann, J., Dickens, L., & Lee, J. J. (2012). Detecting the trustworthiness of novel partners in economic exchange. Psychological Science, 23(12), 1549-1556. doi:10.1177/0956797612448793

[ii] La Macchia, S. T., Louis, W. R., Hornsey, M. J., & Leonardelli, G. J. (2016). In small we trust: Lay theories about small and large groups. Personality and Social Psychology Bulletin. Advance online publication, July 1, 2016. doi:10.1177/0146167216657360

Friendships, Vaccines, and Impressions: Upcoming Studies in SPPS

While many scientists explore what people have in common, several studies published online in Social Psychological and Personality Science show us how differences help us understand individuals.

The company you keep: Personality and friendship characteristics, Michael Laakasuo, Anna Rotkirch, Venla Berg, Markus Jokela

While it is well known that people tend to form friendships with others who have similar personalities, scientists have discovered a connection between personality traits and differences in friendship patterns. Using the five-factor personality model (extraversion, agreeableness, conscientiousness, neuroticism and openness to experience), the research shows people high in openness are about 3% more likely than people low in openness to have friends who are different from them. People high in agreeableness and extraversion showed more traditional friendship ties. The authors, from the Finnish Family Federation and the University of Helsinki Institute for Behavioral Sciences, Finland, analyzed data on 12,098 people and their 34,000 friends from the British Household Panel Survey to investigate how people's personalities are related to various characteristics of their three closest friends.

The behavioral immune system and attitudes about vaccines: Contamination aversion predicts more negative vaccine attitudes, Russ Clay

Stronger feelings of disgust predict more negative attitudes towards vaccines, according to a recent study from Russ Clay, a psychology professor at the College of Staten Island. In a pair of studies, the connection between disgust and negative vaccine attitudes emerged in both a student (study 1) and a non-student (study 2) sample. The results support findings from other studies on the connections between the behavioral immune system and vaccine attitudes.

Spontaneous trait inferences on social media, Ana Levordashka, Sonja Utz

Research shows that strong reactions help people form impressions of others, even online. Yet the material average people post typically tends to be mild, self-generated, and not particularly extreme. Researchers from the Leibniz-Institut für Wissensmedien (IWM), Tuebingen, Germany, set out to determine whether more typical status updates, like “I spilled coffee on my laptop,” and browsing behaviors could create the same immediate impressions seen in the more “extreme” settings. Their findings suggest that even with common everyday activities, spontaneous trait inferences occur on social media.


Reporters and media working on a news story may request a copy of these studies. Please contact [email protected] for your media copy.


Thinking and Feeling In Judging Others

By Alexander Danvers

You’re interviewing a stranger for a job. While you have “the facts” about their previous job history in front of you, what you’re not sure about is their emotional state. Are they anxious? Excited? Bored?

You think knowing what this job candidate is feeling might help you better know what kind of person she is, but you don’t know how to figure out what those feelings are. Do you try to think through the possible causes of her emotions, making predictions based on your logical analysis? Or do you just “trust your gut” and try to intuitively sense how she is feeling?

If you are like a participant in the first study of Ma-Kellems and Lerner’s new paper “Trust Your Gut or Think Carefully? Examining Whether an Intuitive, Versus a Systematic, Mode of Thought Produces Greater Empathic Accuracy”, you’re likely to choose intuition.

Ma-Kellems and Lerner’s first study presents data affirming the lay theory most of us hold about feelings: they are the realm of the non-rational. Understanding them must require a non-rational thought process.

Yet in three further studies, the research team demonstrates that those who engage in careful, systematic thought tend to do better at judging what another person is feeling.

In Study 2 of the paper, students in the Harvard executive education program conducted mock job interviews with each other and then reported their own emotions—and their judgments of their partners’ emotions—during the task. They also completed the brief Cognitive Reflection Test (CRT), a set of three logic puzzles that have an intuitive—but wrong—answer, and therefore require overriding intuition to solve.

Performance on the CRT was related to accurate judgment of feelings, suggesting that individuals better able to override intuition were also better able to perceive their partner’s mood.

Study 3 used a simplified procedure to collect a much larger sample of Harvard executive program students. Instead of having individuals conduct full interviews, they had them complete the “Reading the Mind in the Eyes Test”—a measure of empathy used in previous studies where individuals are asked to judge a target’s emotion based just on a picture of their eyes.

Performance on the CRT was again related to accurate identification of feeling, confirming that those who were able to override intuition were better able to perceive a stranger’s mood.

Finally, Study 4 attempted to induce intuitive or analytic thought by randomly assigning participants to write about a time when intuition improved their decision-making—or a time when careful, systematic thinking did. Used in previous studies, this manipulation appeared to shift participants’ thinking styles, based on coding of the complexity of language used by participants in each condition.

After the manipulation, participants again completed mock interviews, and again rated their own—and their partner’s—mood.

Further reinforcing their previous findings, those in the systematic thought condition were more accurate than those in the intuitive thought condition.

Taken as a whole, this package of studies suggests that having feelings—which we perceive as emerging spontaneously, without careful reasoning—is different from judging feelings. Judgments about feelings might involve consideration of factors that are not intuitive, such as how the other person differs from you or what situation might have preceded this one.

At a broader level, this research addresses questions about different styles of thinking. Dual system models of thought suggest that individuals typically use either a rapid, emotional, intuitive process or a slower, deliberative, reasoned process (these processes give the title to Daniel Kahneman’s best-selling Thinking, Fast and Slow).

Yet this research—and newer conceptions of empathic accuracy, such as Zaki’s theory of cue integration—suggest that understanding others might require the interaction of multiple processes. This research helps make the case that feelings are not just the domain of intuition—they are also the domain of reason.


Alexander Danvers is a PhD student in social psychology at Arizona State University. His research interests include emotion and social interaction, which he approaches from dynamic systems and evolutionary perspectives.

Freaks, Geeks, Norms and Mores: Why People Use the Status Quo as a Moral Compass

The Binewskis are no ordinary family. Arty has flippers instead of limbs; Iphy and Elly are Siamese twins; Chick has telekinetic powers. These traveling circus performers see their differences as talents, but others consider them freaks with “no values or morals.” However, appearances can be misleading: The true villain of the Binewski tale is arguably Miss Lick, a physically “normal” woman with nefarious intentions.

Much like the fictional characters of Katherine Dunn’s “Geek Love,” everyday people often mistake normality as a criterion for morality. Yet, freaks and norms alike may find themselves anywhere along the good/bad continuum. Still, people use what’s typical as a benchmark for what’s good, and are often averse to behavior that goes against the norm. Why?

In a series of studies, psychologist Andrei Cimpian and I investigated why people use the status quo as a moral codebook – a way to decipher right from wrong and good from bad. Our inspiration for the project was philosopher David Hume, who pointed out that people tend to allow the status quo (“what is”) to guide their moral judgments (“what ought to be”). Just because a behavior or practice exists, that doesn’t mean it’s good – but that’s exactly how people often reason. Slavery and child labor, for example, were and still are popular in some parts of the world, but their existence doesn’t make them right or OK. We wanted to understand the psychology behind the reasoning that prevalence is grounds for moral goodness.

To examine the roots of such “is-to-ought inferences,” we turned to a basic element of human cognition: how we explain what we observe in our environments. From a young age, we try to understand what’s going on around us, and we often do so by explaining. Explanations are at the root of many deeply held beliefs. Might people’s explanations also influence their beliefs about right and wrong?

Quick shortcuts to explain our environment

When coming up with explanations to make sense of the world around us, the need for efficiency often trumps the need for accuracy. (People don’t have the time and cognitive resources to strive for perfection with every explanation, decision, or judgment.) Under most circumstances, they just need to quickly get the job done, cognitively speaking. When faced with an unknown, an efficient detective takes shortcuts, relying on simple information that comes to mind readily.

More often than not, what comes to mind first tends to involve “inherent” or “intrinsic” characteristics of whatever is being explained.

For example, if I’m explaining why men and women have separate public bathrooms, I might first say it’s because of the anatomical differences between the sexes. The tendency to explain using such inherent features often leads people to ignore other relevant information about the circumstances or the history of the phenomenon being explained. In reality, public bathrooms in the United States became segregated by gender only in the late 19th century – not as an acknowledgment of the different anatomies of men and women, but rather as part of a series of political changes that reinforced the notion that women’s place in society was different from that of men.

Testing the link

We wanted to know if the tendency to explain things based on their inherent qualities also leads people to value what’s typical.

To test whether people’s preference for inherent explanations is related to their is-to-ought inferences, we first asked our participants to rate their agreement with a number of inherent explanations: For example, girls wear pink because it’s a dainty, flower-like color. This served as a measure of participants’ preference for inherent explanations.

In another part of the study, we asked people to read mock press releases that reported statistics about common behaviors. For example, one stated that 90 percent of Americans drink coffee. Participants were then asked whether these behaviors were “good” and “as it should be.” That gave us a measure of participants’ is-to-ought inferences.

These two measures were closely related: People who favored inherent explanations were also more likely to think that typical behaviors are what people should do.

We tend to see the commonplace as good and how things should be. For example, if I think public bathrooms are segregated by gender because of the inherent differences between men and women, I might also think this practice is appropriate and good (a value judgment).

This relationship was present even when we statistically adjusted for a number of other cognitive or ideological tendencies. We wondered, for example, if the link between explanation and moral judgment might be accounted for by participants’ political views. Maybe people who are more politically conservative view the status quo as good, and also lean toward inherence when explaining? This alternative was not supported by the data, however, and neither were any of the others we considered. Rather, our results revealed a unique link between explanation biases and moral judgment.
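Statistically adjusting for a third variable of this kind is often done by partialling it out: regress both measures on the covariate and correlate the residuals. The sketch below illustrates that general approach with invented data; it is not the paper's actual analysis, and the variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Invented scores: preference for inherent explanations, is-to-ought
# inference, and political conservatism as a possible confound.
conservatism = rng.normal(0, 1, n)
inherence = 0.3 * conservatism + rng.normal(0, 1, n)
is_to_ought = 0.5 * inherence + 0.2 * conservatism + rng.normal(0, 1, n)

def residualize(y, covariate):
    """Remove the linear effect of the covariate from y."""
    X = np.column_stack([np.ones(len(y)), covariate])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

# Partial correlation: the inherence / is-to-ought link that remains
# after adjusting both measures for conservatism.
r_partial = np.corrcoef(residualize(inherence, conservatism),
                        residualize(is_to_ought, conservatism))[0, 1]
```

If the residual correlation stays substantial after adjustment, the link between the two measures is not simply a byproduct of the covariate.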

A built-in bias affecting our moral judgments

We also wanted to find out at what age the link between explanation and moral judgment develops. The earlier in life this link is present, the greater its influence may be on the development of children’s ideas about right and wrong.

From prior work, we knew that the bias to explain via inherent information is present even in four-year-old children. Preschoolers are more likely to think that brides wear white at weddings, for example, because of something about the color white itself, and not because of a fashion trend people just decided to follow.

Does this bias also affect children’s moral judgment?

Indeed, as we found with adults, 4- to 7-year-old children who favored inherent explanations were also more likely to see typical behaviors (such as boys wearing pants and girls wearing dresses) as being good and right.

If what we’re claiming is correct, changes in how people explain what’s typical should change how they think about right and wrong. When people have access to more information about how the world works, it might be easier for them to imagine the world being different. In particular, if people are given explanations they may not have considered initially, they may be less likely to assume “what is” equals “what ought to be.”

Consistent with this possibility, we found that by subtly manipulating people’s explanations, we could change their tendency to make is-to-ought inferences. When we put adults in what we call a more “extrinsic” (and less inherent) mindset, they were less likely to think that common behaviors are necessarily what people should do. For instance, even children were less likely to view the status quo (brides wear white) as good and right when they were provided with an external explanation for it (a popular queen long ago wore white at her wedding, and then everyone started copying her).

Implications for social change

Our studies reveal some of the psychology behind the human tendency to make the leap from “is” to “ought.” Although there are probably many factors that feed into this tendency, one of its sources seems to be a simple quirk of our cognitive systems: the early emerging bias toward inherence that’s present in our everyday explanations.

This quirk may be one reason why people – even very young ones – have such harsh reactions to behaviors that go against the norm. For matters pertaining to social and political reform, it may be useful to consider how such cognitive factors lead people to resist social change.


Christina Tworek, Ph.D. Student in Developmental Psychology, University of Illinois at Urbana-Champaign

This article was originally published on The Conversation. Read the original article.

When You Don't Feel Valued in a Relationship, Sleep Suffers

We spend up to one-third of our lives asleep, but not everyone sleeps well. For couples, it turns out that how well you think your partner understands and cares for you is linked to how well you sleep. The results are published in Social Psychological and Personality Science.

“Our findings show that individuals with responsive partners experience lower anxiety and arousal, which in turn improves their sleep quality,” says lead author Dr. Emre Selçuk, a developmental and social psychologist at Middle East Technical University in Turkey.

One of the most important functions of sleep is to protect us against deterioration in physical health. However, this protective function can only be realized when we get high-quality, uninterrupted sleep, known as restorative sleep.

Restorative sleep requires feelings of safety, security, protection, and an absence of threats. For humans, the strongest source of these feelings is a responsive social partner—whether a parent in childhood or a romantic partner in adulthood.

“Having responsive partners who would be available to protect and comfort us should things go wrong is the most effective way for us humans to reduce anxiety, tension, and arousal,” says Selçuk.

The research builds on findings from the past several years by an international collaboration of researchers including Emre Selçuk (Middle East Technical University, Turkey), Anthony Ong (Cornell University, US), Richard Slatcher and Sarah Stanton (Wayne State University, US), Gul Gunaydin (Bilkent University, Turkey), and David Almeida (Penn State, US).

Using data from the Midlife Development in the United States project, past projects from the researchers showed connections between partner responsiveness, physical health and psychological well-being over several years.

“Taken together, the corpus of evidence we obtained in recent years suggests that our best bet for a happier, healthier, and a longer life is having a responsive partner,” says Selçuk.


Selçuk, Emre; Stanton, Sarah; Slatcher, Richard; Ong, Anthony (2016). "Perceived Partner Responsiveness Predicts Better Sleep Quality through Lower Anxiety." Social Psychological and Personality Science, online first August 17, 2016. doi:10.1177/1948550616662128

Social Psychological and Personality Science (SPPS) is an official journal of the Society for Personality and Social Psychology (SPSP), the Association for Research in Personality (ARP), the European Association of Social Psychology (EASP), and the Society for Experimental Social Psychology (SESP). Social Psychological and Personality Science publishes innovative and rigorous short reports of empirical research on the latest advances in personality and social psychology.

Bad Science Evolves. Stopping it Means Changing Institutional Selection Pressures

Science is awesome, but it ain’t perfect. If you’ve been paying attention to the so-called “crises of reproducibility” in the behavioral, biomedical, and social sciences, you know that false positives and overblown effect sizes appear to be rampant in the published literature.

This is a problem for building solid theories of how the world works. In The Descent of Man, Charles Darwin observed that false facts are much more insidious than false theories. New theories can dominate previous theories if their explanations better fit the facts, and scientists, being human, love proving each other wrong. But if our facts are wrong, theory building is stymied and misdirected, our efforts wasted. If scientific results are wrong, we should all be concerned.

How does science produce false facts? Here’s a non-exhaustive list:

  • Studies are underpowered, leading to false positives and ambiguous results [1].
  • Negative results aren’t published, lowering information content in published results [2,3].
  • Misunderstanding of statistical techniques (e.g., misunderstanding of the meaning of p-values [4], incautious multiple hypothesis testing [5,6]) is pervasive, leading to false positives and ambiguous results.
  • Surprising, easily understood results are easiest to publish, putting less emphasis on reliable, time-consuming research that is perceived as “boring.”
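To make the multiple-testing point above concrete, here is a small illustrative simulation (mine, not drawn from the cited papers). If a study runs 20 independent tests on pure noise at alpha = 0.05, the chance of at least one false positive is about 1 - 0.95^20, roughly 0.64:

```python
import random

random.seed(1)

def significant_on_noise(n=20):
    # One z-test on pure noise: the null hypothesis is true by construction,
    # so any "significant" result is a false positive.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) * n ** 0.5  # sample mean divided by its SE, 1/sqrt(n)
    return abs(z) > 1.96              # two-sided test at alpha = 0.05

studies = 2000
tests_per_study = 20
fwer = sum(
    any(significant_on_noise() for _ in range(tests_per_study))
    for _ in range(studies)
) / studies
print(fwer)  # close to 1 - 0.95**20, i.e. about 0.64
```

Without a correction for multiple comparisons (or preregistration of which test matters), a "significant" result in such a study carries far less evidential weight than its p-value suggests.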

These problems are well understood, and, in general, have been understood for decades. For example, warnings about misuse of p-values and low statistical power date to at least the 1960s [7,8]. We know these practices hinder scientific knowledge and lead to ambiguous, overestimated, and flat-out false results. Why, then, do they persist? At least three explanations present themselves.

  1. Incompetence: Scientists just don’t understand how to use good methods. Some of this may be going on, but it can’t be the full story. Scientists are, in general, pretty smart people. Moreover, a field tends to be guided by certain normative standards, at least in theory.
  2. Malicious fraud: Scientists are deliberately obtaining positive results, with a disregard for the truth, for personal gain. There is undoubtedly some of this going on as well (see, for example, the Schön fraud in physics, the Stapel fraud in social psychology, and this fascinating case of peer review fraud in clinical pharmacology).  However, I choose to believe that most scientists are motivated to really learn about the world.
  3. Cultural evolution: Incentives for publication and novelty select for normative practices that work against truth and precision. This is the argument I am presenting here, which Richard McElreath and I fleshed out in a recently submitted paper.  

The Natural Selection of Bad Science

The argument is an evolutionary one [9], and works essentially like this: Research methods can spread either directly, through the production of graduate students who go on to start their own labs, or indirectly, through adoption by researchers in other labs looking to copy those who are prestigious and/or successful. Methods that are associated with greater success in academic careers will, all else equal, tend to spread.

Selection needs some way to operationalize success – or “fitness,” the ability to produce “progeny” with similar traits. This is where the devilishness of many incentives currently operating in scientific institutions (such as universities and funding agencies) comes into play. Publications, and particularly high impact publications, are the currency used to evaluate decisions related to hiring, promotions, and funding, along with related metrics such as the h-index [1]. This sort of quantitative evaluation is troublesome, particularly when large, positive effects are overwhelmingly favored for acceptance in many journals. Any methods that boost false positives and overestimate effect sizes will therefore become associated with success, and spread.  McElreath and I have dubbed this process the natural selection of bad science.

The argument can extend not only to norms of questionable research practices, but also norms of misunderstandings (such as with p-values), if such misunderstandings lead to success. Misunderstandings that do not lead to success will rarely be selected for.

An important point is that the natural selection of bad science requires no conscious strategizing, cheating, or loafing on the part of individual researchers. There will always be researchers committed to rigorous methods and scientific integrity. However, as long as institutional incentives reward positive, novel results at the expense of rigor, the rate of bad science, on average, will increase.

A Case Study

Statistical power refers to the ability of a research design to correctly identify an effect. In the early 1960s, Jacob Cohen noticed that psychological studies were dreadfully underpowered, and warned that power needed to dramatically increase in order for the field to produce clear, reproducible results [8]. In the late 1980s, two meta-analyses indicated that, despite Cohen’s warnings, power had not increased [10,11]. We recently updated this meta-analysis [1], and showed that in the last 60 years, there has been no discernible increase in statistical power in the social and behavioral sciences. It remains quite low: the average power to detect a small effect is only 0.24.
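As a back-of-the-envelope illustration of what 0.24 means (this is my own sketch, not the meta-analytic method of [1]), the power of a two-sided, two-sample test can be approximated under a normal model with known variance; the sample sizes below are illustrative:

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample_z(d, n_per_group):
    # Approximate power of a two-sided, two-sample z-test (known variance)
    # for a standardized effect size d at alpha = 0.05.
    z_crit = 1.96                       # critical value for alpha = 0.05
    z_effect = d * sqrt(n_per_group / 2)
    return normal_cdf(z_effect - z_crit) + normal_cdf(-z_effect - z_crit)

# A "small" effect (d = 0.2, per Cohen's conventions):
print(round(power_two_sample_z(0.2, 79), 2))  # ≈ 0.24, the average reported above
print(round(power_two_sample_z(0.2, 20), 2))  # ≈ 0.1, far below the 0.8 convention
```

At power 0.24, roughly three out of four studies of a real small effect will fail to detect it, and the ones that do "succeed" will tend to overestimate its size.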

This result is consistent with our argument: that incentives for novel, positive results work against individual desires to improve research methods. This is not to say that all studies are underpowered, but it does indicate that the most influential methods, in terms of which methods are adopted by new scientists, may be those that are.

A Computational Model

Although the case study is suggestive, it was important to us to demonstrate the logic of the argument more forcefully. So we built a computational model in which a population of labs studied hypotheses, only some of which were true, and attempted to publish their results. We assumed the following:

  • Each lab has a characteristic methodological power – its ability to correctly identify true hypotheses. Note: this is distinct from statistical power, in that it is a Gestalt property of the entire research process, not only of a particular analysis. If we make the overly simplistic but convenient assumption that all hypotheses are either true or false and all results are either positive or negative, then power is defined as the probability of obtaining a positive result given that one’s hypothesis is true.
  • Increasing power also increases false positives, unless effort is exerted. This represents the idea that one can increase the likelihood of finding a positive result in a cost-free way by using “shortcuts” that allow weaker evidence to count as positive, but increasing the likelihood of finding a true result by doing more rigorous research—such as by collecting more data, preregistering analyses, and rooting hypotheses in formal theory—is costly.
  • Increasing effort lengthens the time between results.
  • Novel positive results are easier to publish than negative results.
  • Labs that publish more are more likely to have their methods “reproduced” in new labs.

We then allowed the population to evolve. Over time, effort decreased to its minimum value, and the rate of false discoveries skyrocketed.
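The selection dynamic can be sketched in a few lines of toy code. To be clear, this is an illustrative caricature of my own, not the model from the paper: it omits the effort-time tradeoff and replication, and all parameter values are arbitrary. It keeps only the core loop: labs that publish more seed more new labs.

```python
import random

random.seed(42)

TRUE_RATE = 0.1  # fraction of tested hypotheses that are actually true
POWER = 0.8      # chance of a positive result when the hypothesis is true

def false_positive_rate(effort):
    # Shortcut-taking: lower effort lets weaker evidence count as positive.
    return 0.05 + 0.5 * (1.0 - effort)

def next_generation(efforts):
    # Each lab tests one hypothesis; only positive results get published.
    published = []
    for e in efforts:
        is_true = random.random() < TRUE_RATE
        p_positive = POWER if is_true else false_positive_rate(e)
        if random.random() < p_positive:
            published.append(e)
    parents = published or efforts  # guard against an all-negative generation
    # Publishing labs seed new labs that inherit their effort, plus noise.
    return [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.02)))
            for _ in efforts]

efforts = [random.random() for _ in range(200)]
start = sum(efforts) / len(efforts)
for _ in range(100):
    efforts = next_generation(efforts)
end = sum(efforts) / len(efforts)
print(round(start, 2), round(end, 2))  # mean effort declines sharply
```

Because low-effort labs publish more often on average, their methods are copied more often, and mean effort in the population collapses even though no individual lab ever "decides" to cut corners.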

Replication Isn’t Enough

But wait, what about replication? In general, replication is not a sufficient measure to prevent rampant false discovery. For one thing, many hypotheses are wrong, and so many replications may be necessary to ascertain their veracity [3] (here’s an interactive game we made to illustrate this point). But let’s put that aside for now. Replication surely helps to identify faulty results. Might incentives to replicate, and punishment for producing non-reproducible results, curb the natural selection of bad science?

Our model indicates that they won’t.

We gave labs the opportunity to replicate previously published studies, and let all such efforts be publishable (and be worth half as much “fitness” as the publication of a novel result). For the lab that published the original study, a successful replication boosted its value, but a failed replication was extremely punitive. In other words, we created a situation that was very favorable to replication. We found that even when the rate of replication was extremely high – as high as 50% of all studies conducted – the decline of effort and the rise of false discoveries were slowed, but not stopped. The reason is that even though labs with low effort were more likely to have a failed replication, and hence less likely to “reproduce” their methods, not all studies by low-effort labs were false, and among those that were, not all of them were caught with a failed replication. Even when the average fitness of high-effort labs was higher than that of low-effort labs, the fittest labs were always those exerting low effort.  

Moving Forward

Science is hard. It’s messy and time-consuming and doesn’t always (or even often) yield major revelations about the secrets of the universe. That’s OK. We do it because it’s absolutely amazing to discover new truths about the world, and also because the knowledge we gain is occasionally quite useful. Being a professional scientist is a nice job if you can get it, and the competition is stiff. Unfortunately, that means that not everyone who wants to be a scientist can get a job doing so, and not every scientist can get funding to carry out the project of their dreams. Some will succeed, and others will fail.

Mechanisms to assess research quality are essential. Problems occur when those mechanisms are reduced to simple quantitative metrics, because those are usually subject to exploitation. This is true whether we’re talking about the number of publications, journal impact factors, or other “alt metrics.” When a measure becomes a target, it ceases to be a good measure.

This idea is often understood in the sense that savvy operators will respond to incentives directly, by changing their behaviors to increase their performance on the relevant measures. This surely happens. But a cultural evolutionary perspective reveals that quantitative incentives are problematic even if individuals are motivated to disregard those incentives. If the system rewards those who maximize these metrics, whether they do so intentionally or not, the practices of those individuals will spread.

This means that it’s not enough to simply look at bad practices and say “Well, I don’t do that, so I’m fine.” We need to look at the institutional incentives – the factors that influence hiring, promotion, and funding decisions – and make sure they are rewarding the kinds of practices we want to spread.

Exactly what those practices are is open to debate. But they will involve rewarding quality research over flashy results. I think recent trends toward open science and reproducibility are good signs that there is widespread motivation to solve this problem, and progress is being made. I also suspect it will take time to fully effect the kind of changes we need.  Such changes need to come from early career scientists, who are in a position to set new standards for the generation and testing of hypotheses.

In his 1974 commencement address at Caltech, Richard Feynman characterized the problem quite clearly, illustrating that it is as persistent today as it was then:

It is very dangerous… to teach students only how to get certain results, rather than how to do an experiment with scientific integrity. … I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom. 

May we all someday have that freedom. 


Paul E. Smaldino is Assistant Professor of Cognitive and Information Sciences at the University of California, Merced. Website: http://www.smaldino.com/wp

References:

[1] Smaldino PE, McElreath R (2016) The natural selection of bad science. arXiv:1605.09511

[2] Franco A, Malhotra, N, Simonovits G (2014) Publication bias in the social sciences: Unlocking the drawer. Science 345: 1502–1505.

[3] McElreath R, Smaldino PE (2015) Replication, communication, and the population dynamics of scientific discovery. PLOS ONE 10(8): e0136088.

[4] Wasserstein RL, Lazar NA (2016) The ASA’s statement on p-values: Context, process, and purpose. American Statistician 70(2): 129–133.

[5] Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22: 1359–1366.

[6] Gelman A, Loken E (2014) The statistical crisis in science. American Scientist 102(6): 460–465.

[7] Meehl PE (1967)  Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science 34: 103–115.

[8] Cohen J (1962) The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology 65(3): 145–153.

[9] That is, it is based on ideas from well-supported theories of cultural evolution. For an introduction, see books by Robert Boyd & Peter Richerson, Alex Mesoudi, and Joe Henrich.

[10] Sedlmeier P, Gigerenzer G (1989) Do studies of statistical power have an effect on the power of studies? Psychological Bulletin 105(2): 309–316.

[11] Rossi JS (1990) Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology 58(5): 646–656.

The Healthiest Eaters Are the Most Culturally "Fit"

How to be a healthy eater depends on culture. A recent study shows that in the U.S. and Japan, people who fit better with their culture have healthier eating habits. The results appear in Personality and Social Psychology Bulletin.

“Our results suggest that if you want to help people to eat healthier—or if you want to promote any type of healthy behavior—you want to understand what meaning that behavior has in that culture, and what motivates people to be healthy in that culture,” says lead author Cynthia Levine.

Healthy eating can help reduce one’s risk for a number of different diseases down the line, including heart disease, type 2 diabetes, and certain types of cancer.

“In the U.S., having choice and control and being independent are very important,” says Levine. “Giving people lots of healthy choices or allowing people to feel that they have control over whether they eat healthy options is likely to foster healthier eating.”

In Japan, where the culture places more emphasis on interdependence and maintaining relationships, a focus on choice and control is less likely to be the key to healthier eating, write the authors.

“Instead,” says Levine, “in Japan, promoting healthy eating is likely to be most effective when it builds on and strengthens social bonds.”

Research

In a series of studies, an international team of researchers from the U.S., Japan, and Chile analyzed the eating habits of middle-aged adults in the United States and Japan. The researchers utilized data on how often people eat certain items each week, including fish, vegetables, and sugary beverages, as well as information on cholesterol and how participants relate to food when under stress.

To understand how well people in each country fit in with the predominant culture, participants responded to a series of statements such as “I act in the same way no matter who I am with” (a statement reflecting independence) or “My happiness depends on the happiness of those around me” (a statement reflecting interdependence). Participants with high scores on independence have the best cultural fit in the U.S. Participants with high scores on interdependence have the best cultural fit in Japan.

Healthy Habits

In the U.S., which favors independence, being more independent predicted eating a healthier diet, including higher amounts of fish, protein, fruit, and vegetables, and fewer sugary beverages. The research also showed that more independent adults were less likely to use food as a way to cope with stress.

While the overall diets in Japan were healthier than those of U.S. participants, those in Japan who rated themselves as more interdependent showed healthier eating habits than their Japanese peers who did not.

This research is consistent with other work showing that fitting into one’s culture shapes the healthiness of one’s food consumption.

Levine is interested in utilizing these results for future studies that further reveal the role of culture in everyday behaviors.

“We would like to explore how these cultural differences in the meanings of common behaviors can be utilized to encourage healthy eating or healthy behaviors,” says Levine.


Cynthia S. Levine, Yuri Miyamoto, Hazel Rose Markus, Attilio Rigotti, Jennifer Morozink Boylan, Jiyoung Park, Shinobu Kitayama, Mayumi Karasawa, Norito Kawakami, Christopher L. Coe, Gayle D. Love, and Carol D. Ryff (2016). Culture and healthy eating: The role of independence and interdependence in the United States and Japan. Personality and Social Psychology Bulletin, online before print, August 2016.

Personality and Social Psychology Bulletin (PSPB), published monthly, is an official journal of the Society for Personality and Social Psychology (SPSP). SPSP promotes scientific research that explores how people think, behave, feel, and interact. The Society is the largest organization of social and personality psychologists in the world. Follow us on Twitter, @SPSPnews and find us at facebook.com/SPSP.org.

Who Knows the Impressions One Conveys?

People hold beliefs about how others perceive them: for example, whether others see them as attractive, intelligent, or polite. How accurately these beliefs reflect the impressions the person actually conveys is called meta-accuracy.

But why does meta-accuracy occur? It may either reflect that people actually know how others see them, or it may reflect that people project their self-view onto others, presuming that others perceive them as they perceive themselves (Kenny & DePaulo, 1993). Since people’s ratings of their own personalities are generally similar to others’ ratings of their personalities (Connelly & Ones, 2010), such projection should result in meta-accuracy too.

But if we know both how people see their own personalities and how others see them, we can separate projection from actual knowledge of others’ perceptions. Doing so, Carlson, Vazire, and Furr (2011) found that others’ impressions predicted meta-perceptions even when self-perceptions were controlled for. They called this phenomenon meta-insight. Meta-insight implies that people are aware of the differences between their self-perceptions and the impressions they convey. Nevertheless, self-perception seems to have a unique influence on meta-perceptions, implying that both projection and meta-insight are operating (Carlson et al., 2011).

Even though projection and meta-insight can lead to meta-accuracy, not everyone gets it right. In fact, people vary in how accurately they can guess what others think of them. In a study published in the July 2016 issue of Personality and Social Psychology Bulletin, we (Alice Mosch and Peter Borkenau) studied individual differences in meta-accuracy in a German sample. We focused on the effects of psychological adjustment on meta-accuracy because the relationship between adjustment and accuracy is contentious: According to Taylor and Brown (1988), normal, healthy adults are not completely accurate. Instead, they are said to hold unrealistically positive views of themselves. Taylor and Brown (1988) did not address meta-accuracy, but their hypothesis makes it plausible that psychologically adjusted persons project their too-favorable self-views onto others, whereas their meta-insight is reduced. In contrast, other authors (Allport, 1961; Colvin & Block, 1994) suggested that psychologically adjusted persons are more accurate, which suggests that they are also more aware of how they are perceived by others.

Fifty-two groups of four mutually acquainted students participated in our study. They described themselves and each other member of their group on a 30-item measure of the Big Five personality factors. Moreover, they indicated how they believed they were described by those other persons, using the same 30 items. Furthermore, to measure aspects of psychological adjustment, the participants filled in a multidimensional measure of self-esteem, and to measure personality disorder symptoms they filled in the 117-item screening questionnaire for the Structured Clinical Interview for DSM-IV, Axis II (SCID-II; American Psychiatric Association, 1994). Finally, the participants were asked how long they had known the other members of their group, how well they knew and liked them, and how well they believed they were known and liked by them.

Using these measures, we were able to estimate projection and meta-insight for each pair of acquaintances in each group. Averaged across dyads, projection and meta-insight were both operating, just as in the previous study. But we also found substantial individual differences in how much projection and meta-insight people showed.

We were interested in whether these differences could be explained by psychological adjustment. Do psychologically adjusted people project their exaggeratedly positive self-image onto others? Yes. The psychologically more adjusted persons projected more and showed less meta-insight: They were more inclined to believe that others perceived them as they perceived themselves, and they were less aware of the differences between their self-views and the impressions that they actually conveyed.

We also found that acquaintance raised projection: The longer the participants knew another person, and the more they assumed that that person knew them, the more they were convinced that that person perceived them as they perceived themselves.

But why do psychologically adjusted persons show more projection and less meta-insight? The causal direction of these effects may go either way: First, adjustment may strengthen projection because feeling better is associated with a more “big picture” thinking style, called holistic processing (Bolte, Goschke, & Kuhl, 2003); a lack of distinction between self-perception and conveyed impression may thus be a result of holistic processing. Second, feeling that one’s self-view is inconsistent with the actual impression one conveys may result in feelings of alienation, low self-esteem, and personality pathology. These issues remain to be clarified by future research.


Peter Borkenau is Professor of Psychology at Martin-Luther University Halle-Wittenberg in Germany. He received his PhD in psychology from Heidelberg University (Germany) in 1982 and published more than 70 articles in international journals. For more information you can check out:
http://www.psych.uni-halle.de/abteilungen/differentiell/mitarbeiter/borkenau/ 

References:

Allport, G. W. (1961). Pattern and growth in personality. New York, NY: Holt, Rinehart and Winston.

American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.

Bolte, A., Goschke, T., & Kuhl, J. (2003). Emotions and intuition: Effects of positive and negative mood on implicit judgments of semantic coherence. Psychological Science, 14, 416-422. doi:10.1111/1467-9280.01456

Carlson, E. N., Vazire, S., & Furr, R. M. (2011). Meta-insight: Do people really know how others see them? Journal of Personality and Social Psychology, 101, 831-846. doi:10.1037/a0024297

Colvin, C. R., & Block, J. (1994). Do positive illusions foster mental health? An examination of the Taylor and Brown formulation. Psychological Bulletin, 116, 3-20. doi:10.1037/0033-2909.116.1.3

Connelly, B. S., & Ones, D. S. (2010). An other perspective on personality: Meta-analytic integration of observers’ accuracy and predictive validity. Psychological Bulletin, 136, 1092-1122. doi:10.1037/a0021212

Kenny, D. A., & DePaulo, B. M. (1993). Do people know how others view them? An empirical and theoretical account. Psychological Bulletin, 114, 145-161. doi:10.1037//0033-2909.114.1.145

Mosch, A. & Borkenau, P. (2016). Psychologically adjusted persons are less aware of how they are perceived by others. Personality and Social Psychology Bulletin, 42, 910-922. doi: 10.1177/0146167216647383

Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social-psychological perspective on mental health. Psychological Bulletin, 103, 193-210. doi:10.1037/0033-2909.103.2.193