Young Children Use Nonverbal Cues to Detect Deception

Acquiring new knowledge about how the world works is central to the development of children’s thinking. For years, researchers focused on the ways in which children take an active role in learning through first-hand experience. By actively engaging with the world around them, children learn that what goes up must come down, that eating certain foods can make them sick, and that things continue to exist even when they’re not in their immediate line of sight. However, there are limits to what can be learned through first-hand experience. History cannot be experienced directly. The existence of germs cannot be observed with the naked eye. Indeed, much of what we know about the world is passed on to us by other people.

Because other people are such a vital source of information for developing minds, it would be reasonable to expect that young children do not simply go around trusting everything they’re told. Rather, we would expect children to possess ways of selectively learning from good sources and judiciously distrusting bad ones. As the last two decades of research in child development have shown, children as young as 4 years (and sometimes even younger) do not trust others randomly; rather, they exhibit systematic preferences that lead them to trust good sources of information.

For example, young children show systematic preferences for learning from familiar sources over unfamiliar ones, and from native speakers of their primary language over non-native speakers. Children also prefer to learn from adults who have a track record of being correct about things (like calling a ball a ‘ball’) over adults who have a history of making mistakes (like calling a ball a ‘car’). These are just some examples of the ways in which children are careful and thoughtful about whom they learn from.

What these studies have left unanswered is the question of how young children figure out when someone is a bad source of information whom they should not trust or learn from. As adults, we know that, sometimes, speakers may have an intention to deceive or provide misinformation. For this reason, we pay attention to cues that signal someone might be lying or trying to trick us. One such cue is known as nonverbal leakage, which refers to a speaker’s nonverbal behaviors that ‘give away’ the false message. This can occur in two distinct ways. Nonverbal leakage occurs when a speaker is unable to produce behaviors consistent with the lie, or when a speaker is unable to inhibit behaviors consistent with the truth. Adults often report using nonverbal leakage as a cue to deception, and it is reasonable to expect that young children would draw on similar cues to infer that a speaker might be untrustworthy.

To find out, I asked children between 4 and 6 years of age to watch a series of videos and use what they saw to decide on the location of a hidden toy. In each video, an adult looked inside two separate boxes, one red and one blue. The adult would display a dramatic expression of surprise and excitement upon looking inside one of the boxes, let’s say the red box, but would say out loud, ‘You should look in the blue box.’ Thus, the adult’s facial expressions were inconsistent with their words. To allow for comparison, I showed other videos in which the adult looked surprised and excited while looking in the red box and said ‘You should look in the red box,’ so that words and actions matched. Each time, the child had to decide which box they thought held the toy. If, when the cues were inconsistent, children avoided the box the adult recommended, it would show that they treated the mismatch between expressions and words as a sign of untrustworthiness and inferred that the expressions were a better guide to the truth than the verbal statement.

When I looked at the results for the inconsistent trials, I found remarkable age differences in children’s decisions about where to look for the hidden toy. The oldest group, the 6-year-olds, quickly picked up on the inconsistencies when the adult said one thing but behaved differently, and they reliably chose to look for the toy in the box that had elicited the nonverbal expression of excitement and surprise. They explained their decisions by referring to the adult’s expression, and sometimes even explicitly said the adult was lying or trying to be tricky. The 4- and 5-year-olds, on the other hand, almost always chose to look in the box the adult said to look in, disregarding the adult’s nonverbal expression. On occasion, some 5-year-olds appeared puzzled by the nonverbal expression and wondered what it could mean, but they ultimately chose to trust what the speaker said about where to look for the toy.

This trend was very different from what I saw in the control videos, where the adult’s words and behaviors matched. In those trials, children of all ages chose to look for the toy in the box that the adult had indicated (through both words and actions).

It appears that children are not indiscriminately trusting of other people. Instead, from a young age, at least by 6 years, children use nonverbal cues to decide who is telling them the truth.


For Further Reading

Harris, P. L., Koenig, M. A., Corriveau, K. H., & Jaswal, V. K. (2018). Cognitive foundations of learning from testimony. Annual Review of Psychology, 69, 251-273.

Mills, C. M. (2013). Knowing when to doubt: Developing a critical stance when learning from others. Developmental Psychology, 49(3), 404-418.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393.
 

Maliki Eyvonne Ghossainy is a psychological scientist and statistician and currently Senior Research Scientist for the Developing Belief Network at Boston University. Her research integrates social, cognitive, cultural, and biological mechanisms into a model of belief formation across the early childhood years.
 

The Best Way to Detect Lies in Interviews


Liars are nervous and therefore show nervous behaviors that observers can detect. This is a widespread view among the general public and among professionals such as police and security personnel. Yet it is more myth than fact. Research has indeed shown that liars are typically more nervous than truth tellers. Research has also shown that observers (general public and professionals alike) pay attention to nonverbal signs of nervousness when they try to detect deceit. They pay particular attention to gaze aversion (looking away from the conversation partner) and fidgeting (scratching the hand, wrist, or head).

In fact, liars often do not show more nervous behaviors than truth tellers. There is typically no difference in gaze aversion between liars and truth tellers, and liars typically make fewer, rather than more, movements than truth tellers.

Here’s why the myths are not true. First, because the stereotypical view is that liars look away and fidget, liars actively try to suppress such behaviors to avoid making a suspicious impression. Second, lying often requires more mental effort than truth telling, and research has shown that hard thinking automatically reduces movement because cognitive resources are diverted to the mental task. This effect of thinking on movement is easy to demonstrate. Ask someone to move around and clap their hands while saying the alphabet out loud: A, B, C, and so on. Then ask the person to continue moving and clapping but now say the alphabet in reverse order: Z, Y, X, and so on. You will see that the person makes fewer movements while saying the alphabet in reverse order.

Nonverbal cues to deception are, in fact, typically faint and unreliable. Most nonverbal cues are not related to deception at all, and the most diagnostic nonverbal cue is the equivalent of the difference in height between a 15- and a 16-year-old girl. Such differences are too small to rely upon. Verbal cues to deceit, that is, cues conveyed in the words people use, are typically more revealing: the most revealing verbal cue is the equivalent of the difference in height between a 14- and an 18-year-old girl, something easily noticed with the naked eye. It is therefore not surprising that when observers have access only to nonverbal cues while attempting to detect deceit, their ability to distinguish between lies and truths does not exceed chance. A large overview of such research showed an accuracy rate of 52%, hardly better than flipping a coin (50%). However, when observers could only hear the person speaking, their accuracy rate was well above chance at 63%.

The main message for professionals, therefore, is that when they attempt to detect deceit in interviews, they should stop watching nonverbal behavior and listen to speech instead. The tendency to observe behavior when trying to detect deceit is culturally determined and reflects training. American professionals are often taught to pay attention to nonverbal behaviors and therefore tend to rely on them more than West European professionals, who are more frequently told that nonverbal cues to deception are weak and unreliable.

There is another good reason for professionals not to pay attention to nonverbal behaviors in interviews. The main purpose of an interview is to gather information from the interviewee. To obtain quality information, an interviewer needs to listen carefully to what the person says in order to come up with good follow-up questions. Listening to what someone says and thinking about the next question already requires considerable mental effort. Paying attention, on top of this, to the person’s nonverbal behavior is too much for the interviewer, because it is impossible to listen to speech, observe behavior, and think about the next question all at the same time. Something has to give when interviewers also observe nonverbal behavior, and what suffers is the information-gathering part of the interview.

The limitation of listening to speech is that speech content is not always available. Take, for example, professionals tasked with identifying wrongdoers in public spaces such as airports. Airports are typically too busy to interview all passengers, so professionals have no option other than observing behavior when attempting to identify wrongdoers. Behavior Detection Officers (BDOs) across the world have been trained on what suspicious behaviors supposedly look like. Those training programs are limited, even those based on scientific research, because scientific research about how wrongdoers behave in such situations is largely absent. Virtually all research on nonverbal cues to deception has examined how people behave during interviews, when they speak. This is an entirely different setting from how people behave outside an interview, when they often do not speak and may not even sit down.

In sum, research into nonverbal cues to deception has typically focused on how people behave in interview settings. The nonverbal cues they produce are typically weak and unreliable, and less revealing than speech-related cues. Professionals are thus advised to stop observing behavior in interviews and to listen to speech instead.


For Further Reading

DePaulo, B. M., Lindsay, J. L., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74-118. doi:10.1037/0033-2909.129.1.74

Vrij, A., Hartwig, M., & Granhag, P. A. (2019). Reading lies: Nonverbal communication and deception. Annual Review of Psychology, 70, 295-317. doi:10.1146/annurev-psych-010418-103135

Vrij, A., & Fisher, R. P. (2020). Unraveling the misconception about deception and nervous behaviour. Frontiers in Psychology, 11, 1377. doi:10.3389/fpsyg.2020.01377

 

Aldert Vrij is a professor of applied social psychology in the department of psychology at the University of Portsmouth in Portsmouth, England. His main area of expertise is nonverbal and verbal cues of deception.

 

 

 

Chameleons at Employment: How Job Applicants Fake It to Fit in

Imagine you are applying for a job you really want. Perhaps you greatly admire the hiring organization. Perhaps you’d pick up a lot of valuable job skills. Maybe the compensation and benefits package is very attractive, or maybe you just need to pay your bills.

As part of the hiring process, many organizations require applicants to complete personality assessments. The goal is to measure an applicant’s character and traits to see whether he or she is a good “fit” with the job. For example, if the job involves social interactions or teamwork, a more extraverted applicant would be perceived as a better “fit.” If the job requires attention to detail, as in accounting, a more conscientious candidate will be preferred.

Such assessments are also used to see whether an applicant’s values and personality are compatible with the organization’s core values in order to ensure a “cultural fit.” For instance, an organization known for being competitive may look for people who are less humble or forgiving and for those who are willing to “do whatever it takes” to outperform others and succeed.

Coming back to your job application, what if you realize—upon applying for the much-desired job and being asked to complete a personality assessment—that your profile does not perfectly match what you think the organization is looking for? What if you are truly a humble and compromising person, but after some online searching, you realize that the company’s culture is very competitive? You face a dilemma because you really want the job, but you realize that your personality and the company’s culture don’t match.

What would you do? Would you remain honest even if this means that you will probably be eliminated from the applicant pool? Or would you present yourself as the kind of person you thought the company wanted to hire?  

This is exactly the situation we examined in our research. In a series of studies, we put people in a job selection scenario where they played the role of an applicant for a desirable job. We presented them with information about the culture of the hiring organization. In some studies, we depicted the company as having a competitive (vs. less-competitive) culture, and in other studies an innovative (vs. less-innovative) culture. Across studies, we used descriptions of both real companies from the Fortune 500 list and companies that we made up. The organizational culture information was presented either via an email supposedly from a friend currently working at the company or through a series of publicly-available employee reviews.

We then asked our research participants (about 1,500 adult U.S. residents) to complete a personality assessment as part of the selection process for the organization we described. Then, a few weeks later, we asked them to complete the same assessment honestly, so that we could determine whether they distorted—or even faked—their responses to appear to be a better “fit” with the organization and, if so, how and how much. In a final study, we also asked actual job applicants about their experiences.

Overall, we found that applicants distorted their responses to “fit in” and increase their chances of success in the selection process. Specifically, people systematically presented themselves as less humble, honest, or forgiving than they actually were when applying to organizations known to have a culture that emphasizes competition, and as more imaginative, expressive, and risk-taking when applying to organizations that promoted an innovative culture. Notice that for some of these traits, people were willing to describe themselves in terms that are not usually considered flattering (not so honest, not so forgiving) if they felt it would help them get a job. We also found that applicants engaged in such distortions even when they could choose where to apply from a list of well-known companies with different cultures and from different industries.

From the applicants’ perspective, such behaviors are highly adaptive: people are displaying a personality profile that closely aligns with what (they perceive) the hiring organization is looking for in order to increase their chances of getting the job!

From the organization’s perspective, the impact can be both positive and negative. If organizations choose employees based on a “faked” cultural fit, they might overlook applicants who were truly a better match, and they might end up hiring people who are not the best performers and who are likely to be less satisfied, less committed employees. Moreover, holding a job that requires a person to be inauthentic may create discomfort.

Yet, individuals who are able to identify what the company is looking for and adapt their responses accordingly might possess valuable skills and social competencies. Those skills may be useful for doing the job and thus also be beneficial for the company. Whether a little faking usually proves to be a good or a bad thing for people who get the jobs they hoped for is a good question for future research.  


For further reading

Roulin, N., & Krings, F. (2020). Faking to fit in: Applicants’ response strategies to match organizational culture. Journal of Applied Psychology, 105(2), 130–145. https://doi.org/10.1037/apl0000431

 

Dr. Nicolas Roulin is an Associate Professor of Industrial-Organizational Psychology at Saint Mary’s University (Canada), who studies applicant impression management and faking, the use of technology in hiring, and employment discrimination.

Dr. Franciska Krings is a Professor of Organizational Behavior at the University of Lausanne (Switzerland), who studies workforce diversity and discrimination, biases in personnel decision making, social justice, and (non)ethical behaviors.

 

Being Ostracized Can Make You a Better Lie Detector

All of us are occasionally ostracized by other people. Children don’t allow other kids to play with them, coworkers don’t invite a colleague to join them for lunch, and friends sometimes don’t include us in their plans for the weekend.

Being ostracized is a strongly negative experience with detrimental effects on people’s well-being because ostracism threatens people’s fundamental need to belong. So, ostracized people are motivated to regain belonging and prevent further exclusion by identifying people with whom they might form positive and lasting friendships. But the search for good friends is complicated by the fact that other people often lie about themselves. How can you tell if someone would be a good friend if you don’t know whether the things they’re telling you about themselves are true?

People usually aren’t very good at detecting lies, partly because they have misconceptions about the nonverbal behavior of liars and truth-tellers. For example, people around the world believe that liars avoid making eye contact with the person they’re lying to, but this nonverbal cue isn’t actually related to lying. However, common beliefs about verbal cues to deception tend to be much more accurate. Research shows, for example, that lies are less plausible and include fewer relevant details than truthful statements. Therefore, people often are better at detecting lies if they focus on verbal rather than nonverbal cues.

Interestingly, research has shown that people rely less on nonverbal cues in judging someone’s truthfulness when they think carefully about what the person is saying. So, when people really pay attention, they tend to ignore cues that are typically not useful for detecting lies (such as whether the person is avoiding eye contact) and focus instead on cues that are typically more revealing (such as how plausible the person’s statements are and how many details are included).

Given that being ostracized motivates people to pay attention to whether other people would be good friends, my colleagues and I thought that ostracized people might rely more on what people say rather than on their nonverbal behaviors, particularly if the other person is saying things that are relevant to their potential to be a good friend. If so, ostracized people should have an advantage in detecting lies when other people say things that are relevant to friendships.

We conducted three experiments in which participants were either included or ostracized in an online game. After being included or ostracized, participants watched videotapes of people talking about themselves and rated the degree to which they thought that those people were telling the truth.

In Experiment 1, participants saw videotapes in which people made deceptive or truthful statements about their movie or TV series preferences, information that is relevant to friendships because friendships are often based on shared interests and types of humor. As we expected, our results showed that participants who had been ostracized were better at discriminating between friendship-relevant lies and truths than included participants.

In Experiments 2 and 3, we directly tested whether ostracized people rely more on what someone says rather than on how someone behaves when judging a person’s truthfulness. We created messages in which an actress made statements that were either high or low in plausibility and also behaved in ways that people believe indicate either truth or lying, such as making more or less eye contact.

As we expected, ostracized participants neglected nonverbal cues to deception when judging the truthfulness of what the actress said. Whereas participants who had been included used both verbal and nonverbal cues to judge whether the actress was lying, participants who had been ostracized based their credibility judgments only on verbal cues. Experiment 3 further showed that ostracized participants ignored nonverbal cues only if the messages contained information relevant to friendships but not if messages contained other kinds of social information.

In all of our experiments, participants didn’t expect to interact with the people whose truthfulness they were judging. So it wasn’t that ostracized people were trying to figure out if they should be friends with a particular person. Instead, ostracized people simply seemed to pay extra attention to information about whether someone might be a good friend. This extra attention to what someone says increases ostracized people’s chances of distinguishing honest from dishonest people, which can help them identify potential friends.


For Further Reading

Eck, J., Schoel, C., Reinhard, M.-A., & Greifeneder, R. (2020). When and why being ostracized affects veracity judgments. Personality and Social Psychology Bulletin, 46, 454-468. doi:10.1177/0146167219860135

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995. doi:10.1037/0022-3514.70.5.979

DePaulo, B. M., Lindsay, J. L., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74-118. doi:10.1037/0033-2909.129.1.74

 

Jennifer Eck is a postdoctoral researcher at the University of Mannheim, Germany, and co-editor of the book Social Exclusion: Psychological Approaches to Understanding and Reducing Its Impact.

Overclaiming Knowledge Predicts Anti-Establishment Voting

Washington, DC / Amsterdam, Netherlands - In light of the election and ballot victories of populist, anti-establishment movements, many people have been trying to better understand the behaviors and motivations of voters. Studying voter behavior on an EU treaty referendum, social psychologists in the Netherlands found that overclaiming knowledge predicts anti-establishment voting, particularly on the radical right.

The results of their research are published in the journal Social Psychological and Personality Science.

“Politicians and citizens with strong anti-establishment views, including populist movements, often articulate their views with high confidence,” notes van Prooijen. “This research puts that confidence into perspective and suggests that it may often be overconfidence.”

Blaming the establishment apparently is a cognitively “easy” way of making sense of the problems that society faces, write Jan-Willem van Prooijen (VU Amsterdam) and Andre Krouwel (VU Amsterdam), coauthors of the study. They note that this occurs for both the political left and the political right, though it tends to appear stronger for the radical right.

Van Prooijen and Krouwel measured and analyzed voter knowledge and behavior before and after an April 6, 2016, Dutch referendum on a European Union (EU) treaty that would establish stronger political and economic connections between the EU and Ukraine.

Questions were sent to a panel of voters 6 weeks before the referendum. People were asked to rate their understanding of the treaty, answer factual questions about it, and complete a survey on their political views. A total of 13,323 people completed the questionnaire.

Two days after the vote, van Prooijen and Krouwel followed up with a second round of questions, asking whether people had voted in the referendum and how they voted, with the results kept anonymous. This group consisted of 5,568 people from the original panel who had voted and 2,044 who had not.

[Figure: Logistic regression slopes and 95% confidence intervals of anti-establishment voting as a function of self-perceived understanding of the treaty, actual knowledge of the treaty, and general overclaiming.]

Comparing the responses with voter behavior and political leanings, they found that with each one-point increase in self-perceived knowledge of the treaty, the odds of an anti-establishment vote multiplied by 1.62. In contrast, each one-point increase in actual knowledge multiplied those odds by 0.85, that is, decreased them by 15%.
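To see what these odds ratios mean in practice, here is a minimal sketch of the arithmetic. The odds ratios (1.62 and 0.85) come from the article; the baseline odds below are a hypothetical illustration, not a figure from the study:

```python
def updated_odds(baseline_odds, odds_ratio, points):
    """Multiply the baseline odds by the odds ratio once per measurement point."""
    return baseline_odds * odds_ratio ** points

def odds_to_probability(odds):
    """Convert odds (p / (1 - p)) back to a probability."""
    return odds / (1 + odds)

baseline = 1.0  # hypothetical 50/50 odds of an anti-establishment vote

# Two points higher in self-perceived understanding (odds ratio 1.62):
# odds grow to 1.62**2 = 2.62, a probability of about 0.72.
print(round(odds_to_probability(updated_odds(baseline, 1.62, 2)), 2))  # 0.72

# Two points higher in actual knowledge (odds ratio 0.85):
# odds shrink to 0.85**2 = 0.72, a probability of about 0.42.
print(round(odds_to_probability(updated_odds(baseline, 0.85, 2)), 2))  # 0.42
```

The key point is that an odds ratio compounds multiplicatively per point, so self-perceived and actual knowledge push the odds in opposite directions.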

“The study does not show that anti-establishment voters are somehow less intelligent, or less concerned with society,” says van Prooijen. “Future research may reveal whether the discrepancy between self-perceived understanding and actual knowledge is due to being uninformed or due to being misinformed.”


Citation: van Prooijen, J.-W., & Krouwel, A. (2019). Overclaiming knowledge predicts anti-establishment voting. Social Psychological and Personality Science.

Social Psychological and Personality Science (SPPS) is an official journal of the Society for Personality and Social Psychology (SPSP), the Association for Research in Personality (ARP), the European Association of Social Psychology (EASP), and the Society for Experimental Social Psychology (SESP). Social Psychological and Personality Science publishes innovative and rigorous short reports of empirical research on the latest advances in personality and social psychology.

How to Become a Better Lie Detector: Focus on Feelings

People lie on a daily basis. Despite this, most lies remain undetected. Why? Research suggests two main reasons. The first is that people are simply too gullible: we all tend to believe rather than disbelieve information. The second is that people don’t understand how liars behave. So how can we spot when others are lying?

Making people less trusting and running extensive training programs both have limitations. There may be an alternative, and easier, way to improve your ability to spot a liar. Our research suggests that instead of asking yourself ‘Is this person lying?’ you should ask yourself ‘What emotions is this person really feeling?’ Below, we explain how and why.

A key factor in understanding our method of lie detection is realizing that lies often involve concealing feelings. In particular, when we lie, we often are trying to hide our real emotions. We tell other people that we are sorry for our behavior when we are not. We say that we are disappointed we missed their call when we are not. We pretend that we are happy to do something when really we are not. In other words, we lie about all sorts of things, but lies often involve misrepresenting how we feel.

Because lies often involve faking emotions, focusing on emotions may help us detect them.

We examined our theory in two studies, testing whether the way people assess deception (directly, by asking whether someone lied, or indirectly, by asking what emotion the person experienced) influences their ability to detect it. In both studies, participants watched videos of individuals who were lying or telling the truth about their emotions. After each video, participants were asked ‘To what extent do you think this person is lying?’ and were asked to rate the extent to which the person in the video was experiencing specific emotions.

The results of both studies showed that participants were poor at detecting deception when asked directly whether the person was lying. In fact, lying individuals were even considered more truthful than truth-telling individuals! So not only were the participants bad at detecting lies, they even seemed to ‘convict’ the wrong person.

Interestingly, and in agreement with our predictions, participants were far better when they were simply asked about the emotions the other person had experienced. Specifically, participants correctly ascribed more negative emotions to those individuals who truly felt negative as compared to those who faked feeling negative. Participants weren’t as able to detect who was faking positive emotions.

We explain our findings as follows: liars make errors in their facial expressions when lying about negative emotions, and these errors are picked up only when observers focus on emotional cues. If you ask yourself ‘Is this person lying?’, you are likely to get the wrong answer. If you ask yourself ‘Is this person feeling the emotion their facial expressions and words imply they should be feeling?’, you are more likely to detect the lie, particularly if it involves faking negative emotions (such as ‘I am so sorry I missed your call!’).

Our results imply that if you would like to be better at detecting emotional deception, you should ask yourself which emotions you think the other person experiences. If the person is faking feeling bad, you’ll be better able to figure it out!



For Further Reading:

Stel, M., & Van Dijk, E. (2018). When do we see that others misrepresent how they feel? Detecting deception from emotional faces with direct and indirect measures. Social Influence, 137-149.

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995.

Porter, S., & ten Brinke, L. (2008). Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions. Psychological Science, 19, 508–514.

Street, C. N. H., & Vadillo M. A. (2016). Can the unconscious boost lie-detection accuracy? Current Directions in Psychological Science, 25, 246-250.

 

Authors

Mariëlle Stel is an associate professor at Twente University, The Netherlands. Her research focuses on (non)verbal communication.  Eric van Dijk is a professor at Leiden University, The Netherlands. His research concentrates on the understanding of economic and social decision making.

Tick Tock: Commitment Readiness Predicts Relationship Success, Say Scientists

Washington, DC - Timing is everything, goes a popular phrase, and this is also true for relationships. As Valentine’s Day approaches, social psychologists from Purdue University offer new research showing that a person’s commitment readiness is a good predictor of relationship success. The results are published in Social Psychological and Personality Science.

“Feeling ready leads to better relational outcomes and well-being,” says Chris Agnew, Professor of Psychological Sciences and Vice President for Research at Purdue University. “When a person feels more ready, this tends to amplify the effect of psychological commitment on relationship maintenance and stability.”

The reverse is also true, based on the results of the study: when a person feels less ready for commitment while in a relationship, they are less likely to act in ways that support the relationship.

Assessing readiness for commitment

Agnew and colleagues Benjamin Hadden and Ken Tan report results from four studies and five independent samples, focusing on reported readiness and commitment to an ongoing relationship, how willing people were to engage in the day-to-day behaviors that help maintain a relationship, and the ultimate stability of those relationships.

Initially, they surveyed over 400 adults in committed relationships, assessing their sense that the current time was right for the relationship (i.e., their commitment readiness), their satisfaction with the relationship, and their investments in it. They found a robust correlation between current sense of readiness and one’s commitment level.

To follow up this initial study, Agnew and colleagues ran studies with university students, first in an initial assessment with over 200 students, and then as follow-ups with some participants five and seven months later to see who was still together. 

Based on their results, being “commitment ready” was a key predictor of both success and failure. Greater readiness predicted a lower likelihood of leaving a relationship: those feeling greater readiness to commit were 25% less likely to break up over time.

People who reported being highly committed to their current partner but didn’t feel that the current time was best for them to be in a relationship were also more likely to end a relationship than their peers who expressed greater readiness. And those who were commitment ready were more likely to do the day-to-day work needed to maintain the relationship.

When do people feel ready to commit?

Feeling ready to commit to a relationship at a given time depends on the individual, says Agnew. “People’s life history, relationship history, and personal preferences all play a role. One’s culture also transmits messages that may signal that one is more or less ready to commit.”

 


Study: Agnew, Christopher R.; Hadden, Benjamin W.; Tan, Kenneth (2019). It’s about time: Readiness, commitment, and stability in close relationships. Social Psychological and Personality Science, published online ahead of print on February 20, 2019.

Social Psychological and Personality Science (SPPS) is an official journal of the Society for Personality and Social Psychology (SPSP), the Association for Research in Personality (ARP), the European Association of Social Psychology (EASP), and the Society for Experimental Social Psychology (SESP). Social Psychological and Personality Science publishes innovative and rigorous short reports of empirical research on the latest advances in personality and social psychology.

Single? Agnew and colleagues published research on singles and commitment this past spring. You can find more here: Hadden, B. W., Agnew, C. R., & Tan, K. (2018). Commitment readiness and relationship formation. Personality and Social Psychology Bulletin, 44, 1242-1257.

To Catch a Liar

Pinocchio grew a long nose when he lied. Although few people expect to see a nose visibly change when someone lies, many believe that nonverbal behaviors provide clues to a person's honesty. A nose scratch. A mouth cover. Hand wringing. Leg tapping. Over the years, different claims have been made about these and other nonverbal behaviors as possible cues to deception. Do these nonverbal "tics" really mean that a person is lying?

Deception detection is a huge area of behavioral science and can inform such questions. For example, many people believe that liars avoid eye contact when lying. But looking away when answering questions (gaze aversion) has never been scientifically validated as a sign of deception. The idea that people avoid eye contact when lying is a myth, one believed around the world despite being largely disproved by science. In fact, liars are more likely to look interviewers in the eye when answering, in order to appear honest.

What Does the Research Show?

For decades, many prominent researchers searched for the single behavioral tell that was a cue to deception. But a seminal meta-analysis published two decades ago provided ample evidence that there is no one, single telltale sign of lying. Pinocchio doesn't exist.

Research and application in this area is difficult because the verbal and nonverbal indicators of veracity and deception are complex, varied, and unique to individuals. Cues don't always mean the same thing in different people or even in the same person at different times.

Furthermore, nonverbal behaviors can signal many different mental states beyond veracity and deception, such as discrete emotions (anger, disgust, or fear), general affective states (open or closed, relaxed or tense), specific verbal words or phrases, and cognitive processes (confusion, concentration).

Fortunately, many studies published since that meta-analysis have demonstrated that nonverbal behaviors can differentiate truthtellers from liars. These studies diverge from earlier research partly because they approximate real-life situations in which catching people lying is important and has real consequences.

The research to date points to this conclusion: cues to deception (and veracity) do exist and can occur in multiple channels of behavior—face, hands, body, verbal style, or verbal content. Specifically, certain nonverbal behaviors have been scientifically validated as deception indicators while others have not. Facial expressions of emotion and microexpressions, some gestures, fidgeting (in some contexts), and some aspects of voice differentiate truthtellers from liars. Moreover, some behaviors are indicators of veracity while others are indicators of deception, and all behaviors must be interpreted in context.

Using the Science to Improve Understanding of Others

How can people leverage this area of science in their daily lives? First, learn about the world of nonverbal communication and nonverbal behavior, and which behaviors signal which mental states. For example, specific emotions can be signaled by face and voice; specific cognitions and cognitive processes can be signaled by face, voice, and gesture; and general affective states can be signaled by specific body movements. Important information can be gleaned from others without them speaking a word. When paired with speech, nonverbal behaviors can complement, supplement, qualify, or contradict what is said. Learning to read and interpret these behaviors well can provide important insight into the minds of others and improve communication overall, quite apart from making judgments of veracity or deception.

People who want to detect deception could learn how to read microexpressions—extremely brief (faster than half a second), involuntary, unconscious facial expressions of emotion. Most people don't see them at all, and even those who notice that something happened often don't know how to interpret it. Microexpressions signal concealed, suppressed, or repressed thoughts and feelings. Being able to identify microexpressions when they occur can provide people with important cues to others' mental states beyond the spoken word.

Learning to ask good questions and to listen more than one talks is critical, as is honing active listening and observational skills. When trained on scientifically validated indicators, professionals and laypersons alike get better at detecting deception. People can learn to be more sensitive to other people's states of mind, including when they are telling lies or truths, by listening more and paying attention to their deeds as well as their words.


For Further Reading

Matsumoto, D., & Wilson, M. (2023). Behavioral indicators of deception and associated mental states: Scientific myths and realities. Journal of Nonverbal Behavior (Special Issue on Innovations in Nonverbal Deception Research). https://doi.org/10.1007/s10919-023-00441-w

Matsumoto, D., & Wilson, M. (2023). Incorporating consciousness into an understanding of emotion and nonverbal behavior. Emotion Review, 15(4), 332-347. https://doi.org/10.1177/17540739231163177

Frank, M. G., & Svetieva, E. (2015). Microexpressions and deception. In Mandal, M., & Awasthi, A. (Eds.), Understanding Facial Expressions in Communication. Springer. https://doi.org/10.1007/978-81-322-1934-7_11


David Matsumoto is Professor of Psychology at San Francisco State University and Director of Humintell, a company that engages in research, consultation, and training in investigative interviewing, threat assessment, social influence, and cross-cultural adjustment.

Why is Moral Hypocrisy So Pervasive?

Hypocrisy is often viewed negatively and reflects poorly on moral character. Yet hypocrisy is also common in our social world. People elect hypocrites to office, they buy products from hypocritical companies, and they may even behave hypocritically in everyday social interactions. If hypocrisy is so negative, why are hypocritical individuals, leaders, and organizations supported and trusted? We address this question in recent research examining the social consequences of hypocrisy in the context of honesty.

Studying hypocrisy in the context of honesty is interesting because people hold divergent private and public beliefs about honesty. Many people promote absolute norms of truth-telling ("honesty is the best policy") publicly, but people also admit that they do lie in everyday life and even find lying to be ethical at times. For example, many people think it is sometimes okay to lie to protect another person's feelings. If people frequently lie and think lying can be appropriate at times, but simultaneously support policies of absolute honesty publicly, at least some hypocrisy is taking place. In our work, we look at judgments of hypocrites who promote absolute norms of truth-telling—that it is never okay to lie—but then lie, compared to judgments of those who admit to holding flexible views on honesty (it is sometimes okay to lie) and then lie.

Across six studies, we find that people evaluate hypocrites who endorse absolute honesty and then lie more positively than consistent communicators who take flexible stances on honesty prior to lying. People view the hypocrites as more moral and are more willing to trust them. In political scenarios, people indicate stronger voting intentions for hypocritical politicians who endorse absolute honesty prior to lying than for those who take flexible honesty stances and tell the same lies. Even people who admit that they believe it is sometimes okay to lie still evaluate others who endorse absolute honesty and then lie more positively than those who take nuanced views on lying. Why is hypocrisy viewed more positively than flexibility in the context of honesty?

We argue that absolute honesty stances are viewed as indicating stronger commitment to honesty and a greater likelihood of future honest behavior. If a communicator says it is never okay to lie, people interpret this stance as genuine. When the communicator then lies, hypocrisy does slightly diminish expectations of future honesty, but not greatly. Although hypocrisy is often assumed to discredit people's words completely, we find that people do not entirely discount hypocritical honesty stances. Absolute honesty stances are not dismissed when communicators lie—they are still viewed as a reliable signal of communicators' values and future behaviors regarding honesty.

In fact, we found a strong "moral flexibility penalty" throughout our research. Admitting that lying is sometimes okay leads to low moral evaluations and expectations of future honesty, regardless of one's behavior. This flexibility penalty is severe enough that hypocritical communicators are evaluated more positively and trusted more than the communicators endorsing flexible honesty when lying, even though the flexible stance accurately describes and aligns with deceptive behavior.

Furthermore, we find that communicators anticipate these results themselves. In one study, we surveyed local government officials across the United States and found that officials predicted that their constituents would trust them more for hypocritically endorsing absolute honesty and lying than if they had admitted to holding flexible honesty views and lied. In other words, these officials expected that moral flexibility would be more costly than hypocrisy.

As long as people anticipate the severe social costs of acknowledging moral nuance, they are likely to endorse absolute honesty stances publicly—even when these stances do not actually reflect their private beliefs or behaviors. By highlighting this tension, our research helps to explain why hypocrisy is so pervasive. Hypocrisy still has negative consequences but an alternative to hypocrisy—admission to moral nuance—is even worse.


For Further Reading

Huppert, E., Herzog, N., Landy, J. F., & Levine, E. (2023). On being honest about dishonesty: The social costs of taking nuanced (but realistic) moral stances. Journal of Personality and Social Psychology. https://doi.org/10.1037/pspa0000340

Jordan, J. J., Sommers, R., Bloom, P., & Rand, D. G. (2017). Why do we hate hypocrites? Evidence for a theory of false signaling. Psychological Science, 28(3), 356-368. https://doi.org/10.1177/09567976166857

Jordan, J., & Sommers, R. (2022). When does moral engagement risk triggering a hypocrisy penalty? Current Opinion in Psychology, 101404. https://doi.org/10.1016/j.copsyc.2022.101404


Elizabeth Huppert is a postdoctoral fellow at the Dispute Resolution Research Center (DRRC) at the Kellogg School of Management at Northwestern University researching moral judgment.

The Personal Costs of Profitable Lies

Would you lie if you knew it would help you and were guaranteed to get away with it? People often lie to get what they want. For instance, people lie about themselves to attract dates, their credentials to land jobs, and alternative buyers to enhance their negotiating position. And at least in negotiation contexts, these lies often pay off.

Getting away with lies often comes with material rewards, but does it pay psychologically? On the one hand, research shows that some forms of deception, such as successfully cheating on a task, can feel thrilling. On the other hand, misleading another person might taint the thrill of gaining material rewards if people dwell on their deceptive tactics. To reconcile these competing predictions, we conducted studies to identify the psychological consequences of duping others for material gain. We studied this issue in the context of buyer-seller negotiations, where lies are especially rampant.

In one study, for example, participants acted as a seller in a negotiation over a computer and stood to earn more money the higher the sale price. Half the participants interacted with a buyer unaware of a defect in the computer, presenting the seller with an opportunity to lie; most of those sellers (74%) did lie to get a better price. The other half of the participants were randomly assigned to a control condition where the buyer was aware of the defect, eliminating the opportunity to lie.

On average, participants who seized the opportunity to lie earned more money in the experiment. But, despite making more money, they were less satisfied with the negotiation experience and their outcome than sellers who did not have the opportunity to lie. Three additional studies revealed a similar pattern. Undetected lies carried psychological costs even when they allowed negotiators to profit economically and remained hidden from counterparts. Despite the monetary benefits, deception elicited more guilt than positive emotion in the deceiver and tainted the deceiver's negotiation experience.

Another Cost: Relationship Damage

Our research also revealed another cost exacted by undetected lying: damage to the relationship with the victim. In another study, participants again negotiated over the sale of a computer with a counterpart buyer whom they either did or did not have the opportunity to deceive. This time, however, participants engaged in a follow-up negotiation with the same counterpart in a context that did not present them with an opportunity to lie. Despite their earlier lies remaining concealed, participants' earlier lies colored their subsequent interaction with that counterpart. Deceptive participants were less satisfied with the second encounter with the same counterpart than participants without the opportunity to lie in the first negotiation. Further, they were less likely to choose to interact again with that counterpart on a nondescript third task. In other words, they opted out of the relationship faster. These findings indicate that undetected dishonesty can damage relationships in the eyes of the deceiver, undermining their satisfaction in future encounters with the same counterpart and leading them to avoid future interactions with that counterpart. In a final study, we replicated this pattern in a context where negotiators acted as agents and were "just following orders" given to them by a client.

What If There is a Financial Windfall?

We reasoned that large payoffs for lying might ease any psychological burden from acting dishonestly if one gains a lot of money from the lie. To test this reasoning, one of our studies compared a small incentive to lie to one 12.5 times the size. A larger incentive size did not eliminate guilt. On the contrary, guilt emerged regardless of incentive size, and the large incentive made participants feel even guiltier.

Does the Liar's Own Character Matter?

Presumably, people who lack moral character—they struggle with self-control, fail to empathize with others, and don't consider morality fundamental to their self-concept—should be relatively comfortable with telling lies. We measured those three traits associated with moral character. Yet those traits did not impact the amount of guilt induced by lying. Even participants who exhibited relatively low levels of the moral character traits we measured felt worse when they lied than when they did not have the opportunity to do so.

The bottom line is that self-serving lies incur psychological costs and can damage relationships, even when they go undetected in a competitive context like a negotiation. Next time you are tempted to lie, the psychological consequences might be worth considering. You could get away with the lie, but the psychological and relational costs might outweigh any material benefit.


For Further Reading

Van Zant, A. B., Kennedy, J. A., & Kray, L. J. (2022). Does hoodwinking others pay? The psychological and relational consequences of undetected negotiator deception. Journal of Personality and Social Psychology. https://doi.org/10.1037/pspi0000410


Alex B. Van Zant is an Assistant Professor in the Department of Management and Global Business at Rutgers Business School–Newark and New Brunswick, Rutgers University. His research examines how people can enhance their ability to manage impressions in interpersonal interactions and make decisions more consistent with their ethical values.

Jessica A. Kennedy is an Associate Professor of Management at Vanderbilt University.  She holds a PhD from UC Berkeley and a B.S. from the University of Pennsylvania, and her research investigates potential conflicts between ethical values and career outcomes.

Laura J. Kray is a professor at UC Berkeley's Haas School of Business in the Management of Organizations group. Her research examines the role of gender stereotypes and mindsets on workplace behavior, including negotiations and ethical decision-making.