Putting the Pieces Together: Coherence of Evidence in Decision Making
The next generation of decision-making theory is here.
Daniel Kahneman and Amos Tversky demonstrated that human decision-making is not entirely rational. It can be, in fact, decidedly irrational—riddled with biases like holes in Swiss cheese.
Gerd Gigerenzer followed up on their work from an evolutionary perspective, arguing that there might be some underlying logic to these biases. Perhaps we aren’t “general purpose” reasoners with a single set of logical rules for making decisions, but we instead have an “adaptive toolbox.” These simple rules for making decisions aren’t deep and complex—and can lead to biased, irrational thought—but they are useful for making decisions in certain environments. In Gigerenzer’s account, whenever we need to make a decision, we pull out a simple tool (or decision rule) that works for our situation.
Now, Andreas Glöckner is putting forward a new model of decision-making, one based not on logical holes or a toolbox of decision rules, but on coherence of evidence. In his talk today at the Dynamic Systems and Computational Modeling preconference, “The Parallel Constraint Satisfaction Model for Decision-Making (PCS-DM): Review of Findings and Modeling Demonstration,” Glöckner presented an account of how individuals make decisions that shifts the emphasis of prior approaches.
Glöckner’s model is based on the idea that decisions draw on many different pieces of evidence, each of which is a more or less reliable indicator of the real world. For example, an individual asked “Which city is bigger, San Antonio or Sacramento?” might use cues like “whether the city is in a bigger or smaller state” or “how often they have heard someone mention the city” to make the decision. Both the size of the state and how often people mention the city have some actual validity as indicators of city size.
A series of cues may or may not all point in the same direction. Further, different cues might carry different weights: how often people talk about a city, for instance, may matter more than which state it is in. In Glöckner’s model, people making decisions aren’t just using the best cue or adding the cues up; they’re integrating across them all.
This process is iterative, and people bounce rapidly between considering cues, considering options, and back again. Ultimately, they converge on a single decision—the most coherent decision given the evidence.
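To make the iterative, converging process above concrete, here is a minimal sketch of a parallel constraint satisfaction network in Python. It is an illustration under stated assumptions, not Glöckner’s published implementation: the update rule is a standard connectionist settling rule, and every weight and node is a made-up number for the city-size example. A source node drives each cue in proportion to its validity, each cue excites the option it favors, the two options inhibit each other, and activations are updated until the network settles on the most coherent pattern.

```python
# Illustrative sketch of a parallel constraint satisfaction (PCS) network for
# the two-option city-size question. Weights, nodes, and the update rule are
# assumptions for demonstration, not Glöckner's published parameters.

def settle(weights, decay=0.05, floor=-1.0, ceiling=1.0,
           tol=1e-6, max_iter=10_000):
    """Iteratively update node activations until the network converges."""
    n = len(weights)
    a = [0.0] * n
    a[0] = 1.0  # node 0 is a constant source that drives the cue nodes
    for step in range(max_iter):
        new_a = list(a)
        for i in range(1, n):  # the source node stays clamped at 1.0
            net = sum(weights[i][j] * a[j] for j in range(n))
            if net > 0:  # excitatory input pushes activation toward ceiling
                new_a[i] = a[i] * (1 - decay) + net * (ceiling - a[i])
            else:        # inhibitory input pushes activation toward floor
                new_a[i] = a[i] * (1 - decay) + net * (a[i] - floor)
        if max(abs(x - y) for x, y in zip(new_a, a)) < tol:
            return new_a, step  # converged: a coherent, stable state
        a = new_a
    return a, max_iter

# Nodes: 0 = source, 1 = "bigger state" cue, 2 = "mentioned often" cue,
#        3 = option A (San Antonio), 4 = option B (Sacramento).
n = 5
w = [[0.0] * n for _ in range(n)]

def link(i, j, v):
    w[i][j] = w[j][i] = v  # bidirectional (symmetric) connections

link(0, 1, 0.3)    # weaker cue: validity of "bigger state"
link(0, 2, 0.8)    # stronger cue: validity of "mentioned often"
link(1, 4, 0.05)   # "bigger state" weakly favors Sacramento
link(2, 3, 0.15)   # "mentioned often" favors San Antonio
link(3, 4, -0.20)  # options compete: mutual inhibition

act, steps = settle(w)
# The option supported by the stronger, more valid cue should end up with
# the higher activation (act[3] > act[4]): the network's coherent decision.
```

Note how the mutual inhibition between options produces the winner-take-all convergence the model describes: as one option's activation rises, it suppresses the other, and the network settles into a single coherent interpretation of all the cues at once.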
The test of Glöckner’s model, called a parallel constraint satisfaction (PCS) model, is how well it can predict the responses of individuals making decisions in experiments. And by that metric, it improves on every model to which it has been compared.
The PCS-DM model can predict which decision an individual will make with over 90% accuracy (up to 97%, depending on the experimental condition), and it also makes new kinds of predictions: how fast individuals make decisions and how confident they are in their final choices. When presented with more coherent evidence, individuals decide faster, and they are more confident in the result.
Another classic social psychological finding falls out of this framework: cognitive dissonance. Cognitive dissonance theory suggests that individuals feel an unpleasant arousal when they hold beliefs that are in conflict. This arousal motivates them to find a single set of beliefs without conflict.
This process matches Glöckner’s coherence-based account of decision making. Further, Glöckner reviewed evidence that people feel more arousal when they are presented with evidence that is incoherent. The coherence of cues may be a more sophisticated way of calculating dissonance in decision-making.
Overall, Glöckner has a decade’s worth of lab experiments on decision-making that pit his coherence-based model against the other major theoretical approaches. Time and again, he compares his model’s predictions to those of competing theories, such as simple associative learning, comparison of new cases to stored exemplars, and the use of multiple simple “tools,” and finds that his model fits the data better.
When we talk about decision-making in everyday life, which has become a staple of politics and public policy with the rise of “behavioral economics,” we should now consider not just how “irrational” decisions can be, but also how coherent the evidence people are presented with is. The task we face is not just to reason through evidence; it is to find the most coherent explanation of the evidence before us.
Alexander Danvers is a graduate student in social psychology at Arizona State University. He studies emotions and social interaction from a dynamic systems perspective.