Making Sense of Explanations
A research team explores the factors behind how an explanation is constructed, from validating scientific arguments to opening a window to how conspiracies form
By Stacy Kish
Researchers working on artificial intelligence write code to improve machine predictions. Humans rarely worry about predicting events. Rather, they mull over past conditions to construct explanations. But how do humans form explanations?
If the shortest distance between two points is a straight line, the best explanation ought to be the simplest one. Simon DeDeo and Zachary Wojtowicz of Carnegie Mellon University applied Bayes' theorem to drill into how explanations are derived and identified the factors that people use — like blocks — to construct explanations. Their work was published in the January issue of the journal Trends in Cognitive Sciences.
“We are interested in how people behave, and explanations are a pretty basic psychological task,” said DeDeo, assistant professor of social and decision sciences at CMU and an external professor at the Santa Fe Institute. “A lot of the time, we ask why rather than what will happen next.”
Turning to Bayes
DeDeo and Wojtowicz applied Bayes' theorem, which combines prior knowledge with new evidence to compute the probability of a hypothesis, to the task. They used this deceptively simple approach to understand how “atomic” values, like co-explanation, descriptiveness and simplicity, interact to form evaluations. This approach also provides a way to predict other factors that could affect how an explanation forms.
Their work shows that explanations are built from simpler underlying components: parsimony, concision and unification. As complexity increases, humans become sensitive to two further factors — descriptiveness and co-explanation. Through this approach, DeDeo and Wojtowicz believe they can predict the existence of new values, such as revealed complexity and validity, that emerge as more evidence or information becomes available to update a hypothesis.
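The Bayesian framing above can be sketched in a toy calculation. This is an illustrative assumption, not the authors' model: the probabilities and the "descriptiveness weight" below are invented for the example. The idea is that, in log form, Bayes' theorem scores a hypothesis as a sum of a log prior (rewarding simplicity) and a log likelihood (rewarding descriptiveness, i.e. how well the hypothesis accounts for the evidence), and overweighting one term changes which explanation wins.

```python
import math

def log_posterior(log_prior, log_likelihood, descriptiveness_weight=1.0):
    """Score an explanation: log P(H|E) ∝ log P(H) + log P(E|H).

    A weight of 1.0 is the balanced Bayesian score; a weight above 1.0
    models overvaluing descriptiveness at the expense of simplicity.
    """
    return log_prior + descriptiveness_weight * log_likelihood

# Two hypothetical explanations for the same observations (numbers are
# illustrative): a mundane one that is plausible a priori and fits the
# evidence reasonably well, and a conspiratorial one that is wildly
# implausible a priori but "explains" nearly everything.
mundane = {"log_prior": math.log(0.10), "log_likelihood": math.log(0.60)}
conspiracy = {"log_prior": math.log(0.001), "log_likelihood": math.log(0.99)}

# With balanced values, the simple explanation scores higher.
balanced = {name: log_posterior(h["log_prior"], h["log_likelihood"])
            for name, h in (("mundane", mundane), ("conspiracy", conspiracy))}

# Overweighting descriptiveness (here by a factor of 10) flips the
# ranking: the all-explaining conspiracy now scores higher.
skewed = {name: log_posterior(h["log_prior"], h["log_likelihood"], 10.0)
          for name, h in (("mundane", mundane), ("conspiracy", conspiracy))}

print(balanced["mundane"] > balanced["conspiracy"])   # True
print(skewed["conspiracy"] > skewed["mundane"])       # True
```

The flip in the second comparison is the point: nothing about the evidence changed, only the relative weight given to one explanatory value, which is the sense in which the article later describes conspiracy theories "hacking" the brain's evaluation machinery.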
Their work not only supports the legitimacy of scientific arguments but also provides a way to understand abnormal phenomena, like delusions, conspiracy theories and extremist ideologies.
When Simplicity Goes Wrong
People want to find the simplest explanation for an event, but, when taken to an extreme, this approach can go wrong. According to Wojtowicz, pushing the definition of simplicity past a certain level of generality opens the door to many of the same paradoxes that plague the foundations of computation, mathematics and logic.
“As it turns out, simplicity is surprisingly complex,” said Wojtowicz, a Ph.D. student in DeDeo’s lab. “Our brain clearly deploys a very powerful and general notion of simplicity, so it seems to be toeing the line of what is possible in a delightfully subtle way that we have yet to fully understand.”
The researchers found underlying irregularities in how some people choose among explanations. According to DeDeo, people value co-explanation because it acts as an umbrella, bringing disparate items together and making explanations more palatable. Explanations that link things together are usually good, but overvaluing that quality can lead to problems.
“In the same way that unhealthy snacks hack the brain's mechanism for evaluating food by indulging our preference for salt, fat and sugar to the exclusion of other nutrients, conspiracy theories hack the brain's mechanism for making sense of the world by appealing to a limited set of explanatory values at the expense of others,” Wojtowicz said. “The question then becomes whether the conspiracy theories can mutate faster than we can demonstrate their predictions to be false or inconsistent.”
This conspiratorial mutation is further fueled in the era of social media, which spreads misinformation at unprecedented speed. The researchers believe this study provides an additional lens for understanding how pathological explanations can develop into extremist thought and the acceptance of conspiracy theories.
DeDeo and Wojtowicz published their work, titled “From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning,” in Trends in Cognitive Sciences. The work received support from the John Templeton Foundation and the Survival and Flourishing Fund.