As seen in Psychology Today.

Misinformation is endemic in our society, but it is not a new problem. There has never been a shortage of false or inaccurate information in this world, nor has there ever been a want of misguided beliefs based on misinformation. Over time, most of us become more adept at recognizing false or inaccurate information. We come to understand that individuals, without ill intent, can be wrong when they relay information to us, and that corrections and amendments to previous statements or beliefs typically give us a more accurate picture of the world. We recognize that we can often be misinformed and that we need to revise and update our beliefs frequently. This is true of misinformation that is passed along without an ulterior motive, as well as of digital media that is expertly designed to spread quickly, to infuriate, and to ruthlessly exploit individuals’ preconceived notions through confirmation bias.

While the latter is most certainly a hot topic, this post is not about media literacy or confirmation bias. Rather, it concerns a separate phenomenon, one that has been studied intensively for decades, known as the “continued influence effect” (CIE).

What Is the Continued Influence Effect?

As the name suggests, researchers have found that certain kinds of information can be “sticky” even after a retraction. Such information, despite being recognized as false, continues to influence individuals’ reasoning and decision-making. Writing in 1994, Johnson and Seifert observed that the CIE could not be written off as a simple mistake, since previous studies had found that “influence can occur even when subjects have made the connection between the disregard instruction and the information it refers to.”

At the core of the CIE are difficulties in editing existing memory with updated and more accurate information. It is less about political tribalism or stubbornness and more about the persistence of misinformation in memory.

This raises the question: Why are some kinds of information more memorable than others?

The Continued Influence Effect in Real Life

Imagine that your next-door neighbor tells you that a nearby house recently burned down. He says he heard from a friend that the fire department is investigating it as an act of arson. He also informs you that the fire-damaged house is owned by a woman who recently went through a very messy divorce. You agree that it seems plausible that her ex-husband could have set the fire intentionally.

The following day, your neighbor tells you that he was mistaken. There was no arson investigation and the fire department found clear evidence of an electrical malfunction. Clearly, he says, the ex-husband was not involved with the fire.

You understand that there is no evidence to support the belief that the fire was set intentionally, and that there is even evidence that explicitly discredits it. However, days later you catch yourself telling others that you still think the ex-husband was behind it.

This is the CIE in action.

The Continued Influence Effect in a Controlled Environment

Formal studies into the CIE are typically designed to have one control group and one experimental group. All participants read a story about a fictional event. In the experimental group, one link in the causal chain of the story is later retracted, and then replaced with updated information. In the control group, no retraction occurs.

Participants in the control group tend to have no problem accurately describing the events in the story. In the experimental group, however, the retraction usually only halves the number of references to the misinformation. This is true even when participants remember the retraction and accept that it is true.

Even more surprisingly, researchers have found that strengthening the language of a correction and underscoring that the previous information was incorrect can backfire: participants become more likely to rely on the misinformation. Similarly, if an alternative explanation is more complicated or more difficult to understand than the original misinformation, participants also become more likely to rely on the misinformation.

Lest We Remember

A few competing conceptual models have been proposed to explain the CIE:

1. If there is a gap in our retelling of a story, we will reflexively bridge it, even if we know that the bridge is constructed out of misinformation.

2. When we retell a story, invalid and valid memories (the misinformation and its more accurate retraction) compete for automatic activation and the most seemingly reasonable option will be repeated. Oftentimes, the invalid memory/misinformation wins out.

3. We may store the retraction as simply the original piece of information with a “negation tag” attached to it (e.g. “husband = arsonist—NOT”). That negation tag can sometimes get lost if it is not a familiar part of the story.

Regardless of which model is correct, what seems clear is that we tend to remember pieces of information as being part of a larger story, and that we tend to reflexively favor narratives that make sense to us over narratives that are either unfamiliar or incomplete. Like nature, we abhor a vacuum.

We are also far more likely to hold on to pieces of misinformation that we have incorporated into our worldview, regardless of what that worldview is. In a sense, the story containing the misinformation becomes part of a larger puzzle, which makes us even more likely to believe it.

Coping with the Misinformed

When a friend or loved one has a habit of retelling a story that contains pieces of misinformation, your response may range from mild amusement to extreme annoyance. This is natural. It is also natural to become frustrated when the misinformation is repeated frequently, even if it emerged innocently. It becomes even more frustrating when your attempts to persuade them that they are misinformed only entrench the belief in question.

To avoid this backfire effect, some of the recommendations provided in a 2012 paper by Lewandowsky and colleagues include:

• Consider whether your correction leaves gaps in their narrative, and attempt to fill those gaps with easy-to-digest alternative explanations.

• Emphasize the facts that you wish to communicate and avoid repeating—and thereby making more familiar—the misinformation.

• Use simple and concise language to illustrate your point.

Finally, presenting a binary choice architecture that is insulting or condescending is simply not helpful. For example, telling someone that they can pick either the “right” or the “wrong” option will inevitably lead them to double down on whichever choice they’ve already made. Instead, provide a narrative that offers contextualization and incorporates corrective information. Ultimately, the goal is not to “win” an argument, but to overcome the influence of misinformation.

An abbreviated version of this post was published on KevinMD.