Facing Self-Delusion: Why We Don’t See The World As It Really Is

Making quick judgments based on what we see (and think we know) is part of the brain’s mechanism for helping us deal with all the information that assaults us. But in a world that’s increasingly at risk on many fronts, it’s when we learn how to question these judgments that we can best cope.

The modern world presents us with many threats to be anxious about—from natural catastrophes to those caused by human behavior. But not everyone shares the same belief about how serious those threats are and whether they can, or should, do anything about them. More often than not, we believe what we want to believe about the risk involved in any given scenario, often to our detriment.

For example, Enron, an American company that will go down in history as a poster child for self-delusion, was finally forced to face reality and file for bankruptcy in 2001, costing the US economy billions of dollars.

In 2012, a Canadian study assessed the health of more than 45,000 adults. Of those at the highest risk for heart attacks (five or more risk factors), nearly 18 percent dismissed the idea that they needed to take steps to reduce their risk.

A 2016 Yale study suggested that, even though 70 percent of Americans believe global warming is happening, more than 50 percent believe it will harm them very little, if at all.

There’s no question that different people weigh risks differently. This is clear simply from the daily clamor of public-policy arguments over such topics as environmental pollution or reducing gun violence. But in an age when information abounds and is available to anyone with an Internet connection, how is it that we come to very different conclusions about how to weigh the evidence and judge the risk of any given threat? If it were simply a matter of logic, surely we could all agree on how to respond to the risks we face, whether local or global. But as it stands, we can’t even always agree on what those risks are.

“Those who deal with major disaster scenarios say that if you have the threat of a tsunami, for example, typically half the people will say, ‘It’s never going to happen to us.’” - Peter Townsend, Vision interview (2017)

Experience, as well as research, teaches us that the human brain sometimes exaggerates trivial risks while dismissing or minimizing hazards that, considered rationally, should alarm us at least as much, if not more. To use an ancient metaphor, you might say that we strain out gnats while swallowing camels. Why do we do that?

The truth is, the human brain regularly uses shortcuts (known as heuristics), usually without our conscious participation, which sometimes cause us to miss important considerations. Nobel Prize winner Daniel Kahneman, who has long studied human decision-making, published a number of early papers on the subject with his late colleague Amos Tversky. To help us understand how we employ these shortcuts, he sketches a useful model of two systems of thinking, underscoring that neither system acts alone.

System 1, he writes, “operates automatically and quickly, with little or no effort and no sense of voluntary control.” System 2, on the other hand, “allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.” As System 2 puts effort into learning, some of what it learns is handed over to System 1; with enough practice, for instance, driving a car or riding a bike becomes a System 1 activity.

It’s System 1 that leaps into action to save us time when we need to make quick decisions or fill in gaps of information or meaning. As Kahneman describes it, System 1 generally does this by answering an easy question in place of a harder one. This can be useful in many respects, but left unchallenged, it also presents a hazard to our thinking that can prevent us from recognizing serious risks.

The good news is that System 2 can certainly step up to offer the necessary challenge when we think of asking it to pay attention. The bad news is that we aren’t always aware when we’re in a situation where System 1 has taken the lead. And however brilliant or well read we may be, we are each susceptible to using the same mental shortcuts as everyone else.

Knowing that our own mind plays tricks on us, affecting our perception of risk and our predisposition in weighing evidence, how can we hope to respond to the serious challenges we face on a global scale? What, if anything, can we do to minimize the effect these built-in thinking shortcuts may have on how far we’re led astray?

Surely the place to begin is to become aware of the potential pitfalls.

Shortcuts Can Be the Pits

Paul Slovic is a research professor at the University of Oregon and founder of Decision Research, an organization that studies the science behind human judgment, decision-making and risk. Slovic has been researching and publishing on this since the 1960s. One area where he and his colleagues have contributed heavily to our understanding of human decision-making and risk perception has been dubbed the affect heuristic.

Affect, in psychological jargon, refers to the subjective emotions or feelings a person displays; you might think of it as a mood state. Someone who has a negative affect might be experiencing a distressing emotion such as sadness, anxiety, anger or irritation. Someone with a positive affect might be feeling interest, joy, enthusiasm, alertness—a positive frame of mind in general.

While all mental shortcuts involve an element of feeling, we know this particular one is in play when we evaluate a risk based on the negative or positive feelings we associate with it. System 1 replaces the harder question “What do I think about this?” with an easier one: “How do I feel about this?” When we feel good about something, whether or not that feeling comes from personal experience, its benefits seem to loom larger than its risks. If we have negative feelings about it, the risks seem to outweigh the benefits.

This shortcut works well enough when we can trust the accuracy of our feelings. Unfortunately, this isn’t always the case. We might be attaching a positive or negative feeling to an image, experience or object that doesn’t warrant it. Advertisers manipulate us into doing this all the time. Thanks to System 1, we also view our favorite sports team or political party on the basis of the positive feelings we came to the table with. We may think our views arise from a logical assessment, but if we find ourselves with an immediate gut response to something that warrants a longer look, it generally means System 1 is running the show.

Of course, this is only one of many shortcuts in our arsenal for evaluating risk. Kahneman and Tversky’s early work concentrated on several, including the representativeness heuristic and the availability heuristic.

The first describes our tendency to expect small samples to be representative of a larger group. Let’s say we’re told that Rob wears glasses and tweed, reads a lot, is somewhat eccentric, and can rattle off unusual facts with surprising ease. We’re asked to predict whether he’s a farmer or a university professor. Kahneman and Tversky found that most of us would guess he’s a professor based on how representative he is of the professor stereotype. This holds true even when we know there are many more farmers in his town than professors, a factor that should certainly carry weight. It’s easy to see how this shortcut can lead us down the wrong path, especially when we use it to make judgments about a whole class from a small sample. For instance, if the media reports mostly on Muslims who are terrorists and we don’t know very many Muslims personally, we might conclude that most Muslims are terrorists—despite data indicating that only 46 Muslim Americans (out of 3.3 million) were linked with violent extremism in 2016. And that number was down from the previous year.
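Kahneman and Tversky’s finding is, at bottom, about neglected base rates, and a quick calculation makes the point concrete. The sketch below uses purely hypothetical numbers (they are not from the original studies): a town with 1,000 farmers and 20 professors, where Rob’s bookish description fits 80 percent of professors but only 5 percent of farmers.

```python
# A minimal base-rate sketch with hypothetical numbers (not from the studies):
# 1,000 farmers and 20 professors; the "bookish" description fits 80% of
# professors but only 5% of farmers.

n_farmers, n_professors = 1000, 20
p_desc_given_professor = 0.80
p_desc_given_farmer = 0.05

# Expected number of people in each group who match the description.
matching_professors = n_professors * p_desc_given_professor   # 16
matching_farmers = n_farmers * p_desc_given_farmer            # 50

# Bayes' rule: P(professor | description).
p_professor = matching_professors / (matching_professors + matching_farmers)
print(f"P(professor | description) = {p_professor:.2f}")      # ~0.24
```

Even with a description that strongly favors the professor stereotype, the sheer number of farmers means Rob is still roughly three times more likely to be a farmer. The representativeness heuristic quietly drops that base rate from the judgment.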

The availability heuristic may seem somewhat similar, but it’s more about frequency than representation. Kahneman and Tversky defined it as “the process of judging frequency by the ease with which instances come to mind.” In other words, we judge whether an event might happen based on how easily we can recall a similar event from the past or imagine a future one. So if we can think of several friends who divorced when they were in their 60s, we’re likely to judge that age as a common one for divorces—even though it may be rare across the whole population.

“WYSIATI [What You See Is All There Is] means that we use the information we have as if it is the only information. We don’t spend much time saying, ‘Well, there is much we don’t know.’” - Daniel Kahneman, “A Machine for Jumping to Conclusions” in Monitor on Psychology

Slovic explored this tendency in the context of risk by asking participants to judge the frequency of various causes of death. He and his team presented subjects with pairs, such as tornadoes versus asthma, or strokes versus accidents, asking which was the more common killer. Subjects were consistently misled by how readily they could call media coverage to mind. (In case you’re wondering, at the time of the study asthma caused 20 times more deaths than tornadoes, and strokes were responsible for nearly twice as many deaths as all accidents combined.)
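A small simulation can illustrate how this skew arises. The numbers below are purely hypothetical (they are not Slovic’s data): rare but dramatic causes account for 2 percent of deaths, yet memory and media retain nearly all of them while preserving only a sliver of the mundane ones.

```python
import random

# A hypothetical sketch of availability bias (illustrative numbers only):
# dramatic causes are rare but highly memorable.
random.seed(1)

events = ["dramatic" if random.random() < 0.02 else "mundane"
          for _ in range(10_000)]

# Recall probabilities: nearly all dramatic cases are remembered or reported,
# but only 1% of mundane ones are.
recall_prob = {"dramatic": 0.90, "mundane": 0.01}
recalled = [e for e in events if random.random() < recall_prob[e]]

true_share = events.count("dramatic") / len(events)
recalled_share = recalled.count("dramatic") / len(recalled)
print(f"True share of dramatic causes:     {true_share:.0%}")
print(f"Share among easily recalled cases: {recalled_share:.0%}")
```

When frequency is judged from what comes to mind rather than from the full record, the rare, vivid cause can look like the common one.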

What we fail to take into account when we let this mental shortcut operate unchecked is that it is the unusual, the aberration—not the mundane—that is covered in news reports or called to memory most easily from our own experience. Even if we can recall several instances of similar events, remember that other pitfall: our own experience is not necessarily representative of a universal truth. By the same token, just because we can’t remember an example of a particular type of event doesn’t mean it hasn’t happened or that it can’t happen in the future—although System 1 is happy to lull us into complacency. The fact that 50 percent of Americans believe global warming will impact them “very little if at all” can be attributed at least partially to the workings of the availability heuristic.


On Second Thought

It should be clear by now that when we hear ourselves say “I went with my gut,” we’re putting at least one of System 1’s shortcuts to use—and possibly more than one. It’s not that our gut isn’t capable of steering us right sometimes. It’s that it can be wrong as often as it’s right. So that phrase should always prompt us to go back and do a recheck, just in case.

Even when we revisit our decisions, knowing the potential pitfalls, we’re still capable of falling prey to shortcuts or biases. It can be easy to convince ourselves that we’ve effectively zapped all of our mental malware even while a powerful trojan operates in the background.

One of the most potent of these is our blind confidence in our own intelligence and reasoning skills. But confidence is also a feeling, Kahneman points out, “one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence.” If the conclusion of the story just “makes sense” to us, we have no pangs in ignoring a few inconvenient but factual details. In fact, the fewer the details, the easier it is for us to construct a coherent story, says Kahneman.

“Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” - Daniel Kahneman, Thinking, Fast and Slow

Aiding and abetting this trojan is the fact that each of us is emotionally dependent on preexisting assumptions—our own little idols of the mind, if you will—that drive the way we construct our stories. We may think we’re reasoning from an objective position, but as soon as overconfidence in the quality of our thinking takes hold, we’re less motivated to use reason at all. Our attention becomes drawn to any evidence that seems to justify our confidence, while we ignore what doesn’t. Rationalization has then taken the place of reasoning.

Perhaps one of the more dramatic examples of this can be seen in the political arena, where so many of the risks facing humanity are publicly discussed. Political theorists once thought that polarized partisanship would ensure “intelligent partisan decisions,” because people would be able to logically determine the best of two opposite courses. In much the same way, economists once believed the economy was held together by people who make rational choices that are always in their best interest. However, just as Kahneman revealed “the rational economist” to be an imaginary figure, so other researchers are finding “the rational public” to be a phantasm in the political arena. All of the mental shortcuts people use in economics are rife in politics too, and no wonder: the humans operating in both realms are the same species, subject to the same errors in judgment.

We know the truth of this. We see it on a daily basis, in others at least. It’s easy to spot cognitive errors in everyone else; it’s our own we’re often blind to. And this may be the biggest obstacle preventing us, collectively, from addressing the pressing global problems that threaten human survival. If we could extricate such discussions from the political arena and place them in a moral one, we might stand a chance.

“For survival, a wise humanity will be guided by beliefs that are both well-evidenced and moral.” - Julian Cribb, Surviving the 21st Century

Moral principles would presumably be easier to agree on, because they exist above the political fray: Love your neighbor as yourself; if you see someone in need and can help them, do so; be good stewards of the earth. We ignore many moral imperatives to our detriment, but these few would give us a start on a more constructive path.

To hope for a radical transformation of humanity in the near future is idealistic at best. But beginning on a much more local scale—with me, with you—we can make positive individual changes to our thinking and behavior. We might not change the trajectory of human survival on a global scale, but we can have the satisfaction of working hard at correcting our own thinking and behavior, doing what’s right for the right reasons. And what if, just what if, everyone else did the same?

Gina Stepp - http://www.vision.org/self-delusion-and-human-survival-cognitive-bias-7479


Selected References

    Daniel Kahneman, Thinking, Fast and Slow (2011).
    Daniel Kahneman, “Don’t Blink! The Hazards of Confidence,” New York Times (October 19, 2011).
    Daniel Kahneman and Amos Tversky, “Subjective Probability: A Judgment of Representativeness” in Cognitive Psychology (1972).
    Thorsten Pachur, Ralph Hertwig and Florian Steinmann, “How Do People Judge Risks: Availability Heuristic, Affect Heuristic, or Both?” in Journal of Experimental Psychology (May 2012).
    Ellen M. Peters, Burt Burraston and C.K. Mertz, “An Emotion-Based Model of Risk Perception and Stigma Susceptibility” in Risk Analysis (2004).
    Robert Y. Shapiro and Yaeli Bloch-Elkon, “Do the Facts Speak for Themselves? Partisan Disagreement as a Challenge to Democratic Competence” in Critical Review (2008).
    Paul Slovic et al., “Affect, Risk, and Decision Making” in Health Psychology (2005).
    Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases” in Science (September 1974).
    Amos Tversky and Daniel Kahneman, “Availability: A Heuristic for Judging Frequency and Probability” in Cognitive Psychology (1973).
    Amos Tversky and Daniel Kahneman, “Belief in the Law of Small Numbers” in Psychological Bulletin (1971).
    Lea Winerman, “A Machine for Jumping to Conclusions,” interview with Daniel Kahneman, Monitor on Psychology (February 2012).