Why Smart People Do Stupid Things
How can someone so smart be so stupid? We’ve all asked this question after watching a perfectly intelligent friend or relative pull a boneheaded move.
People buy high and sell low. They believe their horoscope. They figure it can’t happen to them. They bet it all on black because black is due. They supersize their fries and order the Diet Coke. They talk on a cellphone while driving. They throw good money after bad. They bet that a financial bubble will never burst.
You’ve done something similarly stupid. So have I. Professor Keith Stanovich should know better, but he’s made stupid mistakes, too.
“I lost $30,000 on a house once,” he laughs. “Probably we overpaid for it. All of the books tell you, ‘Don’t fall in love with one house; fall in love with four houses.’ We violated that rule.” Stanovich is an adjunct professor of human development and applied psychology at the University of Toronto who studies intelligence and rationality. The reason smart people can sometimes be stupid, he says, is that intelligence and rationality are different.
“There is a narrow set of cognitive skills that we track and that we call intelligence. But that’s not the same as intelligent behaviour in the real world,” Stanovich says.
He’s even coined a term to describe the failure to act rationally despite adequate intelligence: “dysrationalia.”
How we define and measure intelligence has been controversial since at least 1904, when Charles Spearman proposed that a “general intelligence factor” underlies all cognitive function. Others argue that intelligence is made up of many different cognitive abilities. Some want to broaden the definition of intelligence to include emotional and social intelligence.
Stanovich believes that the intelligence that IQ tests measure is a meaningful and useful construct. He’s not interested in expanding our definition of intelligence. He’s happy to stick with the cognitive kind. What he argues is that intelligence by itself can’t guarantee rational behaviour.
Earlier this year, Yale University Press published Stanovich’s book What Intelligence Tests Miss: The Psychology of Rational Thought. In it, he proposes a whole range of cognitive abilities and dispositions independent of intelligence that have at least as much to do with whether we think and behave rationally. In other words, you can be intelligent without being rational. And you can be a rational thinker without being especially intelligent.
Time for a pop quiz. Try to solve this problem before reading on. Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?
Yes / No / Cannot be determined
More than 80 per cent of people answer this question incorrectly. If you concluded that the answer cannot be determined, you’re one of them. (So was I.) The correct answer is, yes, a married person is looking at an unmarried person.
Most of us believe that we need to know if Anne is married to answer the question. But think about all of the possibilities. If Anne is unmarried, then a married person (Jack) is looking at an unmarried person (Anne). If Anne is married, then a married person (Anne) is looking at an unmarried person (George). Either way, the answer is yes.
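For anyone who wants to see that case analysis checked mechanically, here is a brief sketch in Python. The facts (Jack married, George not, who is looking at whom) come straight from the puzzle; the encoding is my own.

    # Jack is married, George is not; Anne's status is unknown.
    # "Looks at" pairs from the puzzle: Jack -> Anne, Anne -> George.
    looks_at = [("Jack", "Anne"), ("Anne", "George")]

    # Enumerate both possibilities for Anne.
    for anne_married in (True, False):
        married = {"Jack": True, "George": False, "Anne": anne_married}
        # Is some married person looking at an unmarried person?
        answer = any(married[a] and not married[b] for a, b in looks_at)
        print("Anne married:", anne_married, "->", answer)

Both cases print True, so the answer is determined after all: yes.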
Most people have the intelligence to figure this out, if you prompt them with something like “think logically” or “consider all the possibilities.” But unprompted, they won’t bring their full mental faculties to bear on the problem.
And that’s a major source of dysrationalia, Stanovich says. We are all “cognitive misers” who try to avoid thinking too much. This makes sense from an evolutionary point of view. Thinking is time-consuming, resource intensive and sometimes counterproductive. If the problem at hand is avoiding the charging sabre-toothed tiger, you don’t want to spend more than a split second deciding whether to jump into the river or climb a tree.
So we’ve developed a whole set of heuristics and biases to limit the amount of brainpower we bring to bear on a problem. These shortcuts provide rough-and-ready answers that are right a lot of the time – but not always.
For instance, in one experiment, a researcher offered subjects a dollar if, in a blind draw, they picked a red jelly bean out of a bowl of mostly white jelly beans. The subjects could choose between two bowls. One bowl contained nine white jelly beans and one red one. The other contained 92 white and eight red ones. Thirty to 40 per cent of the test subjects chose to draw from the larger bowl, even though most understood that an eight per cent chance of winning was worse than a 10 per cent chance. The visual allure of the extra red jelly beans overcame their understanding of the odds.
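The arithmetic is a two-liner, which is what makes the result striking; most subjects could have done it in their heads:

    small_bowl = 1 / 10   # one red bean among 10: a 10 per cent chance
    large_bowl = 8 / 100  # eight red beans among 100: an 8 per cent chance
    print(small_bowl > large_bowl)  # True: the small bowl is the better bet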
Or consider this problem. There’s a disease outbreak expected to kill 600 people if no action is taken. There are two treatment options. Option A will save 200 people. Option B gives a one-third probability that 600 people will be saved, and a two-thirds probability that no one will be saved. Most people choose A. It’s better to guarantee that 200 people are saved than to risk everyone dying.
But ask the question this way – Option A means 400 people will die. Option B gives a one-third probability that no one will die and two-thirds probability that 600 will die – and most people choose B. They’ll risk killing everyone on the lesser chance of saving everyone.
The trouble, from a rational standpoint, is that the two scenarios are identical. All that’s different is that the question is restated to emphasize the 400 certain deaths from Option A, rather than the 200 lives saved. This is called the “framing effect.” It shows that how a question is asked dramatically affects the answer, and can even lead to a contradictory answer.
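A quick expected-value check, using the numbers above, confirms that the two framings describe exactly the same gamble:

    # "Saved" framing: expected survivors out of 600.
    option_a_saved = 200
    option_b_saved = (1 / 3) * 600 + (2 / 3) * 0   # = 200.0

    # "Die" framing: expected deaths out of 600.
    option_a_die = 400
    option_b_die = (1 / 3) * 0 + (2 / 3) * 600     # = 400.0

    print(option_a_saved, option_b_saved)  # 200 and 200.0: identical
    print(option_a_die, option_b_die)      # 400 and 400.0: identical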
Then there’s the “anchoring effect.” In one experiment, researchers spun a wheel that was rigged to stop at either 10 or 65. When the wheel stopped, the researchers asked their subjects whether the percentage of African countries in the United Nations was higher or lower than that number. Then the researchers asked the subjects to estimate the actual percentage. The people who saw the larger number guessed significantly higher than those who saw the lower number. The number “anchored” their answers, even though they thought it was completely arbitrary and meaningless.
The list goes on. We look for evidence that confirms our beliefs and discount evidence that discredits them (confirmation bias). We evaluate situations from our own perspective without considering the other side (“myside” bias). We’re influenced more by a vivid anecdote than by statistics. We are overconfident about how much we know. We think we’re above average. We’re certain that we’re not affected by biases the way others are.
Finally, Stanovich identifies another source of dysrationalia – what he calls “mindware gaps.” Mindware, he says, is made up of learned cognitive rules, strategies and belief systems. It includes our understanding of probabilities and statistics, as well as our willingness to consider alternative hypotheses when trying to solve a problem. Mindware is related to intelligence in that it’s learned. However, some highly intelligent, educated people never acquire the appropriate mindware. People can also suffer from “contaminated mindware,” such as superstition, which leads to irrational decisions.
Stanovich argues that dysrationalities have important real-world consequences. They can affect the financial decisions you make, the government policies you support, the politicians you elect and, in general, your ability to build the life you want. For example, Stanovich and his colleagues found that problem gamblers score lower than most people on a number of rational thinking tests. They make more impulsive decisions, are less likely to consider the future consequences of their actions and are more likely to believe in lucky and unlucky numbers. They also score poorly in understanding probability and statistics. For instance, they’re less likely to understand that when tossing a coin, five heads in a row does not make tails more likely to come up on the next toss. Their dysrationalia likely makes them not just bad gamblers, but problem gamblers – people who keep gambling despite hurting themselves, their family and their livelihood.
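The coin-toss fact is easy to verify for yourself. Here is a rough simulation sketch (the sample size and seed are my own choices): it finds every run of five heads in a long sequence of fair flips and checks what comes next.

    import random

    random.seed(0)
    flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

    # Whenever the previous five flips were all heads, record the next flip.
    next_flips = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

    # The fraction of heads on the following flip hovers around 0.5:
    # five heads in a row does nothing to make tails more likely.
    print(sum(next_flips) / len(next_flips))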
From early in his career, Stanovich has followed the pioneering heuristics and biases work of Daniel Kahneman, who won a Nobel Prize in economics, and his colleague Amos Tversky. In 1994, Stanovich began comparing people’s scores on rationality tests with their scores on conventional intelligence tests. What he found is that they don’t have a lot to do with one another. On some tasks, there is almost a complete dissociation between rational thinking and intelligence.
You might, for example, think more rationally than someone much smarter than you. Likewise, a person with dysrationalia is almost as likely to have higher than average intelligence as he or she is to have lower than average intelligence.
To understand where the rationality differences between people come from, Stanovich suggests thinking of the mind as having three parts. First is the “autonomous mind” that engages in problematic cognitive shortcuts. Stanovich calls this “Type 1 processing.” It happens quickly, automatically and without conscious control.
The second part is the algorithmic mind. It engages in Type 2 processing, the slow, laborious, logical thinking that intelligence tests measure.
The third part is the reflective mind. It decides when to make do with the judgments of the autonomous mind, and when to call in the heavy machinery of the algorithmic mind. The reflective mind seems to determine how rational you are. Your algorithmic mind can be ready to fire on all cylinders, but it can’t help you if you never engage it.
When and how your reflective mind springs into action is related to a number of personality traits, including whether you are dogmatic, flexible, open-minded, able to tolerate ambiguity or conscientious.
“The inflexible person, for instance, has trouble assimilating new knowledge,” Stanovich says. “People with a high need for closure shut down at the first adequate solution. Coming to a better solution would require more cognitive effort.”
Fortunately, rational thinking can be taught, and Stanovich thinks the school system should expend more effort on it. Teaching basic statistical and scientific thinking helps. And so does teaching more general thinking strategies. Studies show that a good way to improve critical thinking is to think of the opposite. Once this habit becomes ingrained, it helps you not only to consider alternative hypotheses, but also to avoid traps such as anchoring, confirmation bias and myside bias.
Stanovich argues that psychologists should perhaps develop tests to determine a rationality quotient (RQ) to complement IQ tests. “I’m not necessarily an advocate of pushing tests on everyone,” he says. “But if you are going to test for cognitive function, why restrict testing to just an IQ test, which only measures a restricted domain of cognitive function?”
Kurt Kleiner
http://kurtkleiner.com/stories/ut.why.smart.people.do.stupid.things.html