The Self-Deceptive Psychology of People and Institutions – Part 2
We are our own press secretaries.
Continuing this essay series on Robin Hanson’s book The Elephant in the Brain: Hidden Motives in Everyday Life, this installment explores how, deep down, we are all our own press secretaries.
As Hanson writes, there are several types of deceivers, including what he calls the Madman, the Loyalist, and the Cheater, each with their own particular motives geared toward different situations:
“I’m doing this no matter what,” says the Madman, “so stay outta my way!” … Rick Lahaye explains how athletes suffer when they don’t play the Madman:

Athletes use small cues of tiredness from close competitors to give themselves a boost and keep pushing forward during a race (e.g., a marathon runner thinking, “Do you see him breathe? He’s almost done. Just keep pushing for one more bit and you will beat him.”). Because of this, athletes conceal (negative) information about [themselves] to competitors. If you show any “signs of weakness,” the opponent will see a chance for success and will be more willing to keep spending energy …

“Sure, I’ll go along with your beliefs,” says the Loyalist, thereby demonstrating commitment and hoping to earn trust in return. In many ways, belief is a political act. This is why we’re typically keen to believe a friend’s version of a story—about a breakup, say, or a dispute at work—even when we know there’s another side of the story that may be equally compelling. It’s also why blind faith is an important virtue for religious groups, and to a lesser extent social, professional, and political groups. When a group’s fundamental tenets are at stake, those who demonstrate the most steadfast commitment—who continue to chant the loudest or clench their eyes the tightest in the face of conflicting evidence—earn the most trust from their fellow group members. The employee who drinks the company Kool-Aid, however epistemically noxious, will tend to win favor from colleagues, especially in management, and move faster up the chain.

There’s a famous Chinese parable illustrating the Loyalist function of our beliefs:

Zhao Gao was a powerful man hungry for more power. One day he brought a deer to a meeting with the emperor and many top officials, calling the deer a “great horse.” The emperor, who regarded Zhao Gao as a teacher and therefore trusted him completely, agreed that it was a horse—and many officials agreed as well.
Others, however, remained silent or objected. This was how Zhao Gao flushed out his enemies. Soon after, he murdered all the officials who refused to call the deer a horse. Zhao Gao’s ploy wouldn’t have worked if he had called the deer a deer. The truth is a poor litmus test of loyalty …

“I have no idea what you’re talking about,” the Cheater says in response to an accusation. “My motives were pure.” Learning about a transgression sometimes invokes a moral or legal duty to do something about it. If we see a friend shoplift, we become complicit in the crime. This is why we might turn a blind eye or strive to retain plausible deniability—so that, when questioned later, we’ll have nothing to hide …

Again, in all of these cases, self-deception works because other people are attempting to read our minds and react based on what they find (or what they think they find). In deceiving ourselves, then, we’re often acting to deceive and manipulate others. We might be hoping to intimidate them (like the Madman), earn their trust (like the Loyalist) … or throw them off our trail (like the Cheater).
However, while we often act the madman, the loyalist, and the cheater, we don’t want others to view us that way. As Hanson explains:
As psychologists Douglas Kenrick and Vladas Griskevicius put it, “Although we’re aware of some of the surface motives for our actions, the deep-seated evolutionary motives often remain inaccessible, buried behind the scenes in the subconscious workings of our brains’ ancient mechanisms.” Thus the very architecture of our brains makes it possible for us to behave hypocritically—to believe one set of things while acting on another. We can know and remain ignorant, as long as it’s in separate parts of the brain.

Self-discretion is perhaps the most important and subtle mind game that we play with ourselves in the service of manipulating others. This is our mental habit of giving less psychological prominence to potentially damaging information. It differs from the most blatant forms of self-deception, in which we actively lie to ourselves (and believe our own lies). It also differs from strategic ignorance, in which we try our best not to learn potentially dangerous information. Information is sensitive in part because it can threaten our self-image and therefore our social image. So the rest of the brain conspires—whispers—to keep such information from becoming too prominent, especially in consciousness. When we spend more time and attention dwelling on positive, self-flattering information, and less time and attention dwelling on shameful information, that’s self-discretion …

Now think about the time you mistreated your significant other, or when you were caught stealing as a child, or when you botched a big presentation at work. Feel the pang of shame? That’s your brain telling you not to dwell on that particular information. Flinch away, hide from it, pretend it’s not there. Punish those neural pathways, so the information stays as discreet as possible. Of all the things we might be self-deceived about, the most important are our own motives.
Hanson then explains the psychological research on self-deception:
In the 1960s and early 1970s, neuroscientists Roger Sperry and Michael Gazzaniga conducted some of the most profound research in the history of psychology … In order to understand their research, it helps to be familiar with two basic facts about the brain.

The first is that each hemisphere processes signals from the opposite side of the body. So the left hemisphere controls the right side of the body (the right arm, leg, hand, and everything else), while the right hemisphere controls the left side of the body. This is also true for signals from the ears—the left hemisphere processes sound from the right ear, and vice versa. With the eyes it’s a bit more complicated, but the upshot is that when a patient is looking straight ahead, everything to the right—in the right half of the visual field—is processed by the left hemisphere, and everything to the left is processed by the right hemisphere.

The second key fact is that, after a brain is split by a callosotomy, the two hemispheres can no longer share information with each other. In a normal (whole) brain, information flows smoothly back and forth between the hemispheres, but in a split-brain, each hemisphere becomes an island unto itself—almost like two separate people within a single skull.

Now, what Sperry and Gazzaniga did, in a variety of different experimental setups, was ask the right hemisphere to do something, but then ask the left hemisphere to explain it. In one setup, they flashed a split-brain patient two different pictures at the same time, one to each hemisphere. The left hemisphere, for example, saw a picture of a chicken while the right hemisphere saw a picture of a snowy field. The researchers then asked the patient to reach out with his left hand and point to a word that best matched the picture he had seen. Since the right hemisphere had seen the picture of the snowy field, the left hand pointed to a shovel—because a shovel goes nicely with snow. No surprises here.
But then the researchers asked the patient to explain why he had chosen the shovel. Explanations, and speech generally, are functions of the left hemisphere, and thus the researchers were putting the left hemisphere in an awkward position. The right hemisphere alone had seen the snowy field, and it was the right hemisphere’s unilateral decision to point to the shovel. The left hemisphere, meanwhile, had been left completely out of the loop, but was being asked to justify a decision it took no part in and wasn’t privy to. From the point of view of the left hemisphere, the only legitimate answer would have been, “I don’t know.” But that’s not the answer it gave. Instead, the left hemisphere said it had chosen the shovel because shovels are used for “cleaning out the chicken coop.” In other words, the left hemisphere, lacking a real reason to give, made up a reason on the spot. It pretended that it had acted on its own—that it had chosen the shovel because of the chicken picture. And it delivered this answer casually and matter-of-factly, fully expecting to be believed, because it had no idea it was making up a story. The left hemisphere, says Gazzaniga, “did not offer its suggestion in a guessing vein but rather as a statement of fact.”

In another setup, Sperry and Gazzaniga asked a patient—by way of his right hemisphere (left ear)—to stand up and walk toward the door. Once the patient was out of his chair, they then asked him, out loud, what he was doing, which required a response from his left hemisphere. Again this put the left hemisphere in an awkward position. Now, we know why the patient got out of his chair—because the researchers asked him to, via his right hemisphere. The patient’s left hemisphere, however, had no way of knowing this.
But instead of saying, “I don’t know why I stood up,” which would have been the only honest answer, it made up a reason and fobbed it off as the truth: “I wanted to go get a Coke.” … What these studies demonstrate is just how effortlessly the brain can rationalize its behavior.
In the end, we are our own press secretaries. As Hanson explains:
Humans rationalize about all sorts of things: beliefs, memories, statements of “fact” about the outside world. But few things seem as easy for us to rationalize as our own motives. When we make up stories about things outside our minds, we open ourselves up to fact-checking. People can argue with us: “Actually, that’s not what happened.” But when we make up stories about our own motives, it’s much harder for others to question us … [W]e have strong incentives to portray our motives in a flattering light, especially when they’re the subject of norm enforcement …

When we capitalize “Press Secretary,” we’re referring to the brain module responsible for explaining our actions, typically to third parties. The idea here is that there’s a structural similarity between what the interpreter module does for the brain and what a traditional press secretary does for a president or prime minister. Here’s [Jonathan] Haidt from The Righteous Mind:

If you want to see post hoc reasoning [i.e., rationalization] in action, just watch the press secretary of a president or prime minister take questions from reporters. No matter how bad the policy, the secretary will find some way to praise or defend it. Reporters then challenge assertions and bring up contradictory quotes from the politician, or even quotes straight from the press secretary on previous days. Sometimes you’ll hear an awkward pause as the secretary searches for the right words, but what you’ll never hear is: “Hey, that’s a great point! Maybe we should rethink this policy.” …

[M]any press secretaries excel at their jobs with remarkably little contact with the president. Crucially, however, when talking to the press, they don’t differentiate between answers based on privileged information and answers that are mere educated guesses.
They don’t say, “I think this is what the administration is doing.” They speak authoritatively—like the left hemisphere of the split-brain patient who declared, “I wanted to go get a Coke.” In fact, press secretaries actively exploit this ambiguity, hoping their educated guesses will be taken for matters of fact. Their job is to give explanations that are sometimes genuine and sometimes counterfeit, and to make it all but impossible for their audiences to tell the difference. This is what makes the role of press secretary so hazardous—epistemically if not also morally. It’s structured to deliver counterfeit explanations, but also to make those explanations hard to detect, which is as close as you can get without actually lying.

Press secretaries and public relations teams exist in the world because they’re incredibly useful to the organizations that employ them … [O]ur brains respond to the same incentives by developing a module analogous to a president’s press secretary. Above all, it’s the job of our brain’s Press Secretary to avoid acknowledging our darker motives—to tiptoe around the elephant in the brain. Just as a president’s press secretary should never acknowledge that the president is pursuing a policy in order to get reelected or to appease his financial backers, our brain’s Press Secretary will be reluctant to admit that we’re doing things for purely personal gain, especially when that gain may come at the expense of others …

[T]he conclusion from the past 40 years of social psychology is that the self acts less like an autocrat and more like a press secretary. In many ways, its job—our job—isn’t to make decisions, but simply to defend them. “You are not the king of your brain,” says Steven Kaas.
“You are the creepy guy standing next to the king going, ‘A most judicious choice, sire.’” …

In one classic study, researchers sent subjects home with boxes of three “different” laundry detergents, and asked them to evaluate which worked best on delicate clothes. All three detergents were identical, though the subjects had no idea. Crucially, however, the three boxes were different. One was a plain yellow, another blue, and the third was blue with “splashes of yellow.” In their evaluations, subjects expressed concerns about the first two detergents and showed a distinct preference for the third. They said that the detergent in the yellow box was “too strong” and that it ruined their clothes. The detergent in the blue box, meanwhile, left their clothes looking dirty. The detergent in the third box (blue with yellow splashes), however, had a “fine” and “wonderful” effect on their delicate clothes.

Here again, as in the split-brain experiments, we (third parties with privileged information) know what’s really going on. The subjects simply preferred the blue-and-yellow box. But because they were asked to evaluate the detergents, and because they thought the detergents were actually different, their Press Secretaries were tricked into making up counterfeit explanations.

Analogous studies involving other products, like wine and pantyhose, have found similar results. The experimental deception in all these studies is the same: An identical product is presented as many “different” products in order to measure how suggestible people are to packaging, presentation, brand, and other framing effects. In each case, the Press Secretary makes up reasons it thinks are legitimate: “Oh, this wine is a lot sweeter,” or “These pantyhose are so smooth.” But since the products are identical, we know the reasons must be rationalizations.
In the next essay in this series, we’ll explore deceptive conversations between institutions and the people they say they serve.