When I was in college, before the internet, I majored in political philosophy, which focuses on questions like “What is fairness?” and “What is a fair system for distributing things, or for punishing people?” Professors would require the class to read long stretches of sustained argument by people like John Rawls and Robert Nozick, and to critique those arguments with comparably compelling reasoning. To get other perspectives on the issues discussed, I would go to the library, find the assigned books on the shelves, and look at the books around them. Next to them were often dozens of critiques of those books, written from a variety of perspectives, in their own sustained book form. I would read several of those critiques along with the assigned reading and try to reason through the issues raised as best I could, having first been exposed to many different ways of analyzing them.
Would I have had the advantage of that learning process if I were going to college today? Or would I instead read the assigned text, consult whatever prior convictions I had on the subject, and go straight to the internet to search specifically for arguments that reinforced my preexisting views, then use the arguments I found to defend those views? And what difference would that have made to my ability to avoid bias in formulating my views on questions of justice and much else going forward?
Keith Stanovich has written an excellent book on people’s tendency to reinforce their own biases when assessing questions of the day -- The Bias That Divides Us: The Science and Politics of Myside Thinking -- and that book will serve to frame this series of essays.
Stanovich writes, “What our society is really suffering from is myside bias: we evaluate evidence, generate evidence, and test hypotheses in a manner biased toward our own prior beliefs, opinions, and attitudes.” The evidence he presents for this proposition is a compelling call for all of us to reevaluate the origins of whatever preexisting convictions we have, and to do our best to make sure that our convictions are the result of careful, contextual reasoning -- not preexisting biases we adopted from social adaptations unrelated to substantive arguments.
Stanovich writes that
myside bias occurs when we search for and interpret evidence in a manner that tends to favor the hypothesis we want to be true … I am calling “myside bias” the bias that occurs when we evaluate evidence, generate evidence, and test hypotheses in a manner favorable toward our prior opinions and attitudes -- where the attitudes in question are convictions (that is, distal beliefs [beliefs unconnected to logic or argument] and worldviews to which we show emotional commitment and ego preoccupation).
While many of us may have a common-sense understanding of what Stanovich calls “myside bias,” he points out that “the phenomenon of myside bias has been extensively studied in cognitive science,” and it’s fascinating to see the evidence for the extent to which myside reasoning infects people’s thinking.
Stanovich describes the classic sort of studies designed to determine the extent of myside reasoning, writing:
Dan Kahan, David Hoffman, and colleagues (2012) were able to replicate [earlier findings] with thoroughly modern controls by showing their subjects a film of a protest that had taken place in Cambridge, Massachusetts, in 2009. From the film, it is impossible to tell who the protesters are or what the protest was about -- it occurs outside an unidentified building. All that is really discernible is that there is a clash between the protesters and the police in front of the building. The subjects were told that the protesters had been ordered to disperse by the police and were suing the police for doing so. Unlike the Hastorf and Cantril 1954 study, the Kahan, Hoffman, and colleagues 2012 study contained an experimental manipulation: half of the subjects were told that the demonstrators were protesting against the availability of abortion in a reproductive health care center and the other half were told that the protesters were demonstrating outside a military recruiting center against the military’s then-existing ban on service by openly gay soldiers. Kahan, Hoffman, and colleagues (2012) also used assessments of a variety of multidimensional political attitudes, making it possible to assess whether subjects with conservative social attitudes and those with liberal social attitudes assessed the very same protest differently depending on the target of the protest. In fact, the labeling of the protest made an enormous difference in how the two groups of subjects interpreted the clash between the police and the protesters. In the abortion clinic condition, 70 percent of the socially conservative subjects thought that the police had violated the demonstrators’ rights, but only 28 percent of the socially liberal subjects thought that they had. The pattern of responses was completely reversed in the condition that was described to the subjects as a protest against restrictions on gay members serving in the military; only 16 percent of the socially conservative subjects thought that the protesters’ rights were being violated, whereas 76 percent of the socially liberal subjects thought that they were.
Stanovich also describes many other studies showing the same dynamics, including the following:
Jarret Crawford, Sophie Kay, and Kristen Duke (2015) found that when liberal subjects were told that a military general had criticized the president of the United States, they were more likely to excuse the behavior of the general when the president was George W. Bush than when the president was Barack Obama. Conservative subjects, on the other hand, were more likely to excuse the behavior of the general when the president was Obama than when the president was Bush.
Other studies show that myside bias has cascading effects, such that an initial bias in favor of a particular conclusion can be associated with a biased discounting of all sorts of information that would otherwise caution against affirming the preferred conclusion:
Brittany Liu and Peter Ditto (2013) observed a similar self-serving trade-off when judging the morality of various actions, and they found it on both sides of the political spectrum. Subjects were asked about the moral acceptability of four different actions: the forceful interrogation of terrorist suspects, capital punishment, condom promotion in sex education, and embryonic stem cell research (the first two actions being more often seen as morally acceptable by politically conservative subjects than by politically liberal subjects and the second two being more often seen as morally acceptable by politically liberal subjects than by politically conservative ones). They were asked about the morality of each action in itself, whether it was immoral even if effective in fulfilling its intended purpose. The subjects were also asked about the perceived likelihood of beneficial consequences of the action (e.g., whether forceful interrogation produces valid intelligence; whether promoting condom use helps reduce teen pregnancy and sexually transmitted diseases). For each of the four actions, the more strongly subjects believed that the action was immoral even if it had beneficial consequences, the less strongly they believed it would actually have beneficial consequences. Thus the more strongly subjects endorsed the belief that promoting the use of condoms was morally wrong even if it helped prevent pregnancy and sexually transmitted diseases, the less likely they were to believe that condoms actually were effective at preventing these problems. Thus, too, the more strongly subjects endorsed the belief that the forceful interrogation of terrorist suspects was morally wrong even if it yielded valid intelligence, the less likely they were to believe that it actually did yield valid intelligence. In short, the subjects tended to de-emphasize the costs of their moral commitments, just as the subjects in the Finucane and colleagues 2000 study tended to minimize the risks of activities they approved of.
Myside bias also likely has evolutionary roots. As Stanovich writes:
We humans are programmed to try to convince others with arguments, not to use arguments to ferret out the truth. Like Levinson (1995) and the other theorists mentioned earlier, Mercier and Sperber (2011, 2017) see our reasoning abilities as arising from our need not to solve problems in the natural world but to persuade others in the social world. As Daniel Dennett (2017, 220) puts it: “Our skills were honed for taking sides, persuading others in debate, not necessarily getting things right.”
When we add the intellectual laziness involved in myside thinking, our tendency to favor arguments over truth can lead us to what Stanovich calls “islands of false beliefs”:
Imagine two scientists, A and B, working in domain X. Most of the hypotheses in domain X held by scientist A are true, whereas most of the hypotheses in domain X held by scientist B are false. Imagine that they both then begin to project their prior beliefs onto the same new evidence in the way demonstrated experimentally by Koehler (1993) -- with stronger tendencies to discount the evidence when it contradicts their prior beliefs. It is clear that scientist A -- who already exceeds B in the number of true beliefs -- will increase that advantage as new evidence comes in … The knowledge projection tendency, efficacious in domains where most of a reasoner’s prior beliefs are true, may have the effect of isolating certain individuals on “islands of false beliefs” from which -- because of the knowledge projection tendency -- they are unable to escape … Thus knowledge projection, which in domains where most of the reasoner’s prior beliefs are true, might lead to more rapid acquisition of new true beliefs, may be a trap in a minority of cases where a reasoner, in effect, keeps reaching into a bag of beliefs that are largely false, using these beliefs to structure the reasoner’s evaluation of evidence, and hence more quickly adding incorrect beliefs to the bag for further projection. Knowledge projection from an island of false beliefs might explain the phenomenon of otherwise intelligent people who get caught in a domain-specific web of falsity from which they cannot escape.
In the next essay in this series, we’ll explore some practical reasons people have for indulging biased thinking, and how our tribal nature can make things worse.
Links to all essays in this series: Part 1; Part 2; Part 3; Part 4; Part 5.