In these essays we’re exploring Keith Stanovich’s book, The Bias That Divides Us: The Science and Politics of Myside Thinking, in which he discusses “myside bias,” namely our tendency to “evaluate evidence, generate evidence, and test hypotheses in a manner biased toward our own prior beliefs, opinions, and attitudes.”
Stanovich asks us to really think about where our convictions come from. As he writes:
We feel … that [our] beliefs are something we choose to acquire, just like the rest of our possessions. In short, we tend to assume: (1) that we exercised agency in acquiring our beliefs; and (2) that they serve our interests. Under these assumptions, it seems to make sense to have a blanket policy of defending our beliefs.
But then he asks “What if you don’t own your beliefs, but instead, they own you?”
Most of us are familiar with what are called internet “memes,” namely images or ideas that spread quickly across the internet. But the concept of a “meme” as originally conceived is much broader than that. As Stanovich explains:
Cultural replicator theory and the field of memetics have helped us to explore precisely this question [regarding the origins of our own convictions]. The term “cultural replicator” refers to an element of a culture that may be passed on by nongenetic means. An alternative term for a cultural replicator—“meme”—was introduced by Richard Dawkins in his famous 1976 book The Selfish Gene … [B]y its analogy to the term gene, meme invites us to use the insights of universal Darwinism to understand belief acquisition and change … The fundamental insight triggered by the meme concept is that a belief may spread without necessarily being true or helping the person who holds the belief. Consider a chain letter with this message: “If you do not pass this message on to five people, you will experience misfortune.” This is an example of a meme—an idea unit. It is the instruction for a behavior that can be copied and stored in brains. It has been a reasonably successful meme in that it replicates a lot. Yet there are two remarkable things about this successful meme: it is neither true nor helpful to the person carrying it. Yet the meme survives. It survives because of its own self-replicating properties (the essential logic of this meme is that it does nothing more than say, “Copy me—or else”).
As I’ve written previously, the most popular “woke” texts are promoted on social media in similar ways: “Today, social media posts inviting people to read Kendi’s and DiAngelo’s [bestselling] books contain a similar invitation to repost. If the recipient doesn’t forward it along to friends, they risk looking ‘racist,’ or they lose the opportunity to signal their own virtue, penalties or benefits that are entirely independent of facts or data.”
We’ve come a long way from the experience I described in the first essay in this series, in which I would enter the college library and take out not just the book I was assigned, but also the books shelved around it, which contained critiques of its main points. The much greater information available on the internet today is a wonderful thing, but it also allows the unmindful to simply select from a limitless menu of items drawn solely from the comfort food of their preconceived convictions. As Stanovich writes, “Thus the more evidence there is, of all types—good and bad, on this side and on that—the easier it is to select from it with a myside bias. Both the increasing complexity and the increasing quantity of social exchange that occurs over the Internet make verifying what is actually going on in the world much more difficult.”
Indeed, the largest social media companies today base their entire business models on creating algorithms that direct people away from what might unsettle them, and toward a buffet composed only of their favorite comfort foods for thought:
Memes that are easily assimilated and that reinforce previously resident memeplexes are taken on with great ease. Both social and traditional media have exploited this logic, with profound implications. We are now bombarded with information delivered up by algorithms specifically constructed to present congenial memes that are easily assimilated (Lanier 2018; Levy 2020; Pariser 2011). All of the congenial memes we collect then cohere into ideologies that tend to turn simple testable beliefs into convictions.
As historian Niall Ferguson reminds us, “not everything that is published adds to the sum of human knowledge. Much of what came off the printing presses in the sixteenth and seventeenth centuries was distinctly destructive, like the twenty-nine editions of Malleus Maleficarum that appeared between 1487 and 1669, legitimizing the persecution of witches, a pan-European mania that killed between 12,000 and 45,000 people, mostly women.”
That junk food for thought spoon-fed to us by social media companies includes what Stanovich calls “junk memes”:
If a meme can get preserved and passed on without helping its human host, it will do so (think of the chain letter example). Memetic theory leads us to a new type of question: How many of our beliefs are “junk beliefs”—serving to propagate themselves but not serving us? The principles of scientific inference and rational thought serve essentially as meme evaluation devices that help us determine which of our beliefs are true and therefore probably of use to us. Scientific principles such as falsifiability are immensely useful in identifying possible “junk memes”—those which, in replicating, are really not serving our ends but merely serving their own. Think about it. You’ll never find evidence that refutes an unfalsifiable meme. Thus you’ll never have an evidence-based reason to give up such a belief. Yet an unfalsifiable meme really says nothing about the nature of the world (because it admits to no testable predictions) and thus may not be serving our ends by helping us track the world as it is. Such beliefs are quite possibly “junk memes”—unlikely to be shed even though they do little if anything for us who hold them. Memes that have not passed any reflective tests (falsifiability, consistency, and so on) are more likely to be memes that are serving their own interests only—that is, ideas that we believe only because they have properties that allow them to easily acquire us as hosts.
And sure enough, the author of the bestselling book How to Be an Antiracist, Ibram X. Kendi, has said “When I see racial disparities, I see racism.” That statement, in its absoluteness, is unfalsifiable on its face, yet it rests on an essential falsehood that has become especially popular among younger people, who, by virtue of their limited age, have less experience with the world and a greater proclivity to bow to the peer pressures around them. Stanovich cites this “disparity fallacy” as a particularly good example of how far afield from data myside bias can take us, writing:
Take, for example, the case of the “disparity fallacy” (Clark and Winegard 2020; Hughes 2018; Sowell 2019), the notion that any difference in an outcome variable viewed as unfavorable to one of the victim groups of identity politics must be due to discrimination. The fallacy is commonly advanced in the general media (indeed, in recent years, the New York Times has seemed to be one of its keenest promoters) and in political discussions. Because, in our current information-rich environment, it is quite easy to find a disparity that makes your group look like a victim, the disparity fallacy has become a major source of myside bias. Universities could help reduce the number of mysided arguments that are fueled by this fallacy. In their psychology, sociology, political science, and economics departments, they have all the tools (regression analysis, causal analysis, detection of confounds) they need to test whether disparities can be explained by variables other than discrimination. But instead of aggressively deploying these tools to curtail the spread of the fallacy, universities all too often become its purveyors. This, of course, is especially true in the proliferating “grievance studies” departments, but it is also true even in legitimate departments, such as psychology and sociology, which should know better.
The disparity fallacy is just one of the many memes that younger people acquire early in life. As Stanovich writes:
We also need to be more skeptical of the memes that we acquired in our early lives—those which were passed on by our parents, relatives, and our peers. The longevity of these early acquired memes is likely to be the result of their having avoided consciously selective tests of their usefulness. They were not subjected to selective tests because we acquired them during a time when we lacked the ability to reflect … In some cases … when people project prior beliefs that have been arrived at by properly accommodating previous evidence, then some degree of projecting the prior probability—a local myside bias—onto new evidence is normatively justified. When, however, we lack previous evidence on the issue in question, we should use the principle of indifference and set our prior probability at .50, where it will not influence our evaluation of the new evidence. Instead, what most of us [and, I would say, especially overconfident younger people] tend to do in this situation is assess how the proposition in question relates to some distal belief of ours, such as our ideology, set [our prior probability at more than] .50, and then project this prior probability onto our evaluation of the new evidence. This is how our society ends up with political partisans on both sides of any issue seemingly unable to agree on the facts of that issue … All of the memes that currently exist have, through memetic evolution, displayed the highest fecundity, longevity, and copying fidelity—the defining characteristics of successful replicators. Memetic theory has profound effects on our reasoning about beliefs because it inverts the way we think about them. Social and personality psychologists traditionally tend to ask what it is about particular individuals that leads them to have certain beliefs. The causal model is one where a person determines what beliefs to have. 
Memetic theory asks instead, “What is it about certain memes that leads them to collect many ‘hosts’ for themselves?” Thus the question is not “How do people acquire beliefs?” but “How do beliefs acquire people?” One commonsense view of why belief X spreads is the notion that belief X spreads simply “Because it is true.” This notion, however, has trouble accounting for beliefs that are true but not popular, and for beliefs that are popular but not true. Memetic theory provides us with another reason why beliefs spread: Belief X spreads among people because it is a good replicator—it is good at acquiring hosts. Memetic theory focuses us on the properties of beliefs as replicators rather than the qualities of people acquiring the beliefs. This is the single distinctive function served by the meme concept, and it is a profound one.
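Stanovich's point about projecting a conviction-derived prior rather than the indifferent .50 can be made concrete with a small Bayesian sketch. The numbers below are my own illustration, not Stanovich's: two people see the same evidence, but one starts from the principle of indifference (prior = .50) while the other projects a conviction (prior = .90).

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: probability the belief is true given the evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical evidence that is twice as likely if the belief is FALSE
# as if it is true -- i.e., evidence that counts against the belief.
likelihood_if_true, likelihood_if_false = 0.3, 0.6

indifferent = posterior(0.50, likelihood_if_true, likelihood_if_false)
conviction = posterior(0.90, likelihood_if_true, likelihood_if_false)

print(round(indifferent, 3))  # 0.333: the indifferent reasoner moves with the evidence
print(round(conviction, 3))   # 0.818: the conviction survives nearly intact
```

With the neutral prior, the unfavorable evidence pulls belief down to about one in three; with the conviction-based prior, the same evidence barely dents it. This is the "local myside bias" Stanovich describes: the projected prior, not the evidence, does most of the work.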
When our evolutionary tendency toward myside bias meets memes designed to latch onto that very tendency, we have a perfect viral storm in which we see “myside bias as a strategic mechanism that makes belief change difficult in order to preserve existing memes and to realize that we live in a ‘memosphere’ in which there is widespread hostility to examining beliefs.” As Stanovich writes:
Educational theorists in critical thinking have bemoaned for decades the difficulty of inculcating the critical thinking skills of detachment, neutral evaluation of belief, perspective switching, decontextualizing, and skepticism toward current opinions. Critical thinking studies are virtually unanimous in showing how hard it is for people to examine evidence from standpoints not guaranteed to reinforce their existing beliefs. In short, the memes that currently reside in our brains seem singularly unenthusiastic about sharing precious brain space with other memes that might want to take up residence and potentially displace them. That most of us share the trait of hostility to new memes does prompt some troubling thoughts. If most of our memes are serving us well, why wouldn’t they want to submit themselves to selective tests that competitor memes, especially those which contradict them, would surely fail?
We don’t tend to submit the memes that reinforce our prior convictions to scrutiny because to do so goes against the natural pull of myside thinking. But as Stanovich writes:
Treating distal beliefs as possessions [that are used to show status via virtue signaling, for example] encourages the worst type of myside bias—projecting prior beliefs that are not based on evidence, but instead are extrapolations from untestable convictions. To avoid myside bias, we need to distance ourselves from our convictions, and, to do so, it may help to think of our beliefs as memes that may well have interests of their own [and not necessarily share ours].
The key here is to recognize the distinct possibility that our convictions are not necessarily, or even often, arrived at through a pure process of abstract reflection. As Stanovich writes, there is “a much more important and common situation: where a meme is not acquired in a reflective manner, yet is still functional in one way or another … This is the cultural parallel to my argument that if a belief feels right to us or if it seems to be a functional tool in the achievement of our ends, it is a mistake to think that we must have consciously adopted the belief through the use of reflection and rational thought.”
If our convictions may not come from reflective thought, where do they come from? Stanovich cites the work of Jonathan Haidt:
Jonathan Haidt (2012, 26) argues: “If morality doesn’t come primarily from reasoning, then that leaves some combination of innateness and social learning as the most likely candidates … I’ll try to explain how morality can be innate (as a set of evolved intuitions) and learned (as children learn to apply those intuitions within a particular culture).” The model that Haidt (2012) invokes to explain the development of morality is easily applied to the case of myside bias. Myside-causing convictions often come from political ideologies: sets of beliefs about the proper order of society and how it can be achieved. Increasingly, theorists are modeling the development of political ideologies using the same model of innate propensities and social learning that Haidt (2012) applied to the development of morality (see Van Bavel and Pereira 2018).
Childhood environments and institutions play a big role:
Values and worldviews develop throughout early childhood, and the beliefs to which we as children are exposed are significantly controlled by parents, neighbors, and friends, and by institutions like schools (Harris 1995; Iyengar, Konitzer, and Tedin 2018; Jennings, Stoker, and Bowers 2009). Some of the memes to which a child is exposed are quickly acquired because they match the innate propensities already discussed. Others are acquired, perhaps more slowly, whether or not they match innate propensities, because they are repeated by cherished relatives and valued friends. They are often beliefs held by groups that the child values. That is, although the “side” in the term “myside bias” is indeed the side “my” conviction is on, that conviction often has more to do with group belonging than it does with personal reflection … That our ideological beliefs are largely acquired unreflectively is consistent with research showing that it is difficult to correct political misinformation by providing correct information … The general idea that distal beliefs are unreflective has been articulated by a wide range of scholars, from historians (“Most of our views are shaped by communal groupthink rather than individual rationality, and we hold onto these views due to group loyalty,” Harari 2018, 223) to cognitive scientists (“Reasoning is generally motivated in the service of transmitting beliefs acquired from citizens’ communities of belief …,” Sloman and Rabb 2019, 11). Despite the ubiquity of this view in a variety of behavioral disciplines, the layperson’s notion of where convictions come from is still quite different. Most of us still prefer to think that we have thought our way to our deepest convictions.
How can we start to disabuse ourselves of the conceit that our convictions are spontaneously generated in a purely rational process? As I’ve written previously, we should apply the Socratic method more often in our own thinking, and in how thinking is taught in schools. Beyond that, and even more simply (if perhaps more difficult in execution), Stanovich writes:
it would help us to at least mitigate if not escape this commons dilemma by weakening our convictions just a little bit—making it at least a little less likely that we will use a conviction rather than an evidence-based, testable belief to formulate a prior probability … Thus, if each of us could become a bit more skeptical of our convictions (distal beliefs)—to avoid turning them into possessions—it might also help us to avoid projecting our convictions inappropriately … [W]e need to talk in a more depersonalized manner about our beliefs.
But just at the time we might realize that “depersonalizing” our beliefs is necessary to help us all arrive at more universal truths, an essentially personalizing intellectual movement has come to dominate college campuses, one that Stanovich calls “common enemy identity politics.” That will be the subject of the next essay in this series.
Links to all essays in this series: Part 1; Part 2; Part 3; Part 4; Part 5.