Risk Assessment and Its Politicization – Part 2
When risk assessment is tainted by political polarization.
In the early eighteenth century, Professor of Physics and Philosophy Daniel Bernoulli wrote that although the facts regarding risk are the same for everyone, each person’s valuation of those risks “is dependent on the particular circumstances of the person making the estimate ... There is no reason to assume that ... the risks anticipated by each [individual] must be deemed equal in value.” Bernoulli’s basic insight was that human beings with emotions aren’t designed to be pure statistical calculators when it comes to assessing risk; rather, individual people bring to bear a whole variety of influences other than facts and logic when gauging various risks.
That observation remains especially true today in a social media-driven and politically polarized age, when people’s political orientations can dramatically skew their emphasis toward some (smaller) risks and away from other (larger) ones.
Take school shootings, for example. As Harvard instructor and risk analyst David Ropeik wrote in the Washington Post in 2018:
The chance of a child being shot and killed in a public school is extraordinarily low. Not zero — no risk is. But it’s far lower than many people assume, especially in the glare of heart-wrenching news coverage after an event like Parkland. And it’s far lower than almost any other mortality risk a kid faces, including traveling to and from school, catching a potentially deadly disease while in school or suffering a life-threatening injury playing interscholastic sports.
Yet following a 2022 school shooting in Uvalde, Texas, a local school board member, who works for the American Federation of Teachers union, wrote the following:
I am personally feeling a roller coaster of emotions -- devastation, despair, anger and helplessness. There have been times in the past when I felt numb to the latest tragedy and could power through work, and I fear for the day when I am numb again. This is not normal. It’s not OK.
I’d agree that it’s “not OK.” But it’s not OK because one shouldn’t feel immobilized by what are very low risks. The school system this board member represents had itself sent out resources to help parents discuss violent events in the news with their children, including a document from the National Association of School Psychologists whose number one recommendation is: “Emphasize that schools are very safe.”
Regarding the risks of dying by being murdered generally, as Vaclav Smil writes in How the World Really Works: The Science Behind How We Got Here and Where We’re Going:
A more insightful metric then is to use the time during which people are affected by a given risk as the common denominator, and do the comparisons in terms of fatalities per person per hour of exposure—that is, the time when an individual is subject, involuntarily or voluntarily, to a specific risk … [E]ven in the violence-prone United States the risk of homicide has recently been just 7 × 10⁻⁹ per hour of exposure, half the risk of death attributable to falls (1.4 × 10⁻⁸). But, as already noted, the frequency of the latter kind of accidental death is highly skewed, with people over 85 years of age having a risk of 3 × 10⁻⁷ compared to just 9 × 10⁻¹⁰ for people 25–34 years old … Such odds are sufficiently low not to preoccupy an average citizen of any affluent country.
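Smil's per-person-hour metric is simple to reproduce. Here is a minimal sketch in Python; the roughly 20,000 annual US homicides and 330 million population are illustrative round numbers of my own choosing (not Smil's exact inputs), and homicide is treated as a risk everyone bears around the clock:

```python
# Sketch of Smil's fatalities-per-person-hour-of-exposure metric.
# Inputs are illustrative round numbers (assumptions, not Smil's data).

homicides_per_year = 20_000        # approximate annual US homicides
population = 330_000_000           # approximate US population
hours_per_year = 365 * 24          # 8,760 hours of exposure per person

# Everyone is exposed to homicide risk all the time, so aggregate
# exposure is simply population times hours in a year.
exposure_hours = population * hours_per_year

risk_per_hour = homicides_per_year / exposure_hours
print(f"homicide risk per person-hour: {risk_per_hour:.1e}")  # ~7e-09
```

The division lands at Smil's ~7 × 10⁻⁹ per hour; the same kind of division, with narrower definitions of the exposed population and exposure time, yields his falls and age-bracket figures.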
Regarding risks generally, Smil writes:
[T]he finality of dying provides a universal, ultimate, and incontestably quantifiable numerator that can be used for comparative risk assessment. The simplest and most obvious way to make some revealing comparisons is to use a standard denominator and to compare annual frequencies of causes of death per 100,000 people. When using the US statistics (the latest published detailed breakdown is for 2017) this leads to some surprising outcomes … Accidental falls kill almost as many people as the dreaded pancreatic cancer with its short post-diagnosis survival (11.2 vs. 13.5). Motor vehicle accidents take twice as many lives (and, moreover, much younger ones) than does diabetes (52.2 vs. 25.7), and accidental poisoning and noxious substances exact a higher death toll than does breast cancer (19.9 vs. 13.1).
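Smil's standard-denominator comparison is just deaths divided by population, scaled to 100,000. A small sketch, where the 325 million population and the raw death count are my illustrative assumptions back-solved from Smil's 2017 rates, not official CDC tallies:

```python
# Annual deaths per 100,000 people: the standard denominator Smil uses
# for comparative risk assessment.

population = 325_000_000  # assumed US population circa 2017 (illustration)

def rate_per_100k(annual_deaths: int) -> float:
    """Convert a raw annual death count into deaths per 100,000 people."""
    return annual_deaths / population * 100_000

# e.g. ~36,400 accidental-fall deaths corresponds to Smil's 11.2 per 100,000
print(round(rate_per_100k(36_400), 1))  # 11.2
```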
The risks of driving in particular are generally much higher than other much-discussed risks (yet rarely register as a counterpoint in the discussion of risks tainted by politics). As Smil writes:
[M]ost people have no problem engaging daily and repeatedly in activities that temporarily increase their risk by significant margins: hundreds of millions of people drive every day (and many apparently like to do so) … Car accidents (with fatalities now in excess of 1.2 million a year) … [S]ocieties tolerate the global toll exceeding 1.2 million deaths a year, something they would never assent to if it were to take the form of recurrent accidents in industrial plants or collapsed structures (bridges, buildings) in or near large cities, even if the combined annual death toll of such disasters was an order of magnitude smaller—“just” in the hundreds of thousands of fatalities … [W]e must start by accurately counting the number of deaths and then deploying necessary assumptions in order to define affected populations and their aggregate time of exposure to a given risk. For driving it is obviously time spent behind the wheel (or as a passenger). For the US we have totals of distances traveled every year by all motor vehicles and by passenger cars (a recent grand total has been about 5.2 trillion kilometers annually) and, after declining for many years, traffic fatalities have gone up slightly to about 40,000 a year … [W]ith 40,000 fatalities this translates exactly to 5 × 10⁻⁷ (0.0000005) fatalities per hour of exposure.
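Smil's driving figure can be reconstructed in a few lines. The distance and fatality totals are the ones he cites; the 65 km/h average speed is my assumption, chosen only to turn kilometers traveled into hours of exposure:

```python
# Reconstructing the driving-risk-per-hour figure from Smil's totals.
# The average speed is an assumption used to convert distance to time.

km_per_year = 5.2e12       # total annual US motor-vehicle distance (Smil)
fatalities = 40_000        # annual US traffic deaths (Smil)
avg_speed_kmh = 65         # assumed fleet-average speed

# Aggregate hours of exposure: distance driven divided by average speed.
exposure_hours = km_per_year / avg_speed_kmh   # = 8e10 hours

risk_per_hour = fatalities / exposure_hours
print(f"driving risk per hour of exposure: {risk_per_hour:.1e}")  # 5.0e-07
```

Note that 5 × 10⁻⁷ per hour is roughly seventy times the 7 × 10⁻⁹ per-hour homicide risk Smil cites, which is the counterpoint the surrounding text is making.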
Even when presented with a variety of contextual statistics regarding a specific issue, studies show that political partisans (on both sides) selectively ignore some relevant statistics and focus only on others in order to validate their prior political policy preferences. Keith Stanovich, in his book “The Bias That Divides Us: The Science and Politics of Myside Thinking,” describes the following study (related to guns):
The subjects were given a definition of “assault weapon” [and] they were told that a comprehensive bill banning a number of assault weapons had been introduced in Congress, and they were given the following statistics (presented as frequencies …), based on current and historical data [as follows]:
(S = mass shooting; A = assault weapon):
p(S): In the last few years, 6 out of 100 million American adults committed a mass shooting.
p(A): In the last few years, 12 million out of 100 million American adults owned an assault weapon.
p(A|S): Out of 6 American adults who committed a mass shooting, 4 owned an assault weapon.
p(S|A): Out of 12 million American adults who owned an assault weapon, 4 committed a mass shooting.
They were then asked which one of these statistics was most important to them personally in deciding whether to support or oppose the assault weapons ban … Clearly, the hit rate, p(A|S), of 4 out of 6 (67 percent) seemed to support the assault weapons ban more than the inverse conditional probability, p(S|A) of 4 out of 12 million (0.000003 percent). Subjects who had supported the assault weapons ban overwhelmingly chose p(A|S) as the most important … The Van Boven and colleagues 2019 experiment provides a particularly good demonstration of how people pick and choose which statistic they view to be most important based on which is most consistent with their prior opinion on the issue at hand.
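The gap between the two conditional probabilities is easy to verify directly from the quoted frequencies; the point is that both numbers are accurate, and partisans simply choose which one to weight:

```python
# The four frequencies from the Van Boven et al. (2019) study, and the
# two conditional probabilities partisans selectively emphasized.

shooters = 6                  # mass shooters among 100 million adults
owners = 12_000_000           # assault-weapon owners among the same adults
shooters_with_weapon = 4      # overlap of the two groups

p_A_given_S = shooters_with_weapon / shooters  # "hit rate": 4 of 6
p_S_given_A = shooters_with_weapon / owners    # inverse: 4 of 12 million

print(f"p(A|S) = {p_A_given_S:.0%}")   # 67%
print(f"p(S|A) = {p_S_given_A:.1e}")   # 3.3e-07
```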
Incidentally, as Stanovich also writes, large numbers that appear in the presentation of outcomes regarding a particular subject alone can confuse people (regardless of bias) in the assessment of risk. Stanovich describes the following common error in rational thinking:
Subjects’ interpretation of covariation data relevant to an experimental outcome can be distorted by their prior hypotheses about the nature of the relationship (Stanovich and West 1998b). In a typical, purely numeric covariation detection experiment … subjects are shown data from an experiment examining the relation between a treatment and patient response. They might be told, for instance, that:
200 people were given the treatment and improved
75 people were given the treatment and did not improve
50 people were not given the treatment and improved
15 people were not given the treatment and did not improve.
In covariation detection experiments, subjects are asked to indicate whether the treatment was effective. The example presented here represents a difficult problem that many people get wrong, believing that the treatment in this example is effective. Subjects focus, first, on the large number of cases (200) in which improvement followed the treatment and, second, on the fact that, of the people who received treatment, many more showed improvement (200) than showed no improvement (75). Because this probability (200/275 = .727) seems high, subjects are enticed into thinking that the treatment works. This is an error of rational thinking. Such an approach ignores the probability of improvement where treatment was not given. Since this probability is even higher (50/65 = .769), the particular treatment tested in this experiment can be judged to be completely ineffective. The tendency to ignore the outcomes in the no-treatment condition and focus on the large number of people in the treatment-improvement group entices many people into viewing the treatment as effective.
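The trap in the covariation example becomes explicit once the improvement rate with treatment is set beside the rate without it, which a few lines of arithmetic make clear:

```python
# The 2x2 covariation-detection problem: judging a treatment requires
# comparing the improvement rate WITH treatment to the rate WITHOUT it.

treated_improved, treated_not = 200, 75
untreated_improved, untreated_not = 50, 15

p_improve_treated = treated_improved / (treated_improved + treated_not)        # 200/275
p_improve_untreated = untreated_improved / (untreated_improved + untreated_not)  # 50/65

print(f"improved with treatment:    {p_improve_treated:.3f}")    # 0.727
print(f"improved without treatment: {p_improve_untreated:.3f}")  # 0.769
# The no-treatment rate is slightly HIGHER, so the treatment shows no benefit,
# despite the seductively large 200 in the treatment-improved cell.
```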
More generally, when parents act on an exaggerated understanding of risks, they tend to model dysfunctional behavior to children. As Greg Lukianoff and Jonathan Haidt write in The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure:
Paranoid parenting and the cult of safetyism teach kids some of the specific cognitive distortions that we discussed [earlier] … We asked [Lenore] Skenazy which of the distortions she encounters most often in her work with parents. “Almost all of them,” she said … “Any upside to free, unsupervised time (joy, independence, problem-solving, resilience) is seen as trivial, compared to the infinite harm the child could suffer without you there. There is nothing positive but safety.” Parents also use negative filtering frequently, Skenazy says. “Parents are saying, ‘Look at all the foods/activities/words/people that could harm our kids!’ rather than ‘I’m so glad we’ve finally overcome diphtheria, polio, and famine!’” She also points out the ways that parents use dichotomous thinking: “If something isn’t 100% safe, it’s dangerous.” Paranoid parenting is a powerful way to teach kids all three of the Great Untruths. We convince children that the world is full of danger; evil lurks in the shadows, on the streets, and in public parks and restrooms. Kids raised in this way are emotionally prepared to embrace the Untruth of Us Versus Them: Life is a battle between good people and evil people—a worldview that makes them fear and suspect strangers.
A week later, the same school board member I mentioned earlier wrote another article titled “Appendicitis? No, My Child Is Scared to Go to School,” in which she wrote:
[T]hree days after the school shooting in Uvalde, Texas, I loaded my 6-year-old son into the car to take him to the ER. I was convinced that he might have appendicitis … As I turned around to look at my son ..., I asked him: “Is there anything going on at school? Are you scared of anything?” He immediately started crying -- and not from his stomach pain … [H]e really broke down and said he doesn't want to go to school anymore because he’s scared the bad guy in Texas will come and murder him and his classmates … I did my best to reassure him that he’s safe and the bad man in Texas can’t hurt him, while still thinking in the back of my head about so much work our country needs to take to protect our kids from gun violence … When I shared this story at work, one colleague told me her middle school son doesn’t want to go to the upcoming school dance and promotion ceremony. Why? Because so many people will be in one place, and they could get shot.
Lukianoff and Haidt also quote Kevin Ashworth, clinical director of the Anxiety Institute in Portland, Oregon, as saying “So many teens have lost the ability to tolerate distress and uncertainty, and a big reason for that is the way we parent them.”
This school board member continues in her article:
As I reread the tips from Turnaround for Children regarding the coronavirus, I realized that the entire tip sheet could have been written as a response to the tragedy in Uvalde … As I read Tip 2, I replaced any reference to coronavirus with “Uvalde,” or “school shooting.” Tip 2 was precisely the tip I recalled on Friday evening and what prompted me to ask my son the important questions.
“Tip 2: Initiate a conversation about the coronavirus [Uvalde]: Don’t wait for your kids to bring the subject of the coronavirus [the school shooting] up to you. Ask what your kids are feeling about the outbreak [school shooting] right now so you can respond to their concerns and their fears truthfully and assure them that you will create ongoing opportunities to talk and connect.”
That approach – to proactively flag to children even the smallest of risks – will likely lead to paranoia. As Lukianoff and Haidt write in The Coddling of the American Mind:
We teach children to monitor themselves for the degree to which they “feel unsafe” and then talk about how unsafe they feel. They may come to believe that feeling “unsafe” (the feeling of being uncomfortable or anxious) is a reliable sign that they are unsafe (the Untruth of Emotional Reasoning: Always trust your feelings). Finally, feeling these emotions is unpleasant; therefore, children may conclude, the feelings are dangerous in and of themselves -- stress will harm them if it doesn’t kill them (the Untruth of Fragility: What doesn’t kill you makes you weaker). If children develop the habit of thinking in these ways when they are young, they are likely to develop corresponding schemas that guide the way they interpret new situations in high school and college. They may see more danger in their environment and more hostile intent in the actions of others. They may be more likely than kids in previous generations to believe that they should flee or avoid anything that could be construed as even a minor threat. They may be more likely to interpret words, books, and ideas in terms of safety versus danger, or good versus evil, rather than using dimensions that would promote learning, such as true versus false, or fascinating versus uninteresting. While it is easy to see how this way of thinking, when brought to a college campus, could lead to requests for safe spaces, trigger warnings, microaggression training, and bias response teams, it is difficult to see how this way of thinking could produce well-educated, bold, and open-minded college graduates.
When both small and large risks, and everything in between, become objects of concern, it becomes easy for many people to ignore them all. As Aswath Damodaran says (at the 1:37:00 mark):
In the mid eighties, [Californians] passed a proposition, well intentioned, that any product with ingredients that could create cancer had to carry a label that the product could create cancer. But it was written so badly that it covers pretty much everything. If I go down the street from where I live, I go into the taco store, there’s a sign that says “Our tacos could cause you cancer.” I go into CVS, coming in through those doors could cause you cancer. And after a while you just stop reading it. To close this, actually, my neighbor has an eighteen-year-old son. He was standing outside the other day smoking a cigarette. So I said, “You’re smoking a cigarette? We’ve known for 40 years it causes cancer.” He says, “Everything causes cancer.” And this is exactly where we’re going to end up. If you label everything as cancer-causing, then how do you separate tobacco from tacos? And I think that’s a problem with disclosures: if everything is disclosed, then I have no idea what matters and what doesn’t, and that’s unfortunately where we’re headed.
And as Bryan Caplan writes:
The harsh reality is that safety lies on a continuum, and no one is ever perfectly safe doing anything. These self-evident truths sound bad -- and when the truth sounds bad, people lie. When the lies become ubiquitous enough, people start to sincerely believe absurdities. Absurdities like, “X is safe; do as much as you like. Y is unsafe; never do Y … Politicians, as usual, weaponize these absurdities. If they want to keep the schools closed, they just declare schools “unsafe.” If they want to open the schools, they just declare the schools “safe.” Either way, they pander to the emotionality of the masses -- and avoid math. Almost nothing sounds worse than math. As the demagogues who rule us are well-aware.
The school board member I mentioned previously concludes her article with these comments:
And tomorrow and this weekend, I will wear orange and stand in solidarity with my friends from Teachers Unify to End Gun Violence.
Now, it’s fine to wear orange to highlight your political risk priorities, but then don’t be surprised when your children pick up on that heightened sense of risk and act accordingly.
Still, for all her concern about the very low-risk event of school shootings, the same school board member (who works for an organization that opposes the presence of police officers in schools) asked no questions just a couple of months later after hearing a school safety presentation covering multiple safety issues over just two quarters in the very schools she’s charged with supervising: 28 assaults, 15 weapon possessions, and 11 incidents of sexual assault or sexual misconduct.
Did this school board member “Initiate a conversation about [school violence at their own schools]” with her own children? After all, the advice she cites states “Don’t wait for your kids to bring the subject of [school violence] up to you.” (I don’t know if she had that conversation or not. But, to my knowledge, she didn’t write any articles about it, as she did regarding a school shooting elsewhere.)
As I mentioned in the previous essay, in 1662 a group of monks observed that the probability of being struck by lightning is tiny, yet “many people … are excessively terrified when they hear thunder”; they went on to say, “Fear of harm ought to be proportional not merely to the gravity of the harm, but also to the probability of the event.” Much as we might hope people would be better judges of risk, Bernoulli reminds us that although the facts regarding risk are the same for everyone, each person’s valuation of those risks “is dependent on the particular circumstances of the person making the estimate.” One of those circumstances is a person’s politics, and a person’s politics is often a way of organizing ideas into a consistent narrative framed by the parameters of the political party or other organization to which they belong. Consistently applying extreme risk aversion would be wholly debilitating, so many people apply it only selectively, and in a politically biased way. As Vaclav Smil writes in How the World Really Works: The Science Behind How We Got Here and Where We’re Going, “Fatalistic people … underestimate [some] risks in order to avoid the effort required to analyze them and draw practical conclusions, and because they feel totally unable to cope with them.”
And so fatalistic people, in order to remain functional at all, must cherry-pick which risks to emphasize over others, and an easy way to do the picking is to defer to the political platform of one’s tribe of choice.
To that point, students at elite universities, who tend to be overwhelmingly one-sided in their political views, exhibit behavior at odds with rational risk assessment but in line with ideology. As Maxwell Meyer writes in the Stanford Review in an article titled “Review Analysis: Stanford students are more likely to wear masks on bicycles than helmets”:
Seemingly intelligent and well-rounded people (Stanford students, for example) have adopted bizarre, pointless habits to comport with new expectations about how to “stay safe” -- like wearing masks outdoors -- all while continuing in much more risky behaviors … I decided to attempt a measurement to quantify this phenomenon. On Wednesday, September 22nd, in the 1:00 pm hour, I observed 400 Stanford cyclists on Lasuen Mall, a popular campus street for bicycles. I simply noted whether each cyclist wore a mask, a helmet, neither, or both. Here are the final tallies:
Total cyclists: 400 – (100%)
No mask, no helmet: 195 – (49%)
Mask, no helmet: 134 – (34%)
Helmet, no mask: 42 – (10%)
Mask and helmet: 29 – (7%)
That works out to a masking rate of 41% and helmet-wearing rate of 17%. So, Stanford students are about twice as likely to wear a mask on a bicycle as a helmet … [A]t one of America’s leading research universities, students wear masks on bicycles at a higher rate than they wear helmets.
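The article's rates can be recomputed directly from the tallies (each rate counts cyclists wearing the item, with or without the other):

```python
# Recomputing the masking and helmet rates from the reported tallies.

no_mask_no_helmet = 195
mask_no_helmet = 134
helmet_no_mask = 42
mask_and_helmet = 29

total = no_mask_no_helmet + mask_no_helmet + helmet_no_mask + mask_and_helmet

mask_rate = (mask_no_helmet + mask_and_helmet) / total    # 163/400 = 0.4075
helmet_rate = (helmet_no_mask + mask_and_helmet) / total  # 71/400  = 0.1775

print(total)        # 400
print(mask_rate)    # ~41% masking
print(helmet_rate)  # ~18% helmet-wearing
```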
Perhaps we need more numeracy and less politics. Back in the day, prominent Founding Fathers urged the careful calculation of probabilities when judging risks. As science historian I. Bernard Cohen writes:
Thomas Jefferson, famous for his insistence on numerical data, even drew on an argument based on numbers to analyze Shays’s Rebellion [an armed uprising protesting high taxes]. “The late rebellion in Massachusetts,” he wrote, “had given more alarm than I think it should have done.” He calculated that “one rebellion in 13 states in the course of 11 years” is not very great, being the same as “one for each state in a century and a half.”
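Jefferson's back-of-the-envelope calculation holds up: pooled over all thirteen states, one rebellion in eleven years amounts to one per state every 143 state-years, roughly a century and a half:

```python
# Jefferson's rebellion arithmetic: one rebellion observed across
# 13 states over 11 years.

states, years, rebellions = 13, 11, 1

# Total observation window, measured in state-years.
state_years = states * years            # 143

# Expected number of years between rebellions for any single state.
years_per_rebellion = state_years / rebellions

print(years_per_rebellion)  # 143.0 -- "one for each state in a century and a half"
```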
And beyond even the desire to cherry-pick risks for political purposes is the institutional tendency to crave something, anything, to guide behavior in a complex world where many competing forces are at work. As Peter Bernstein writes in his book Against the Gods: The Remarkable Story of Risk:
One incident that occurred while [Kenneth] Arrow [a future Nobel Prize winner] was forecasting the weather [for the U.S. military] illustrates both uncertainty and the human unwillingness to accept it. Some officers had been assigned the task of forecasting the weather a month ahead, but Arrow and his statisticians found that their long-range forecasts were no better than numbers pulled out of a hat. The forecasters agreed and asked their superiors to be relieved of this duty. The reply was: “The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.”
We’ll continue the discussion of the politicization of risk in the next essay in this series.