Continuing this essay series on barriers to scientific progress, this essay explores an interesting piece by M. Anthony Mills in The New Atlantis magazine, in which he outlines three different views on how to measure “scientific flourishing.”
Mills writes:
Funding is of course necessary for science. Modern science is a large-scale and expensive endeavor. But funding is not sufficient for scientific progress. The scientific enterprise suffers from systemic problems that won’t be solved, and could even be made worse, by simply throwing more money at researchers. These problems include the failure to replicate key experimental findings (the so-called “replication crisis”), the prevalence of shoddy research practices, misaligned incentives, increasing bureaucratization, and the slowing of scientific progress. In response to these problems, a number of key science agencies, including the National Science Foundation and the National Institutes of Health, have launched programs to reform funding mechanisms, incentivize breakthrough discoveries, reduce the administrative burden on researchers, and link research to real-world outcomes. These are promising developments for improving federal science as well as the larger research enterprise. But reforms aimed at spurring scientific progress will be shots in the dark without a clear conception of what scientific progress is. Unless we are explicit about what we mean by scientific progress, we will fail to see the tradeoffs between different conceptions of it and the particular kinds of reform they each require. Worse, the wrong kind of reform may even achieve the opposite of what we intend.
Mills goes on to outline three different ways of viewing scientific progress, or the lack thereof:
There are at least three different models for thinking about scientific progress that inform contemporary debates, mostly implicitly. All three are rooted in well-established philosophical traditions, but they differ significantly in how they understand science. The first is what we might call the accumulationist model of scientific progress. According to this model, science progresses through the steady accumulation of data, facts, or information. The guiding metaphor here is the container: scientists go out and find bits of knowledge and add them to the container. Scientific progress is therefore a cumulative process, linear and gradual. Importantly, this process of accumulation is potentially finite. Scientists could in principle find all the bits of knowledge and discover all there is to know about the world. They can fill up the container. At the very least, scientists could, to mix metaphors a bit, pick all the low-hanging fruit — the bits of knowledge that are most easily accessible — leaving only incremental improvements. This view is an important part of the “folk” understanding of science, and it remains influential in policy discussions, despite having been subjected to severe criticism by philosophers, sociologists, and historians over the last century.
The accumulationist model may be contrasted with one that we can call the Kuhnian model, after historian-philosopher Thomas Kuhn, who famously critiqued the accumulationist view. According to this account, progress is not linear and gradual; it is punctuated by moments of profound conceptual change and innovation. There are periods of relative calm — what Kuhn termed “normal” science — during which progress looks a lot like it does to the accumulationist. But these periods are interrupted by crises, when prevailing theories break down. Rivals emerge, challenge the consensus, ultimately overthrow a prevailing paradigm, and take its place, as when relativistic and quantum physics dethroned classical physics. These are the scientific revolutions that Kuhn called “paradigm shifts.” In the Kuhnian model, what counts as a meaningful scientific fact or problem depends on the paradigm. Scientists don’t just go out and collect facts; they use theories to interpret the world and manipulate it. So while a given paradigm might exhaust itself — scientists could “fill up the container” — that doesn’t necessarily mean that scientific progress ends. On the contrary, when a new paradigm emerges, it poses its own new questions, problems, and opportunities. Scientists set aside the old container and get a new one — or, perhaps more accurately, they use the materials from the old container to build a new one entirely.
The contrast between the first two models of scientific progress may be illustrated by a historical anecdote. The famed theoretical physicist Max Planck once recalled that as a university student in the 1870s he asked his teacher about the prospect of a career in physics. The older physicist told Planck that almost all of the major problems had already been solved, leaving only some “specks of dust and bubbles” to test and incorporate. From an accumulationist point of view, the advice was in fact fairly sound. The prevailing classical paradigm in physics was nearly complete in the late nineteenth century — many of the major problems had been solved within it, leaving little of great significance for young physicists to do. What the older physicist didn’t anticipate was that physics was on the cusp of one of the greatest scientific revolutions since Newton, one in which Planck himself would play a decisive role. So what appeared in the classical paradigm as relatively minor problems to be solved through the extension of existing theory would turn out in retrospect to be fundamental problems requiring deep conceptual innovation of just the kind provided by the emerging quantum theory — which Planck initiated in the late 1800s — and by the relativity theories later developed by Einstein. The older physicist can hardly be blamed for not anticipating quantum physics. But from a Kuhnian perspective, he could have anticipated that there would be more theoretical revolutions in the future, even if he could not say exactly what or when. His error was to assume that the end of classical physics was the end of physics as such. Fortunately for us, Planck did not reach the same conclusion.
The third and last model is best understood in contrast to the Kuhnian one. For the Kuhnian model, what propels science forward is problems or crises that are internal to science. Thus, what produced the crisis in classical physics, to which scientists like Planck and Einstein provided solutions, were highly technical problems of interest only to specialists. According to the third model, however, science progresses not by extending existing scientific paradigms, nor by resolving problems or crises internal to science. Instead, science progresses by grappling with problems posed to it from outside by social, political, and economic needs. We recognize scientific progress not by advances or innovations in our theoretical knowledge but by whether and to what extent our theories help us solve practical problems. Does science generate technological breakthroughs, contribute to economic growth, or help us solve pressing social and political problems? We might call this the Baconian model. Science is said to be flourishing only insofar as it bears fruit that can aid in the “relief of man’s estate,” as Francis Bacon famously put it in the seventeenth century. Of course, the first two models of scientific progress don’t reject the idea that science has practical benefits. No one would deny that science contributes to technological innovation and economic growth, or that it helps us solve social and political problems. What is distinctive about the third model is that it takes these contributions to be essential aspects of scientific progress, while the other two consider them byproducts, however important. A science that is “barren of works,” to use Bacon’s metaphor, is immature and sterile, no matter how theoretically sophisticated it may be: “it can talk, but it cannot generate.”
It’s interesting to note (or maybe it’s just restating the obvious) that the sorts of race-based preference programs at the heart of the “Diversity, Equity, and Inclusion” (DEI) programs in medical schools today are clear signs of stagnation or retrogression under all three models of scientific progress. Under the accumulationist model, based on the steady accumulation of knowledge, DEI programs forsake knowledge in the form of objectively measurable standardized test scores. Under the Kuhnian model, based on periods of profound change and innovation, DEI programs explicitly look for guidance to fields other than science, namely social theory and some of its falsely premised “social justice” formulations, which cannot create any productive tension with existing scientific models of the world because they are not themselves scientific models of the world. And under the Baconian model, which judges science by its beneficial effects on social, political, and economic needs, DEI programs rest on the false premise that all disparities in outcome between people grouped by race are the result of racism or other structures motivated by the desire to subjugate others. I’ve explored the fundamental flaws of those premises extensively in previous essays, largely compiled here and here, and some of the ways they lead to dramatically worse outcomes for the people they supposedly intend to benefit here.
Regarding the three models he outlines for measuring scientific progress, Mills writes:
[W]hatever their similarities, these models have fundamentally divergent conceptions of scientific flourishing. For this reason, there inevitably will be points of disagreement when it comes to proposals for reform. And these reforms, in turn, will often involve tradeoffs. We will evaluate these tradeoffs differently depending on which conception of progress we endorse. If we follow the first model, our aim should be to ensure the efficient accumulation and diffusion of knowledge, for example by making science transparent, open, and streamlined. Reforms might aim to reduce the administrative burden on researchers. As it stands, researchers on average spend nearly half their time on paperwork rather than actively engaging in research. Streamlining federal rules and regulations or standardizing grant application processes could increase scientific efficiency by freeing scientists up to focus on their core competencies, rather than wasting time on clerical work. Of course, combating the bureaucratization of science could also serve the goal of increasing scientific autonomy, giving scientists more leeway to focus on the work they set out to do. To this extent, the accumulationist model and the Kuhnian model have common cause. But their policy implications diverge in other respects. For instance, it is essential to the Kuhnian model that scientific change is discontinuous, with new paradigms challenging and overthrowing what came before. There is no way to know in advance whether or when a new paradigm will emerge or what it will look like. So for science to flourish, it needs the freedom to resolve its own problems and manage its own crises in its own way. This means policing the boundaries of established knowledge while at the same time making space for the kind of creativity and risk-taking that can lead to breakthrough discoveries. In practice, reforms might aim to give individual scientists or scientific institutions more flexibility in how they spend research funds or what projects they decide to pursue and how. Or reforms could try to ensure that scientific institutions have the right incentives — or are cultivating the right habits of mind — for young researchers to challenge conventional wisdom and propose creative new alternatives. Such autonomy may be worth short-term tradeoffs in efficiency — for example, more dead-end projects, more high-risk projects — if it means more Einsteins in the future.
Clearly, insofar as Mills’s discussion above relates to reducing regulations and giving scientists more autonomy, DEI programs cut in exactly the opposite direction, adding more regulations that restrict the autonomy of scientists and reduce their ability to rely on objective measures of academic performance in doing their work. In that way, DEI also runs directly contrary to the wisdom of microbiologist and chemist Louis Pasteur, credited with developing the germ theory of disease and with inventing the process of pasteurization. Pasteur said during an 1854 lecture at the University of Lille that “In the fields of observation chance favors only the prepared mind.” He was referring to the many examples in history in which a scientist comes across something unexpected, often by accident. An unprepared mind might dismiss it because it does not conform to a pre-existing expectation. But a prepared mind will be willing and able to see a new truth hidden in the accident, a truth informed by scientific principles as yet undiscovered. A researcher who has studied the phenomenon has gathered hundreds of historical examples. He writes:
Consider the following examples, all of which have been referred to as “serendipitous”: A measles outbreak in Indian monkeys caused poliomyelitis vaccine preparation to switch to African monkeys. This led Levine to discover the p53 tumour suppressor gene; Daguerre had spent years trying to coax photographic images out of iodized silver plates. After yet another futile attempt, he stored the plates in a chemicals cabinet overnight to find the fumes from a spilled jar of mercury accidentally produced a perfect image on the plate; Richet, whilst searching for threshold doses of various poisons, discovered that he could induce sensitization to a toxic substance thereby developing understanding of allergies and anaphylaxis. Accepting his Nobel Prize, he said, “It is not at all the result of deep thinking, but of a simple observation, almost accidental”; Ehrlich discovered Salvarsan, dubbed the first magic bullet, knowing very little about how it worked. It emerged from an extraordinary focus on the idea of chemotherapy (where chemicals might kill pathogens selectively). Salvarsan was the 606th preparation, the 605 before it having each gone through their own set of experiments.
This researcher pinned down some of the mechanisms by which serendipity comes about, including astute observation and “controlled sloppiness” (carefully recording data in a way that allows unexpected events to occur while still allowing their source to be traced). But those conditions require a well-prepared, properly trained scientist with demonstrated academic abilities. The sorts of DEI programs described in previous essays tend both to put less prepared minds in scientific situations and to impose more limits on scientists’ autonomy, so that chance events are less likely to arise in the first place and, if they do, researchers will be less prepared to assess their significance. A double whammy. (The need to prepare minds also extends much further. Chance favors the prepared mind throughout life, which is what makes youth education so important. When public education fails lower-income students, for example, it leaves them less prepared to see opportunities in life generally, such as job prospects or other productive avenues they might pursue.)
Mills continues:
In stark contrast, if we measure scientific progress by science’s practical outputs, then too much scientific autonomy may be a liability. Our goal should instead be to link science more explicitly to societal needs. Thus, reforms might aim to make science more accountable to outside forces and stakeholders — funders, the government, the market, the public — since it is they who are in touch with society’s most pressing needs. Such accountability is worth the price of scientific autonomy, so long as it produces the practical benefits society demands.
Here, proponents of DEI programs might argue that the relevant outside forces and stakeholders are the political forces that hold DEI values. Indeed, the Biden Administration’s National Science and Technology Council report on government science funding explicitly states that, in grant-making decisions, “equity” considerations can trump science. According to the report, “Many policy decisions are ‘science-informed,’ meaning that factors in addition to science shape decision-making. These factors may include financial, budget, institutional, cultural, legal, or equity considerations that may outweigh scientific factors alone.” But the authors of those words do not come close to representing the views of a majority of the American public, so it’s hard to see how they could be “in touch with society’s most pressing needs.” And in any case, programs of racial preference fail to produce “the practical benefits society demands”: such programs are designed to benefit a narrow, racially defined class of people, with the explicit understanding that doing so involves rejecting objective measures of academic achievement. Further, as Mills explains:
[S]ince World War Two the public has been underwriting large-scale scientific research and development through vast public expenditures. Implicit in this contract is the idea that these expenditures provide some kind of public benefit. And so it is inevitable, and in a democracy also desirable, that citizens and their elected representatives demand to receive something in return — whether that something is practically beneficial or just adds to our common stock of knowledge … Or consider those sciences that have practical objectives. Biomedical research, for instance, aims not only to advance our understanding but also to improve the practice of medicine and, ultimately, human health. A focus on real-world impacts makes a lot of sense here. If, at the end of the day, there is no obvious correlation between theoretical progress and better health outcomes, the public might reasonably ask whether biomedical research is really worth the vast sums of money it demands.
And as we explored in the last essay in this series, better health outcomes for patients follow from better performance on standardized medical exams by future doctors, whereas DEI programs aim to eliminate such standardized tests. This lowering of standards by DEI departments comes at a time when America faces increasing competition from China, whose influence in science and technology may soon surpass that of the United States. As Sadanand Dhume writes in the Wall Street Journal:
Though the U.S. has long ruled those realms [of science and technology], China is catching up fast. Washington has begun to address this problem in recent years. But as former Google CEO Eric Schmidt and artificial intelligence expert Yll Bajraktari warned in a Foreign Affairs piece last year, it’s “hard to say with any confidence” that the U.S. is “better positioned or organized for the long-term contest” with China than it was a few years ago. “It is entirely possible to imagine a future where systems designed, built, and based in China dominate world markets, extending Beijing’s sphere of influence and providing it with a military advantage over the United States.” Unlike the former Soviet Union, whose scientific prowess was limited to a handful of domains, China has emerged as a genuine rival to America. The Australian Strategic Policy Institute reported this year that China leads the U.S. in research on 37 of 44 critical technologies, including advanced aircraft engines, electric batteries, machine learning and synthetic biology. In a recent essay in Foreign Affairs, Dan Wang, an expert on China’s technology landscape, wrote that “China now rivals Japan, South Korea, and Taiwan in its mastery of the electronics supply chain.” In 2007, the Chinese added less than 4% of the value-added costs of iPhones made in that country. Now it’s more than 25%.
Interestingly, researchers have found that nepotism in science (another practice that, like racial preferences, is not based on objectively determined scientific merit) declined during historical periods of rapid scientific advancement, such as the Enlightenment and the Scientific Revolution. The researchers report:
We have constructed a comprehensive database that traces the publications of father–son pairs in the premodern academic realm and examined the contribution of inherited human capital versus nepotism to occupational persistence. We find that human capital was strongly transmitted from parents to children and that nepotism declined when the misallocation of talent across professions incurred greater social costs. Specifically, nepotism was less common in fields experiencing rapid changes in the knowledge frontier, such as the sciences and within Protestant institutions. Most notably, nepotism sharply declined during the Scientific Revolution and the Enlightenment, when departures from meritocracy arguably became both increasingly inefficient and socially intolerable.
Today, we might hope that the similarly inefficient and socially intolerable program of “equity”-based racial and other preferences in the scientific field will likewise soon decline, if innovations are to proceed apace in the future.
That concludes this essay series on barriers to scientific progress. In the next essay series, we’ll explore how research on the psychological roots of bureaucracies reveals the hidden motives that self-styled “do-gooder” institutions may not want you to understand.