We are entering the post-antibiotic era. As you are probably aware, bacteria are adapting to the antibiotics we currently have, and we are seeing the emergence of bacteria we cannot kill.
Now, it’s more than a few isolated cases. The CDC just published an anguished report, covered by Vox’s Sigal Samuel, sounding the alarm. Deaths from antibiotic-resistant bacteria now stand at 35,000 per year in the US–roughly three times the number of gun homicides, and more than deaths from Parkinson’s.
It is hard to overstate the magnitude of the threat. Most of the increase in life expectancy since the early 20th century, and the decline in child mortality, is thanks to antibiotics. It is a much clearer and potentially more damaging threat than, say, climate change. And yet it remains very much an under-the-radar issue.
What’s the policy takeaway?
The clearest low-hanging fruit seems to be rationing, especially of the antibiotics required to make factory farming work. Not to mention having insurers and doctors tell patients things they might not like to hear. (Cynically, but effectively, the French government has steered people away from excessive use of antibiotics by promoting homeopathy.) But that’s never politically palatable, and furthermore would only buy us time.
What about more funding for research, you may ask? Well, maybe. Vox quotes an expert saying, “For the US, the total cost to fix the broken antibiotics model is $1.5-2 billion per year.” Given that the actual science of fighting antibiotic-resistant bacteria is very much in its infancy–the most optimistic spin you can put on it is that some leads look promising–one may query such a specific figure.
Perhaps the single most important fact of our current era–certainly the most under-discussed relative to its importance–is what Tyler Cowen has called the Great Stagnation: the fact that scientific and technological progress seems to have stalled since some time in the 1970s. In fact, he is out with a new paper today arguing that the rate of scientific progress is indeed slowing down.
The situation seems particularly bad when it comes to medical science. In 2005, a Stanford professor named John Ioannidis published a paper titled “Why Most Published Research Findings Are False.” It is one of the most cited scientific articles of the 2000s–7,729 cites as of this writing, according to Google Scholar–and virtually nobody seems to disagree with the basic argument, which is that there is a structural tendency for false positive findings to be reported more than correct negative findings. The article has spawned what is being called an entirely new field of “meta-science.” (Even though there would be no need for such a “new field” if the scientific community, and the educated public at large, had not forgotten the philosophical content and basis of what science is, and if “epistemology,” the venerable name for “meta-science,” were still taught.) Indeed, in 2016, the top scientific journal Nature, encouragingly and admirably, published a special report on the problem that has been called “the reproducibility crisis.” And yet this reality has yet to penetrate public consciousness in a meaningful way, presumably because the consequences are too fearful to contemplate.
Entrepreneur Will Wilson’s article “Scientific Regress” in First Things lays out the evidence for, well, scientific regress, across the board, but particularly in biology and medicine. The system seems fundamentally broken, and fully explaining it would require an entire article on its own.
Partisans of the new scientism are fond of recounting the “Sokal hoax”—physicist Alan Sokal submitted a paper heavy on jargon but full of false and meaningless statements to the postmodern cultural studies journal Social Text, which accepted and published it without quibble—but are unlikely to mention a similar experiment conducted on reviewers of the prestigious British Medical Journal. The experimenters deliberately modified a paper to include eight different major errors in study design, methodology, data analysis, and interpretation of results, and not a single one of the 221 reviewers who participated caught all of the errors. On average, they caught fewer than two—and, unbelievably, these results held up even in the subset of reviewers who had been specifically warned that they were participating in a study and that there might be something a little odd in the paper they were reviewing. In all, only 30 percent of reviewers recommended that the intentionally flawed paper be rejected.
If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The “bad” papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, “some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”
What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm.
Vox faults lack of public funding (of course) and evil drug companies (of course) for the lack of solutions to the antibiotic resistance problem.
But while the academic-industrial complex churns out ever more studies, most of which we must assume are false, drug companies, which actually lose money if their products fail, have “an unspoken rule . . . that half of all academic biomedical research will ultimately prove false.” And indeed, drug companies took the lead in pointing out the problems with biological science. (To take just one example, a study by Bayer found that 75% of a sample of cancer research studies published in top journals failed to replicate.)
Increased science funding is one of the few issues that remain bipartisan, and funding has steadily increased even as the state of science has worsened. Who doesn’t love more science? And of course, a giant bureaucratic edifice has built up around that gravy train, with the predictable result of selecting for leaders who are better at keeping the gravy flowing than at actually accomplishing what they are supposed to do.
Wilson writes of a nightmare scenario where the scientific mechanisms of peer review and reproduction do work, but only partially:
And even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise. Borges’s Library of Babel contained every true book that could ever be written, but it was useless because it also contained every false book, and both true and false were lost within an ocean of nonsense.
Paradoxically, increased “funding” might be the worst thing we could do to fight antibiotic resistance.
What can policy do, then?
The first item on the agenda would have to be recognizing reality. Writing undiscerning big checks to the NIH and Harvard is, at this point, more likely to produce garbage than actual science. Is this writer the only person who has noticed that Professor Calculus-like eccentric geniuses seem to have disappeared from public culture? Instead, all the “leading experts” in scientific fields seem to look and talk like lawyers. It should be presumed that the bigger, more prestigious, and more lavishly funded a research institution is, the more actively harmful it is to scientific and technological progress. Let the reader understand.
Another potential idea might be big-ticket prizes for breakthroughs in antibiotic resistance research and other fields, on the model of the X-Prize Foundation.
But perhaps the most effective idea might be to redirect NIH funding to replicability studies. The work of replicating studies–finding out whether a study’s finding actually checks out–is hard, expensive, and rather less glamorous than pursuing a breakthrough of your own. With private (and “private”) actors having a direct financial stake in the existing system, government could have a role in breaking the logjam.