With every new policy to combat the spread of Covid-19, U.K. Prime Minister Boris Johnson has reassured the country that he’s being guided by the science. It’s the science that told him to hold back public measures as the virus got off to a running start, the science that prompted him to shun mass testing and contact tracing, the science that recommended an eventual lockdown and a ramping up of testing. Now the science says that it could be six months before we see anything that looks like the old normal. Science is confusing stuff.
At the forefront of the confusion are those modeling this pandemic and the curve we desperately need to flatten. The language of models, and modelers, is at odds with the demands and communication style of the world of politics and policy. The complexity of models — and their underlying uncertainty — gives rise to misunderstandings and, when leaned on too heavily, to policy mistakes.
It was a modeling change that sent U.K. policy careering toward lockdown. Britain had been leaning toward an approach known as “herd immunity,” whereby a large share of the population becomes infected over time, building up a broad resistance. After researchers at Imperial College reported on March 17 that the impact of a go-slow response to the virus in the U.K. and the U.S. would be devastating, Johnson changed course. And Donald Trump stopped comparing the virus to the winter flu.
A week later, a team from Oxford University, led by Sunetra Gupta, a professor of theoretical epidemiology, and Jose Lourenco, made a splash with the publication of a dramatically different model of the disease’s prevalence. Their paper suggested that, among other possibilities, up to 68% of the population may have already been infected with the virus — many without knowing it. If that’s true, they argue, the threat will have subsided in two to three months, with the health service stretched but not overwhelmed.
Then the Imperial team seemed to pivot again. The lead scientist on the study (who’s also a government adviser), Neil Ferguson, told a parliamentary committee last week that increases in hospital capacity and new restrictions in place made him “reasonably confident” that the NHS can handle the peak of the outbreak. Imperial’s previous paper had warned of 250,000 deaths if the government did not pursue far more draconian measures to suppress the spread of the virus; with new lockdown measures, he said it could be less than 20,000.
Ferguson insisted his estimates of the disease’s lethality hadn’t changed; with no controls at all, he said, the death toll could still reach 500,000. But his update factored in data showing a greater rate of transmission (a higher reproductive number) than previously thought, which seemed to support the idea that more people have indeed been infected and that the National Health Service would be able to cope. Still, it wasn’t entirely clear how the fatality estimates had changed so dramatically.
With competing models from teams of highly respected experts, constant updates and course corrections, it’s no wonder we struggle to make sense of what these models are telling us. It helps to understand what goes into the models and what they do and don’t do.
All mathematical models start with a well-defined question, a framework and a set of assumptions. “Both models are right in their design. But both answer separate questions — and both are only as good as the data they rely on,” says Dr. Jasmina Panovska-Griffiths, a senior research fellow and lecturer in mathematical modeling at University College London, and a lecturer in applied mathematics at The Queen’s College, Oxford University, who specializes in infectious diseases.
The question asked by the Imperial paper is: What strategies will change the epidemic curve of Covid-19 and flatten it? The Oxford paper asks a different question: Has Covid-19 already spread widely?
The Imperial approach, explains Panovska-Griffiths, is a stochastic model (the Greek root word means “able to guess”); by definition, results will vary depending on which probabilistic equations are assumed to best capture reality. For cases in which data are limited, stochastic model predictions can swing wildly as a function of the initial assumptions. Add new data and the result can be seismically different, which is what happened when the team input new data from Italy and China and drastically altered their predictions of the severity of the disease’s impact.
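That sensitivity to initial assumptions is easy to see in even a toy stochastic model. The sketch below (my own minimal illustration, not the Imperial model itself) simulates an outbreak as a branching process in which each case infects a Poisson-distributed number of new cases per generation. Run-to-run results vary with the random seed, and a modest change in the assumed reproductive number swings the total dramatically:

```python
import math
import random

def poisson_draw(rng, lam):
    """Draw from a Poisson(lam) distribution (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def stochastic_outbreak(r0, generations=8, initial_cases=10, seed=None):
    """Toy branching-process outbreak: each current case infects
    Poisson(r0) new cases per generation. Returns total cases."""
    rng = random.Random(seed)
    cases = initial_cases
    total = cases
    for _ in range(generations):
        cases = sum(poisson_draw(rng, r0) for _ in range(cases))
        total += cases
    return total

# Different seeds give different trajectories; a small change in the
# assumed r0 (2.0 vs. 2.4) compounds into a large difference in totals.
low = [stochastic_outbreak(2.0, seed=s) for s in range(20)]
high = [stochastic_outbreak(2.4, seed=s) for s in range(20)]
```

Over eight generations, raising the assumed reproductive number from 2.0 to 2.4 roughly quadruples the expected case count per seed chain, which is the kind of swing that new data from Italy and China could produce in a far more elaborate model.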
There have been various criticisms of the Imperial model, including the decision to ignore the impact of widespread testing and contact tracing, which has been highly effective in places like South Korea. The underlying mathematical framework was also adapted from a model used for a flu pandemic, which Ferguson published back in 2006 in the journal Nature. That may be fine, though no two pandemics are really equal; and stochastic models with complex data sets are harder to replicate, says Panovska-Griffiths.
Unlike the Imperial team, the Oxford group used a so-called deterministic model, one that starts with a known — in this case, the number of deaths in the first 15 days of non-zero deaths in both Italy and the U.K. — and then makes various assumptions (including about how much of the population is at risk of hospitalization and the time between infection and death) to find levels of infection that fit with that data.
Their model (a so-called Susceptible-Infected-Recovered, or SIR, model), which also hasn’t been peer reviewed, suggests the virus may have been spreading a month before we were aware there was a viral enemy in our midst and that we may be approaching broader immunity. Yet that’s just one of many scenarios consistent with the model (a point best explained by Harvard postdoc James Hay). The number of reported deaths could equally be explained by a smaller number of infections and a larger proportion who are at risk of hospitalization.
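For readers unfamiliar with the model class, a deterministic SIR model is just a small set of equations tracking the susceptible, infected and recovered fractions of a population over time. The sketch below is a minimal illustration with invented parameters, not the Oxford group’s actual model:

```python
def sir(beta, gamma, s0=0.99999, i0=0.00001, days=120, dt=1.0):
    """Minimal deterministic SIR model, integrated with Euler steps.
    beta: transmission rate per day; gamma: recovery rate per day.
    Returns a list of (S, I, R) population fractions, one per step."""
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt  # susceptibles meeting infecteds
        new_recoveries = gamma * i * dt     # infecteds clearing the virus
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative run: beta/gamma = 2.25 (the basic reproductive number),
# so most of the population is eventually infected in this scenario.
trajectory = sir(beta=0.45, gamma=0.2)
```

The identifiability problem described above follows directly: observed deaths are roughly the infected-then-removed fraction times an assumed fatality rate, so a run with many infections and low severity can produce the same early death counts as one with few infections and high severity. Only data such as antibody testing can separate the two.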
Gupta, one of the lead researchers in the Oxford study, has suggested its main message comes through loud and clear: Only through testing for antibodies can real certainty about rates of infection be obtained. But Cambridge University Professor James Wood, head of the Department of Veterinary Medicine and a researcher in infection dynamics and disease, is among those who worry the model will only muddy the waters. The paper, he said, over-speculates and “is open to gross over-interpretation by others.”
This, really, is the rub: Models require interpretation. The world of scientific modelers looks so neat — pristine sloping lines on two-dimensional axes that tickle our love of pattern recognition and cause-effect. Only, that’s deceptive; it simply masks all the uncertainty. Modelers and scientists of all kinds don’t just accept that uncertainty is infused in everything they do. They also cope with it better than most, naming their assumptions, setting parameters and changing them as more data becomes available.
Those of us outside their bubble want to assuage our uncertainty, even banish it. Are we on the upward slope of a death march or seeing the tip of an iceberg of people who are infected but benign? The answer lies somewhere within a range of possibilities called a confidence interval, but this, of course, is not confidence-inspiring enough for most of us.
This doesn’t mean we dismiss the modelers, with their betas and sigmas. Models helped send man to the moon and much else besides. They’re not just elegant; they play an important role in helping us understand this pandemic and in preparing us for the next one. A robust model should approximate reality reasonably well, but whether it does isn’t always clear until enough real data has been gathered. When mistakes can cost lives, it can be difficult to have confidence in a new model for a new disease.
Perhaps the best available basis on which to make actual decisions today isn’t so much what the modelers can tell us about coronavirus, but the experiential evidence coming out of China, South Korea, Italy, Spain and elsewhere. That evidence has made clear for a while now what we’re supposed to do: isolate, test, trace, hope. –Bloomberg