I’ve read many people arguing that we were blindsided by the virus because the human mind has trouble understanding exponentials.
My grandma would tell you that you don’t need to understand exponentials to know that the virus is dangerous. Humbleness would suffice.
The humbleness of looking at the makeshift hospital beds built in China, at Spain converting an ice rink into a morgue because Madrid’s crematoriums are saturated, and realizing that this could be your future would be enough.
The humbleness of realizing that Lombardy is not the third world, and that what happened there will happen in other countries too, unless they react faster and more effectively.
It’s easy to know how best to react – it would suffice to look at what worked (in Taiwan, Singapore, South Korea and China) and at what didn’t (in Italy), and to do more of the former and less of the latter.
Unfortunately, some people prefer sounding smart to doing what works, so we have models. Probably not by inclination, but because that’s what they’ve been valued for.
Political elections and scientific committees made of peers put a Darwinian pressure on sounding smart rather than doing what works – the latter being the competence of entrepreneurs and other professions rooted in the real world.
The result: a flock of modern astrologists using mathematical models.
The problem with models
Some models do work. However, there are three possible problems with models, and any model that has at least one of them should not be used under any circumstance (this discussion is inspired by many works of Nassim Nicholas Taleb).
First, the error on results. In some models, small errors in the inputs produce small errors in the outputs. In others, small errors in the inputs produce huge errors in the outputs. Epidemiological models tend to be of the second type.
Let’s take a very simple model: cases grow by 30% each day. This model would predict that an outbreak of 100 cases would grow to 3937 cases in two weeks. However, if the input was wrong and the virus actually grows by 40%, the model would predict 11112 cases in two weeks. A difference of 10 percentage points in the input creates a huge difference in the output (almost 3x).
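For the curious, here is that arithmetic as a few lines of Python; the growth rates and the two-week horizon are the same ones used in the example above.

```python
# How a small error in the daily growth rate compounds over two weeks.
initial_cases = 100
days = 14

for daily_growth in (0.30, 0.40):
    projected = initial_cases * (1 + daily_growth) ** days
    print(f"{daily_growth:.0%} daily growth -> {projected:,.0f} cases after {days} days")

# Output (rounded):
# 30% daily growth -> 3,937 cases after 14 days
# 40% daily growth -> 11,112 cases after 14 days
```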
Second, the error on parameters. How do we know if cases grow by 20%, 30% or 40%? We estimate it. However, if we have small samples, we will have a high uncertainty on the coefficient, creating even larger uncertainty on the result.
Please note that in this case, it’s not (only) sample size that matters, but how clustered the samples are. A sample of 10,000 cases across 5 countries is more reliable than a sample of 100,000 cases in China, because the latter might provide information about the virus OR it might provide information about China.
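To illustrate with purely hypothetical numbers: suppose a small, clustered sample only pins the daily growth rate down to somewhere between 25% and 35%. Propagating that band over the same two weeks shows how a modest uncertainty on the parameter becomes a huge uncertainty on the result.

```python
# Hypothetical example: the daily growth rate is estimated at 30%,
# but a small or clustered sample leaves roughly +/- 5 percentage points of uncertainty.
initial_cases = 100
days = 14
estimates = {"low": 0.25, "point": 0.30, "high": 0.35}

for label, rate in estimates.items():
    projected = initial_cases * (1 + rate) ** days
    print(f"{label:>5} estimate ({rate:.0%}/day): {projected:,.0f} cases after {days} days")

# Output (rounded):
#   low estimate (25%/day): 2,274 cases after 14 days
# point estimate (30%/day): 3,937 cases after 14 days
#  high estimate (35%/day): 6,678 cases after 14 days
```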
Gerd Gigerenzer’s excellent book Gut Feelings shows that the reason humans and animals use simple heuristics is not that they are faster than complex models; it’s that they work better when samples are small or parameters are volatile. In this case, a simple heuristic is: viruses spread fast and kill a lot of people, so better to react quickly and, in case of doubt, do what has worked for centuries: isolate the sick.
Third, the false sense of security. I don’t care if your model can predict with 95% accuracy how the epidemic will turn out. I care about what will happen in the remaining 5% of cases. What happens outside of the confidence interval? Do twice as many people die? Or do billions of people die?
Moreover – and this is disgracefully never mentioned – what’s the uncertainty of the 95%? In other words, how do we know that the 95% confidence interval isn’t, actually, 50%?
The answer: we don’t. Or, more precisely, we only know it when we’re studying phenomena which have been observed for decades and which have no chance of evolving. This is clearly not the case for the novel COVID-19.
Models during the pandemic
It is irresponsible to use models during this pandemic. They are guilty of the three problems mentioned above: small errors in the inputs create big errors in the outputs, we don’t have enough information about the parameters, and we do not even have enough information about what we do not know.
It is much better to man up, to have the responsible humbleness of admitting that no model can predict what will happen with enough resistance to error to justify building policy on top of it, and to act as one would in the absence of reliable information: do what has proved to work, and assume that, unless it is done, the country will end up like the countries that didn’t do it.
Practical examples
Taiwan did not attempt to model epidemic propagation, nor did it wait to collect data to inform its decisions. It began screening travelers from Wuhan as early as the 5th of January, the day a report of abnormal pneumonia was published on the WHO website.
First, protect; second, think; third, measure. (Why is it necessary to think before measuring? Because otherwise you extrapolate from spurious coincidences.)
Singapore, upon suspecting that someone might have the virus (because they have symptoms or because they met someone who tested positive), first isolates them, then tests them, and finally waits for the results before eventually lifting the isolation order.
The use for models
Models have their function: to understand what happened, why it happened, and to propose incremental changes for future prototypes (to be validated in the real world).
However, models should always be downstream of reality, not upstream, and should always be subordinate to risk management. First, protect, then model, later validate and finally optimize. Only the bottom-up works.
We cannot afford to base our reaction on models, not when a mistake would be fatal.
We already know what works; we just need the humility to transform it into action, even if it wasn’t our idea, even if we aren’t convinced it’s the best course of action.
As always, liking an outcome and disliking the actions that would bring us there is a recipe for frustration.
If you haven’t subscribed to this newsletter yet, you can do it here. You will receive two coronavirus updates a day – one with the headlines and one with an in-depth analysis.
