Note: a couple of hours ago, some readers received an incomplete post titled “Headlines – 25th of March”. That was a partial draft of tomorrow’s post. I hit the “publish” button by mistake, thinking it was the “save draft” one. Apologies.
If you haven’t signed up for this newsletter yet, you can do it here – it’s free.
In this post, I will analyze two metrics which I find overrated and propose two which I find underrated.
Overrated metrics
Two metrics which I find overrated (and perhaps misleading) are mortality and the number of cases per million.
Mortality, defined as the number of people who died because of the virus divided by the number of people who got infected by it, is overrated for three reasons.
First, it assumes that both the numerator and the denominator are correct – respectively, that the cause of death is attributed correctly and that everyone who is infected is tested. As we’re learning from the newspapers, this is very often not the case (the sketch after these three reasons shows how badly incomplete testing can distort the figure).
Second, mortality is not a fixed number. It decreases if we manage to take precautions so that the elderly and those with underlying conditions do not get infected, and it increases if we allow our healthcare systems to become overloaded, so that the sick cannot be given the attention they require. It’s a lagging indicator with little predictive power.
Third, even people without the virus might die because of the virus. What happens, for example, to the person who has a heart attack in a city where the epidemic is widespread, and cannot get medical attention in time because ambulances and doctors are busy attending to COVID-19 patients?
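To make the first point concrete, here is a minimal sketch of how incomplete testing inflates the apparent mortality. All numbers are hypothetical, chosen only for illustration:

```python
# Hypothetical illustration: how under-testing inflates apparent mortality.
# None of these numbers are real data.

true_infections = 100_000      # actual number of infected people
deaths = 1_000                 # deaths correctly attributed to the virus
detection_rate = 0.10          # only 10% of infections are confirmed by testing

confirmed_cases = true_infections * detection_rate

true_mortality = deaths / true_infections        # 1.0%
apparent_mortality = deaths / confirmed_cases    # 10.0%

print(f"True mortality:     {true_mortality:.1%}")
print(f"Apparent mortality: {apparent_mortality:.1%}")
```

With only one infection in ten detected, the measured rate is ten times the true one – and the distortion works in the opposite direction if deaths are under-attributed.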
Mortality has its uses, but we shouldn’t overly rely on it.
The number of cases per unit of population is misleading because the population size of a country only matters when it comes to saturation. Take the example of China and Italy: two vastly different populations (China’s is roughly 22 times Italy’s), and yet they currently have a similar number of official deaths. Why? Because only the population within social distance of an outbreak is exposed to it.
Using the number of cases per unit of population might give a false sense of security to large countries, which in the early stages of the pandemic get to dilute their cases and think that the situation is better than it is.
(There is some utility to the metric: estimating how much help a country can provide to the sick. A country with a low number of cases per unit of population has a huge number of healthy people who can provide for the sick and the quarantined, whereas a country with a high number might end up in an “everyone for himself” mode.)
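A back-of-the-envelope sketch of the dilution effect described above, again with made-up numbers:

```python
# Hypothetical illustration of the dilution effect: two countries with the
# same absolute outbreak look very different on a per-million basis.

outbreak_cases = 50_000                    # same outbreak size in both countries

small_country_population = 60_000_000      # roughly Italy-sized
large_country_population = 1_400_000_000   # roughly China-sized

small_per_million = outbreak_cases / small_country_population * 1_000_000
large_per_million = outbreak_cases / large_country_population * 1_000_000

print(f"Small country: {small_per_million:,.0f} cases per million")  # ~833
print(f"Large country: {large_per_million:,.0f} cases per million")  # ~36
```

Same outbreak, more than a 20-fold difference in the headline per-capita number – exactly the false sense of security described above.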
What metrics to use, then?
The most underrated metric is number of hotspots.
Consider, for example, two countries, each with 2,000 cases: the first with a single hotspot and the second with 20 hotspots of 100 cases each.
The first country can easily isolate the single hotspot by ordering a strict lockdown in the affected province. It can then use the healthy provinces to provide food and medical support. It will see low growth in cases.
The second country, instead, will take more time to lock down, as it will have to close 20 provinces, incurring roughly 20 times the financial and social costs. Politicians are reluctant to incur such costs and will delay the lockdown as much as they can. For example, Italy immediately quarantined the Vò and Lodi areas (when it was thought that the virus was only there), whereas France took much longer, as the virus seemed more widespread there.
Moreover, the more hotspots there are, the larger the diffusion, all other things being equal. For example, if each hotspot has a social circle of, say, 100,000 people, then in the second country 2 million people are at risk of contagion, compared to 100,000 in the first.
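A small sketch of that arithmetic, using the hypothetical numbers above and assuming the social circles of different hotspots do not overlap:

```python
# Hypothetical comparison from the text: 2,000 cases split across
# 1 hotspot vs. 20 hotspots, with each hotspot exposing a social
# circle of 100,000 people (circles assumed not to overlap).

SOCIAL_CIRCLE_PER_HOTSPOT = 100_000

def population_at_risk(hotspots: int) -> int:
    """People within social distance of at least one hotspot."""
    return hotspots * SOCIAL_CIRCLE_PER_HOTSPOT

country_a = population_at_risk(hotspots=1)    # 100,000 people at risk
country_b = population_at_risk(hotspots=20)   # 2,000,000 people at risk

print(f"Country A (1 hotspot):   {country_a:,} people at risk")
print(f"Country B (20 hotspots): {country_b:,} people at risk")
```

Same case count, twenty times the exposed population – and roughly twenty times the lockdown cost.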
The second underrated metric is number of deaths regardless of cause.
There is evidence, for example, that in the province of Bergamo (Italy) there has been an astonishingly high number of deaths in March 2020 compared to March 2019 – the number of deaths officially attributed to COVID-19 in the first 23 days of March 2020 is higher than the total number of deaths, regardless of cause, in the whole of March 2019 (link).
The number of deaths regardless of cause is a useful metric because it allows comparisons between countries – there is no need to worry about testing policies, policies for attributing the cause of death, or indirect deaths (due to an overloaded healthcare system). This metric is very solid. Its main sources of uncertainty are the small sample size and the possibility of a confounding cause of death, such as a bad flu or dengue season. However, I would argue, it is still more reliable than metrics which assume correct attribution of the cause of death and extensive, precise testing.
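A minimal sketch of how this metric is typically used – computing excess deaths against a baseline from previous years. The figures below are made up for illustration, not taken from any official source:

```python
# Hypothetical illustration: excess deaths = all-cause deaths in the current
# period minus the average of the same period in previous years.
# All numbers below are made up for illustration.

deaths_march_previous_years = [510, 530, 520]   # e.g. March 2017-2019, any cause
deaths_march_current_year = 690                 # March 2020, any cause

baseline = sum(deaths_march_previous_years) / len(deaths_march_previous_years)
excess_deaths = deaths_march_current_year - baseline
excess_pct = excess_deaths / baseline

print(f"Baseline (previous years): {baseline:.0f}")
print(f"Excess deaths:             {excess_deaths:.0f}")
print(f"Excess over baseline:      {excess_pct:.0%}")
```

No testing policy or cause-of-death attribution enters this calculation, which is precisely what makes the metric robust.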
Conclusion
For the reasons mentioned above, neither mortality nor the number of cases per unit of population is a reliable metric.
Instead, I would love it if we used the number of hotspots and the number of deaths regardless of cause more.
PS: If you have questions, ask them by replying to this mail or commenting below.
Last week, this was an indoor basketball court. Today, it’s part of a hospital in Milan. Picture by @RobertoBurioni.
Hi Luca... I looked at another metric: the number of death notices. Corriere della Sera actually publishes them online, and you can see a roughly 30% increase compared to last year (I checked 1 February to 21 March of each year – 2017: 527, 2018: 537, 2019: 516, but 2020: 689!).