Have we already forgotten the lessons of COVID-19? We asked an expert in anticipating crises
Bureaucracy, politics and fraying trust in institutions could mean that warnings about the next health crisis fall on deaf ears, says a leading expert.
- 17 February 2026
- 8 min read
- by Priya Joi
At a glance
- An expert in warning research says the world is already forgetting key lessons from COVID-19, as urgency fades and risks become ‘normalised’.
- Early alerts about threats such as bird flu can be diluted by bureaucracy, political pressure and fragmented data systems before action is taken.
- Strong surveillance, clear communication and public trust are essential to ensure warnings translate into timely response.
The next pandemic is not a matter of if, but when. But have we learned the lessons of the COVID-19 pandemic to ensure early warnings of the next health crisis are heeded?
We spoke to Dr Nikki Ikani, Assistant Professor of Intelligence & Security at Leiden University and King's College London, to find out how insights from warning research could help us better prepare for potential epidemics and pandemics.
Q. You’ve said the world is forgetting the lessons from COVID-19. What do you think explains the collective amnesia in terms of how we approach future threats?
I’ve spent a long time studying disasters, from pandemics to intelligence failures, industrial catastrophes and geopolitical crises. And one stubborn pattern keeps resurfacing: institutions are good at cataloguing calamities but terrible at heeding the warnings that precede them.
As soon as the crisis passes, the sense of urgency disappears and safety rules relax. What once demanded urgent correction begins to feel less pressing. This is a process we call ‘normalisation of risk’. Close calls become routine, and people stop remembering how risky things really are.
On top of that, we rarely learn the right lesson. Sometimes we forget to learn at all, and other times we draw the wrong conclusions.
Take the United Kingdom’s 2016 pandemic drills, Exercise Alice and Exercise Cygnus: the reports bluntly warned that the nation was ill‑prepared for a widespread outbreak, flagging an over-reliance on outdated flu plans and the erosion of existing preparedness knowledge.
Yet, planning assumptions remained heavily shaped by flu scenarios, to the exclusion of other viruses with pandemic potential. Even when a pandemic was identified as a top national risk, expectations were anchored to a flu-like event.
So, when COVID-19 emerged, it wasn’t that no one had imagined a pandemic, but that the imagination had narrowed. That’s often how warning failure works: not through total blindness, but through misplaced confidence in the wrong model.
Q. A US survey suggests that most people aren’t aware that there’s a risk of bird flu right now. Do you think we need to change health messaging about infectious threats?
It’s a difficult one, because part of the challenge is psychological. People spent months confined, anxious and exhausted during COVID-19. People lost loved ones. When it finally eased, there was a powerful urge to close that chapter. So, pandemic warnings now fall on ears that simply don’t want to hear them.
But messaging, I believe, should also be about audience and responsibility. When we talk about bird flu, some of the drivers, like dense industrial animal farming, aren’t within individual control.
So, messaging can’t just target the public: that would be unfair. I’m not sure what I can change about intensive pig-farming in the Netherlands. It has to reach policymakers, industry leaders, regulators, those who can actually change structural risk.
At the same time, once a situation escalates, the public does need clear warnings. Not sensational headlines, not dismissive reassurances, but gradual and clear communication of where we stand in terms of risk levels.
I believe decision-makers shouldn’t underestimate their audiences: we can handle the truth. And frequent messaging on these risks may help to normalise a broader idea: epidemics and pandemics are not one-off anomalies. They are recurring features of a connected world.
Preparedness shouldn’t create panic, but routine.
Q. From your expertise in warning research, can you explain how an initial flurry of warnings can fade away, getting lost in the chain before a crisis explodes?
What I examine is the space between spotting a problem and deciding what to do about it.
Too often, the first alerts come from people on the frontlines: an engineer who notices a defect on a plane, a physician who sees an odd cluster of cases, a resident who raises a safety concern about their building.
Those early warnings are often real and accurate. But once they enter the bureaucratic chain, they get filtered, softened and bundled with other data. Competing priorities surface, budgets tighten, and sometimes senior officials simply aren’t willing or able to listen amid a flood of other demands.
By the time the alert reaches someone with the power to act, its urgency has dulled, or it’s just one item among many. Warning systems rarely collapse entirely: oftentimes, some parts keep working while others jam. And so, the original signal doesn’t survive the journey intact.
Q. What are some of the common blind spots you see in warnings about imminent crises?
There are several recurring mechanisms, enough that I’m writing a book about them.
Some are psychological. We discount uncomfortable information, or we assume tomorrow will look like yesterday. We downplay low-probability threats, especially when responding would be costly. It’s how we’re wired.
Some are bureaucratic. Institutions move slowly, even when urgent risks are involved. Responsibilities are fragmented, and data aren’t standardised or shared. That really slows down warnings.
And then finally there’s politics. Some warnings challenge powerful interests. Warnings about bio-industry practices, about under-funding health institutions, about data transparency – those are not neutral messages. They carry economic and political implications. That makes them easier to muffle.
And in most cases, failure isn’t one dramatic version of these three. It’s the accumulation of small delays, diluted messages and inconvenient truths pushed aside.
Q. Your background is in intelligence – what lessons could global health experts learn from that sector in terms of threat assessment and how to amplify those signals to those who need to hear them?
Intelligence studies have long focused on the relationship between analysts and decision-makers: the people who produce warnings and the people who must act on them. So, there’s been a lot of thinking about how to present complex, uncertain threats in ways that decision-makers can grasp without oversimplifying them. There’s a lot of overlap there with public health intelligence.
Accuracy matters, of course. But delivering an accurate warning is only half the battle. A warning also has to be actionable.
That means spelling out the risk, the level of uncertainty, the likely fallout and the concrete choices on the table. A good warning offers options, but more importantly, it helps clarify which ones matter most. It connects analysis to decision. I think this is where there is room for gains in the health sphere.
Of course, that’s hard, because health threats are especially difficult to convey: they’re often invisible and highly technical.
A virus evolving in cattle or a statistical anomaly in surveillance data doesn’t feel tangible to a minister or executive. Translating that into something concrete, explaining not just what is happening, but what inaction might cost, is crucial.
Q. At a time when funding to strengthen health systems and their ability to detect and monitor risks is faltering, what effect is this likely to have on the ability to respond adequately should bird flu become a pandemic threat?
Surveillance is the glue that holds an early response together. If we cut back on it, we don’t erase the danger, we just make it take longer to spot.
When money dries up, data moves slower. Slower data means weaker signals, and weak signals rarely get passed on beyond the agencies that collect them. Also, you’re left with an incomplete picture, which makes it harder to decide what action is proportionate.
You lose the ability to prioritise, to distinguish a local flare-up from a wider shift and to target measures where they’ll do the most good. So, the system becomes a thinner safety net because it spots change later, and it is less confident about what it’s seeing.
By the time the warning is unmistakable, you’re often already in catch-up mode.
Q. How much does trust in authorities and governments affect whether threats are seen as credible, and do you think the way COVID-19 was sometimes handled means that people might be less willing to adhere to public health measures in a future pandemic?
Trust is the bedrock of any warning system, and lately that foundation is cracking.
In a healthy democracy, people share a basic sense of what counts as fact, what’s speculation and who gets to speak for science. Over the past few years, that common ground has eroded.
The COVID‑19 pandemic turned what should have been a straightforward public health response into a partisan battlefield, with politicians and pundits arguing over the virus even when solid data were scarce.
That clash gnawed away at confidence in the institutions meant to protect us, and warnings are becoming politicised.
Public health agencies that once were seen as neutral experts now appear as partisan actors, and even accurate alerts struggle to find a receptive audience. The pandemic, with all its debates over whether to wear a mask and people outright refusing to wear masks or abide by social distancing, showed how quickly a technical decision can morph into an identity marker.
It's normal in a democracy for people to disagree. But what’s concerning is that the very notion of “credible evidence” has splintered. One group might trust a scientific study, another might trust a social media post and a third might trust a political leader’s tweet.
Different groups now increasingly live in separate realities, each with its own definition of truth. Warning systems rely not just on data and expertise, but on a shared willingness to treat those signals as meaningful. So, when that shared belief frays, the whole chain of warning becomes fragile, and our ability to respond swiftly to the next crisis weakens dramatically.
