Bridging the Knowledge Gap: How AI Helps—and Fails—to Quantify Black Swan Risks
When it comes to risk management, “Black Swan” events capture our imagination. These are rare, high-impact occurrences that no one sees coming—precisely because our knowledge is incomplete.
Recently, I explored whether we should prepare for Black Swans at all and how Artificial Intelligence (AI) can help quantify risk.
My conclusion: while AI enables faster, more approachable modeling, it can’t magically turn unpredictable events into predictable ones.
In this newsletter, I’ll explain why even the best models fail when knowledge is incomplete, why we shouldn’t give up on quantification altogether, and how AI can help bridge some of our knowledge gaps.
Black Swans: An Issue of Incomplete Knowledge
Let’s start with the concept of a Black Swan, popularized by Nassim Nicholas Taleb. A Black Swan is something that defies our ability to predict because it arises from areas where our understanding is drastically limited.
When we look back at these events—such as catastrophic natural disasters, sudden market collapses, or unprecedented system failures—they might seem explainable in hindsight, but beforehand, our models rarely capture them.
The big question:
If these events come from incomplete knowledge, why even try to model them at all?
The answer is twofold.
First, we need to acknowledge that true Black Swan events are, by definition, outside the scope of our present models, no matter how advanced. (Then again, the label is observer-dependent: what blindsides one person may be no Black Swan at all to another.)
Second, risk quantification still matters because it reduces our knowledge gap around the less extreme but still significant high-impact, low-probability (HILP) events.
By better understanding the risks we can see, we free up resources to focus on strengthening overall resilience, making us more robust in the face of unknowns.
The Limitations of Risk Quantification for Extreme Events
One of the key points I made in my LinkedIn post is that when we try to quantify extreme risks (like an extended airport power outage), our standard models can miss crucial details.
The more extreme the event, the less room we have to overlook even a single variable, because that one variable might completely change the outcome.
For example, imagine modeling a rare, once-in-15-years power outage at Heathrow.
If we forget to include a critical factor—say, the effect of overlapping disruptions with air traffic control systems or international security protocols—we may underestimate the true cost or probability.
In a regular scenario, omitting a small factor might not hurt your model’s accuracy too much.
But for extremely low-frequency, high-impact events, that small oversight can balloon into a major blind spot.
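To make this concrete, here is a minimal Monte Carlo sketch in Python. Every number in it (the outage frequency, the cost distribution, the 20% overlap rate, the 5x multiplier) is invented purely for illustration; the point is only to show how leaving out one interacting factor shifts both the expected loss and the tail estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

# An outage occurs in a given year with probability ~1/15 (assumed).
outage = rng.random(N) < (1 / 15)

# Direct cost of an outage in millions (lognormal: skewed, strictly positive).
base_cost = rng.lognormal(mean=3.0, sigma=0.6, size=N)

# The "forgotten" factor: in some outages, an overlapping air traffic
# control disruption multiplies the impact (both figures assumed).
atc_overlap = rng.random(N) < 0.20
multiplier = np.where(atc_overlap, 5.0, 1.0)

loss_naive = np.where(outage, base_cost, 0.0)              # factor omitted
loss_full = np.where(outage, base_cost * multiplier, 0.0)  # factor included

for label, loss in [("naive", loss_naive), ("full", loss_full)]:
    print(f"{label:5s} model: mean annual loss {loss.mean():6.2f}M, "
          f"99.9th percentile {np.percentile(loss, 99.9):8.2f}M")
```

With these toy parameters, the two models agree in most simulated years (no outage, no loss) and diverge exactly where it matters: in the tail.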
The key reason lies in the distinction between fat-tailed (often governing extreme events) and thin-tailed (e.g., Gaussian) distributions.
In a Gaussian world, every additional data point refines the overall picture incrementally, so missing one observation typically does not drastically alter the model.
However, in a fat-tailed environment where extreme events dominate, you might see that 90% of all observations are relatively minor, but a single outlier is extraordinarily large.
If that critical outlier is absent from your dataset, the entire risk profile can be wildly off, because just one massive event can define the tail behavior and lead to a significant underestimation of real-world risk.
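A short, self-contained way to see this difference: draw samples from a thin-tailed and a fat-tailed distribution, then ask how much the estimated mean moves if the single largest observation happens to be missing from the dataset. (The distributions and parameters below are illustrative choices, not a claim about any specific risk.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Thin-tailed sample: Gaussian.
gaussian = rng.normal(loc=10.0, scale=2.0, size=n)

# Fat-tailed sample: classical Pareto with tail index alpha = 1.1
# (numpy's pareto() is shifted, hence the +1, then scaled to minimum 10).
pareto = (rng.pareto(a=1.1, size=n) + 1.0) * 10.0

for name, sample in [("Gaussian", gaussian), ("Pareto", pareto)]:
    trimmed = np.sort(sample)[:-1]  # drop only the single largest value
    shift = abs(sample.mean() - trimmed.mean()) / sample.mean() * 100
    print(f"{name:8s}: mean {sample.mean():10.2f} -> {trimmed.mean():10.2f} "
          f"({shift:.3f}% shift from one missing observation)")
```

In the Gaussian case, the shift is negligible. In the Pareto case, removing that one observation typically moves the estimated mean noticeably, sometimes by a large fraction, because the tail rather than the bulk carries the risk.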
This highlights a critical truth:
As the potential impact of an event grows, so does the need for complete, high-quality data—and we’re never guaranteed we have everything we need. That’s why, no matter how sophisticated our tools are, we must remain humble about what our models can and cannot tell us.
How AI Helps—And How It Doesn’t
AI can sift through vast amounts of data, automate manual searches, and even self-correct as it learns from new information. When I researched historic airport outages, I relied on AI tools to do exactly that: sift through scattered sources, automate the searching, and cross-check findings as new information surfaced.
However, AI doesn’t make risk quantification a better science—at least not for the most extreme and unpredictable threats.
It makes modeling more approachable, allowing us to ask more “what if?” questions and rapidly iterate on different assumptions.
This is a step forward because it encourages more people to engage with risk data and develop a more intuitive grasp of uncertainty.
But we can’t let the promise of AI lull us into believing we now have a crystal ball.
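To illustrate what that rapid iteration looks like in practice, here is a small sketch: one toy loss model, re-run under a handful of alternative assumptions. Every parameter is hypothetical; the value lies in how cheap it becomes to compare "what if?" variants side by side.

```python
import numpy as np

def simulate_annual_loss(freq, cost_mu, cost_sigma, n=50_000, seed=1):
    """Toy rare-event model: annual losses in millions (all inputs assumed)."""
    rng = np.random.default_rng(seed)
    occurs = rng.random(n) < freq
    cost = rng.lognormal(cost_mu, cost_sigma, size=n)
    return np.where(occurs, cost, 0.0)

# Three "what if?" variants of the same model.
scenarios = {
    "base case":          dict(freq=1 / 15, cost_mu=3.0, cost_sigma=0.6),
    "outages more often":  dict(freq=1 / 10, cost_mu=3.0, cost_sigma=0.6),
    "heavier severities":  dict(freq=1 / 15, cost_mu=3.0, cost_sigma=1.2),
}

for name, params in scenarios.items():
    loss = simulate_annual_loss(**params)
    print(f"{name:18s} mean={loss.mean():5.2f}M  "
          f"p99.9={np.percentile(loss, 99.9):8.2f}M")
```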
Why Quantify Risks if Models Can Fail?
If we accept that truly extreme events stem from incomplete knowledge, and thus our models can fail, why bother? Here’s why:
Black Swans, by definition, live in the blind spots of our current knowledge.
Risk quantification, even aided by AI, can’t eliminate those blind spots—but it can shrink them.
By leveraging AI’s data-processing power, we make modeling more accessible, faster, and more collaborative. This lets us explore a broader array of “what if” scenarios with relative ease.
Yet we must remain aware that the more extreme the event, the less confident we can be in any single model’s predictions.
AI is a valuable ally, but it’s no magic bullet for incomplete knowledge. Our best strategy is to use AI and risk quantification techniques to learn as much as we can about the risks within our grasp—while staying vigilant and building flexible resilience for the surprises that still lie beyond our horizons.
Thank you for reading!
I hope this deep dive clarifies why AI is both a powerful tool and a reminder of our inevitable limits.
If you have questions or want to explore these ideas further, feel free to reach out or leave a comment. And if you haven’t already, make sure to subscribe for future insights on risk management, AI, and more.
Stay curious, stay prepared, and—above all—stay resilient.
Marco
P.S. If you want to learn how to do all of the above—including risk quantification—even without prior technical knowledge, my course is perfect for you.
You’ll gain all the essential skills to start your journey in under 6 hours.
Join the waitlist for my April cohort!
Comments

Chief Investigator at Aviation Safety Investigations · 1mo
Are you mixing concepts? Identification of rare events with their quantification. Obviously, if you do not identify an event, you are not going to quantify it. Are you absolutely sure that nobody could have identified the event? Was it just that nobody was listening to the one who identified it?
Retired · 1mo
You still appear to be limiting your analysis of a risk event’s probability to data about that event itself. There is a lot more information about an event’s probability available from analyzing the mechanism that produces it.
I help Risk & Resilience Managers build unique knowledge to become a top 1% Resilience Engineer, with innovative but proven Resilience Strategies | Master Risk, Resilience, Antifragility & Complexity
1moOsama Salah I totally agree on the importance of resilience when it comes to extreme events. Regarding the factors, perhaps I wasn’t clear enough in my post. What I meant is that for fat-tailed probability distributions (i.e., rare extreme events), you need to capture all relevant factors—because you can’t know in advance if you captured the most important one (the big impact event) Think of it like measuring wealth: if your sample doesn’t include someone like Elon Musk, your distribution won’t reflect the full range of possibilities. Theoretically you can't be sure until you looked at everything. This is the problem of induction.
Marco Felsberger, for extreme events you refer to “critical factors,” but for a “regular scenario” you refer to “a small factor.” Is that a fair comparison? I would say that, as a matter of definition, a “critical factor” is always a critical factor, regardless of whether the scenario is “extremely low-frequency, high-impact” or not. It’s just that some risk impacts aren’t a big deal and can be dealt with if we get them wrong, while others will have a big impact. I believe dealing with black swans is in the realm of resilience engineering rather than risk management, again because we don’t recognize black swans until we are confronted with them. Resilience does exactly that: it helps us “be prepared to be unprepared.”
Structured Solutions Architect at Causal Capital · 1mo
This is an interesting dissection of Black Swan events and risk modelling. Thanks for sharing, amazing ideas. “These are rare, high-impact occurrences that no one sees coming—precisely because our knowledge is incomplete.” — Black Swan events aren’t just rare; they are completely novel, and they have systemic impacts. AI will need serious prompt engineering to make head or tail of them, but then humans have little hope of making sensible assessments of them either. That said, we are in the realm of black swans at the moment: there are enough paradoxical shifts going on, geopolitically, sociologically, economically and environmentally, that the possibility of a black swan event arising in the next decade is high.