AI & Machine Learning: Just How Wrong Are We?
In the modern age, when it comes to Artificial Intelligence and Machine Learning, a certain allure of magic seems to envelop our consciousness. Like magicians pulling rabbits from hats, we input data, and these 'magical' algorithms produce an answer. But as the curtains draw back, we're compelled to ask: "How much do we really believe in these answers?" And more critically: "How wrong could we possibly be?"
The Illusion of Right Answers:
To begin, let's step back for a moment. Each time you use a voice-activated assistant, a recommendation engine, or even predictive text, you're witnessing the wonders of AI and ML in action. The sheer convenience and rapidity with which these tools operate have made them integral to our lives. Yet the real question isn't how quickly or efficiently they can give an answer, but how much trust we should place in those answers.
The Critical Issue: Should We Believe?
"Believe" – it's a powerful word, isn’t it? It's more than just accepting what's presented; it's about placing trust and confidence. So, when your ML model predicts stock prices or a medical diagnosis tool predicts a potential illness, to what extent should you believe in its accuracy?
Trust in the World of Predictions:
Let's delve deeper into the realm of these predictions. Imagine, for a moment, an AI-driven stock market prediction tool. The stock market, by its very nature, is volatile, influenced by countless factors, from global politics to weather patterns. When such an AI model suggests that a particular stock will rise by 15% in the next month, it's not just presenting a number. It's offering a vision of the future, one that can impact financial decisions, retirement plans, and more.
Similarly, contemplate a medical diagnosis AI tool. A prediction here can spell the difference between health and illness, relief and anxiety. When this tool suggests the presence of a potential ailment, it’s not merely a suggestion—it's a life-altering revelation.
Navigating the Seas of Uncertainty:
This brings us back to the Gaussian Process Regression model depicted in Figure 1. To the uninitiated, it's a graph with a solid line surrounded by a shaded area. But to those in the know, it offers a profound insight. The solid line represents the model's prediction, an average of sorts. The shaded area, which captures the 95% confidence interval, represents the range within which the actual truth likely lies. It acknowledges that the model, as sophisticated as it may be, cannot be entirely sure. Notice, for example, that close to x = 10 the shaded band grows very wide; in that region, you should not put much faith in the solid "average" line.
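For readers who want to see what produces such a plot, here is a minimal sketch in Python using scikit-learn's GaussianProcessRegressor. The sine-shaped toy data, the kernel choice, and the noise level are illustrative assumptions, not the data behind Figure 1:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Sparse, noisy training data; the model is most uncertain in the gaps.
X_train = rng.uniform(0, 12, size=(15, 1))
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.2, size=15)

# RBF kernel for smooth structure, WhiteKernel for observation noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# The mean is "the solid line"; the standard deviation gives "the shaded area".
X_test = np.linspace(0, 12, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% confidence band

# Wherever the band is wide, the mean deserves less trust.
i = np.argmin(np.abs(X_test.ravel() - 10.0))
print(f"95% band width at x = 10: {upper[i] - lower[i]:.2f}")
```

The key point is the pair returned by predict: a mean and a standard deviation, from which the confidence band follows directly.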
The Dangerous Apathy Towards Uncertainty:
However, herein lies the issue: a vast majority seem to sideline this uncertainty. For many, as long as there's an answer – even if potentially wrong – it's accepted without much scrutiny. This complacency stems from a societal hunger for rapid solutions without delving deep into the potential inaccuracies.
Moreover, on the producers' side, the lure of simplifying tasks often overshadows the need to report how reliable a prediction actually is. Why go the extra mile to compute uncertainty when users seemingly don't care? But here's the catch: users should care, producers should compute it, and the extra mile is often shorter than it looks, as shown below.
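To make that concrete, here is a hedged sketch of one of the cheapest ways to attach uncertainty to almost any model: a bootstrap ensemble. The model choice and toy data are illustrative assumptions, not a recommendation for any particular domain:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=200)

# Fit the same model on many bootstrap resamples of the data.
models = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx]))

# The spread of the ensemble's answers is a rough uncertainty estimate.
X_new = np.array([[2.0], [9.5]])
preds = np.stack([m.predict(X_new) for m in models])  # shape: (50, 2)
mean, std = preds.mean(axis=0), preds.std(axis=0)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x = {x:.1f}: prediction {m:.2f} +/- {1.96 * s:.2f}")
```

Fifty refits of a small model cost little, and the spread of their answers immediately flags the inputs where the point prediction should not be taken at face value.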
The Call for Mandatory Uncertainty Measures:
The sheer reliance on AI and ML in critical sectors – healthcare, finance, security, among others – underscores the need to treat these tools with caution. Given their widespread use, it's high time that disclosing uncertainty became mandatory.
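What might mandatory disclosure look like in practice? Purely as a hypothetical sketch (the Prediction class and predict_with_interval helper are invented for illustration, not any standard API), one option is to make the prediction type itself refuse to exist without its interval:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prediction:
    value: float     # the point estimate ("the solid line")
    lower_95: float  # lower bound of the 95% interval
    upper_95: float  # upper bound ("the shaded area")

    def __post_init__(self):
        # A point estimate with no valid interval is rejected outright.
        if not (self.lower_95 <= self.value <= self.upper_95):
            raise ValueError("point estimate must lie inside its interval")

def predict_with_interval(mean: float, std: float) -> Prediction:
    # Gaussian approximation: the 95% interval is roughly mean +/- 1.96 * std.
    return Prediction(mean, mean - 1.96 * std, mean + 1.96 * std)

print(predict_with_interval(mean=0.42, std=0.15))
```

Bundling the interval into the return type means a downstream consumer cannot read the number without at least seeing its error bars.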
This can be achieved in two main ways:
Short Takeaways:
In conclusion, while AI and ML might seem like magic, it's essential to remember that even magicians have secrets behind their tricks. The real magic lies not just in getting an answer but in understanding, interpreting, and gauging how much we should trust it.