AI & Machine Learning: Just How Wrong Are We?
Header image: DALL·E @ Bing


In the modern age, a certain allure of magic seems to envelop Artificial Intelligence and Machine Learning. Like magicians pulling rabbits from hats, we input data, and these 'magical' algorithms produce an answer. But as the curtain draws back, we're compelled to ask: "How much do we really believe in these answers?" And, more critically: "How wrong could we possibly be?"

The Illusion of Right Answers:

To begin, let's step back for a moment. Each time you use a voice-activated assistant, a recommendation engine, or even predictive text, you're witnessing the wonders of AI and ML in action. The sheer convenience and rapidity with which these tools operate have made them integral to our lives. Yet the real question isn't how quickly or efficiently they can give an answer, but how much trust we place in those answers.

The Critical Issue: Should We Believe?

"Believe" – it's a powerful word, isn’t it? It's more than just accepting what's presented; it's about placing trust and confidence. So, when your ML model predicts stock prices or a medical diagnosis tool predicts a potential illness, to what extent should you believe in its accuracy?

Trust in the World of Predictions:

Let's delve deeper into the realm of these predictions. Imagine, for a moment, an AI-driven stock market prediction tool. The stock market, by its very nature, is volatile, influenced by innumerable factors, from global politics to weather patterns. When such an AI model suggests that a particular stock will rise by 15% in the next month, it's not just presenting a number. It's offering a vision of the future, one that can impact financial decisions, retirement plans, and more.

Similarly, contemplate a medical diagnosis AI tool. A prediction here can spell the difference between health and illness, relief and anxiety. When this tool suggests the presence of a potential ailment, it’s not merely a suggestion—it's a life-altering revelation.

Navigating the Seas of Uncertainty:

This brings us to the Gaussian Process Regression model depicted in Figure 1. To the uninitiated, it's a graph with a solid line surrounded by a shaded area. But to those in the know, it offers a profound insight. The solid line represents the model's prediction, an average of sorts. The shaded area, which captures the 95% confidence interval, represents the range within which the actual truth likely lies. It acknowledges that the model, as sophisticated as it may be, cannot be entirely sure. Notice, for example, that close to x = 10 the variability is very high: there, you should not take the solid "average" line at face value.

Figure 1: Gaussian Process Regression model prediction. The shaded area shows the range of values that could plausibly be "true" (the 95% confidence interval); the solid line shows the model's mean prediction.
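
To make this concrete, here is a minimal sketch of how a plot like Figure 1 is typically produced, assuming scikit-learn's GaussianProcessRegressor. The actual code and data behind the figure are not shown in the article, so the kernel choice and toy dataset below are illustrative assumptions; the training data deliberately leaves a gap near x = 10 so that the 95% band widens exactly where the figure warns us not to trust the mean.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy training data: a noisy sine curve with a deliberate gap around x = 10,
# where the model has little evidence and its uncertainty should grow.
X_train = np.concatenate([rng.uniform(0, 8, 30), rng.uniform(12, 15, 10)]).reshape(-1, 1)
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.2, len(X_train))

# An RBF kernel for the smooth trend, plus a white-noise term for observation noise.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

# The mean prediction (the "solid line") and its standard deviation.
X_test = np.linspace(0, 15, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)

# The shaded band: a 95% interval is approximately mean +/- 1.96 * std.
lower, upper = mean - 1.96 * std, mean + 1.96 * std

# Near the data gap at x = 10, the band is wide: the mean alone is not trustworthy.
i = np.argmin(np.abs(X_test.ravel() - 10.0))
print(f"at x = 10: mean = {mean[i]:.2f}, 95% interval = ({lower[i]:.2f}, {upper[i]:.2f})")
```

The key point of the sketch is the last two lines: the same call that gives you the prediction also gives you the standard deviation, so ignoring uncertainty is a choice, not a technical limitation.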


The Dangerous Apathy Towards Uncertainty:

However, herein lies the issue: a vast majority of users seem to sideline this uncertainty. For many, as long as there's an answer, even a potentially wrong one, it's accepted without much scrutiny. This complacency stems from a societal hunger for rapid solutions, with little appetite for probing their potential inaccuracies.

Moreover, on the producers' side, the lure of simplifying tasks often overshadows the need to quantify uncertainty. Why go the extra mile to compute it when users seemingly don't care? But here's the catch: users should care, and so should producers.

The Call for Mandatory Uncertainty Measures:

The sheer reliance on AI and ML in critical sectors – healthcare, finance, security, among others – underscores the need to treat these tools with caution. Given their widespread use, it's high time that the disclosure of uncertainty becomes mandatory.

This can be achieved in two main ways:

  1. Legislation: Governments can draft laws mandating that AI and ML algorithms, especially those in critical sectors, disclose their uncertainty levels. This way, users are fully aware of potential inaccuracies and can make informed decisions.
  2. Quality Standards: Industry bodies can introduce standards akin to ISO certifications, where algorithms that comply with uncertainty disclosure earn a quality stamp. This not only ensures transparency but can also serve as a distinguishing factor for users seeking reliable algorithms; a sketch of what such disclosure could look like in code follows below.
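
What would "disclosure" look like in practice? Here is a hedged sketch assuming a hypothetical response schema; the DisclosedPrediction type and its field names are illustrative inventions, not any existing standard. The idea is simply that every prediction ships with its interval, so a consumer cannot read the number without also seeing the doubt.

```python
from dataclasses import dataclass

# A hypothetical schema for mandatory uncertainty disclosure: the point
# estimate is never delivered without its interval and provenance.
@dataclass
class DisclosedPrediction:
    value: float       # the point estimate (the "solid line")
    lower_95: float    # lower bound of the 95% confidence interval
    upper_95: float    # upper bound of the 95% confidence interval
    method: str        # how the interval was estimated, e.g. "gaussian_process"

def report(p: DisclosedPrediction) -> str:
    """Render a prediction so that its uncertainty is impossible to overlook."""
    return (f"{p.value:+.1%} expected "
            f"(95% interval: {p.lower_95:+.1%} to {p.upper_95:+.1%}, via {p.method})")

# The article's stock example: a predicted 15% rise with a wide interval.
print(report(DisclosedPrediction(0.15, -0.05, 0.35, "gaussian_process")))
```

Printed, this yields "+15.0% expected (95% interval: -5.0% to +35.0%, via gaussian_process)", a very different message from a bare "15%": the plausible range here includes a loss.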

Short Takeaways:

  • AI and ML, as potent as they are, come with their own set of uncertainties.
  • Bayesian Statistics and Gaussian Processes are prime examples of approaches that explicitly quantify this uncertainty.
  • A blind acceptance of answers without considering potential errors is risky.
  • Mandating the disclosure of uncertainty, either through laws or quality standards, can enhance the reliability and trustworthiness of AI and ML tools.

In conclusion, while AI and ML might seem like magic, it's essential to remember that even magicians have secrets behind their tricks. The real magic lies not just in getting an answer but in understanding, interpreting, and gauging how much we should trust it.

Comments:

Muhammad Ameer, PhD Industrial Design Engineering:

Very interesting article, Daniel. But I think this fear of uncertainty is much exaggerated. Nothing is absolute, especially in predictive models. You are forgetting the importance of boundary conditions: basically, they keep the errors within a safe zone. So as long as the boundary conditions are logical and complete, we can trust AI models.
