Knowing the Difference*
Do we understand, and can we rely on, the decisions algorithms are making?


'Aggressive and riskier' AI systems have been blamed for large-scale AI disasters. In some instances, faulty sensor data was identified as the cause: the algorithms were doing their jobs, making decisions that were appropriate for the inputs they were receiving, but those inputs were bad. The element of AI surprise, however, remained top of mind for many people and didn't fade quickly.

In November 2019, reports that credit limits were being assigned very differently to male and female applicants raised questions of gender profiling and bias in credit scoring. This led to a high-profile Twitter engagement, with Bloomberg asking 'What's in the black box?', illustrating how difficult it is for users, and sometimes even those offering the service, to explain how decisions are being made.
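
One way to surface this kind of problem before customers do is to test model outputs for group-level disparities. Below is a minimal sketch of such a smoke test in Python; the column names, sample data and threshold are illustrative assumptions of mine, not details from any real lender's system:

```python
import pandas as pd

def check_outcome_disparity(df: pd.DataFrame,
                            group_col: str,
                            outcome_col: str,
                            max_ratio: float = 1.25) -> pd.Series:
    """Compare mean model outcomes across groups and flag large gaps.

    A crude fairness smoke test: if the best-treated group's mean
    outcome exceeds the worst-treated group's by more than max_ratio,
    the model (or its input data) deserves a closer look before launch.
    """
    means = df.groupby(group_col)[outcome_col].mean()
    ratio = means.max() / means.min()
    if ratio > max_ratio:
        print(f"WARNING: {outcome_col} differs by {ratio:.2f}x across "
              f"{group_col} groups - investigate before deployment.")
    return means

# Hypothetical scored applications; 'gender' and 'credit_limit'
# are assumed column names for illustration only.
applications = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "credit_limit": [4000, 9000, 5000, 11000, 4500, 10000],
})
check_outcome_disparity(applications, "gender", "credit_limit")
```

A check this simple won't explain what is inside the black box, but it would have flagged the pattern in question long before it became a Twitter story.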


A Surprising Move

Not all AI surprises have been negative, though. In 2016, AlphaGo, a computer Go program developed by Google DeepMind, played world champion Go player Lee Sedol in a series of widely publicised matches.

On the 37th move of the second game, AlphaGo made a play that caught even the world's best Go players completely by surprise. It was so out of the ordinary that many thought it was a mistake. AlphaGo went on to win the game, and Move 37 has come to define the way businesses now see AI. It is these seemingly hidden opportunities that, when correctly identified by AI, can change the course of a company. These are the ultimate moves we aim for when using AI and data science.


Helping to Solve the Problem with AI Governance

The obvious question, then, is how do we know when to trust a Move 37 while avoiding the decisions that can lead to reputational damage or worse? I think AI and model-building governance has a role to play here.

AI governance, done correctly, aims to make data analytics models trustworthy throughout the organisation by ensuring that models, and the inputs they consume, are built and used in accordance with a defined framework. These frameworks should mitigate risk and identify potential errors while providing guidance on how to build and deploy models. One error that must be carefully addressed is faulty input data.
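
As an illustration of the faulty-input-data point, a governance framework might require every record to pass basic validation before a model is allowed to score it. Here is a minimal sketch, assuming a feature schema of my own invention rather than any specific framework:

```python
import math

# Illustrative schema: allowed ranges per input feature. In a real
# governance framework these would come from documented data standards.
FEATURE_RANGES = {
    "speed_kmh": (0.0, 400.0),
    "altitude_m": (-500.0, 15000.0),
    "temperature_c": (-90.0, 60.0),
}

def validate_inputs(record: dict) -> list[str]:
    """Return a list of problems found in one input record.

    Catches the failure mode described earlier: a sensor feeding the
    model values that are missing, not numbers, or out of range.
    """
    problems = []
    for feature, (lo, hi) in FEATURE_RANGES.items():
        value = record.get(feature)
        if value is None or (isinstance(value, float) and math.isnan(value)):
            problems.append(f"{feature}: missing")
        elif not (lo <= value <= hi):
            problems.append(f"{feature}: {value} outside [{lo}, {hi}]")
    return problems

# A faulty sensor reading should be flagged, not silently scored.
reading = {"speed_kmh": 120.0, "altitude_m": 99999.0, "temperature_c": None}
issues = validate_inputs(reading)
if issues:
    print("Do not score; escalate per governance policy:", issues)
```

The point is not the specific checks but where they sit: in front of the model, as a mandated step, so that a bad sensor produces an escalation rather than a confident wrong decision.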

Knowing the Difference

When data scientists combine proper AI governance, sound data management and a defined analytics framework to build, deploy and manage models, we will be one step closer to telling the difference between outcomes that should be ignored, those that should guide us, and those that will enable us to make our own Move 37s.

*This article is based on a Fortune magazine article I read recently.
