Trialing new use cases and ensuring robust governance are key to successful AI adoption in banking

As banks continue to pursue digital transformation and seek to leverage emerging technologies, the rapid evolution of artificial intelligence (AI) solutions presents a significant opportunity to increase operational efficiency and drive growth. However, the reality in Hong Kong is that the development and adoption of AI in banking are still in their infancy. While we believe that banks will shift gears in the coming year and implement AI-powered solutions more widely across their organisations, they also need to ensure that appropriate governance and controls are in place to manage the evolving risks these new technologies can bring.

Limited implementation of AI

In Hong Kong, we have observed that many AI use cases for banks continue to focus on simpler marketing or automation tasks rather than on more significant, judgment-based functions. However, to truly harness the power of AI, banks need to build and deploy solutions across the entire organisation that improve operations, remove pain points, drive revenue, strengthen risk management and enhance customer experience. To this end, we believe that banks need to experiment more with proofs of concept (POCs) and trial new technologies to solve problems and improve parts of their business, while keeping a broader platform mindset for the ultimate solution. We are already seeing use cases in the form of AI-powered natural language processing technology to improve regulatory compliance in areas such as sales conduct, sales suitability, market surveillance, order taking, pricing and disclosure.
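
To make the compliance use case concrete, the sketch below shows, in very simplified form, how a bank might screen sales-call transcripts for conduct issues with a basic text classifier. The transcripts, labels and scikit-learn pipeline are purely illustrative assumptions, not a description of any bank's production system.

# A minimal sketch of an NLP-based compliance screen, assuming the bank
# already holds labelled call transcripts (hypothetical data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: transcripts labelled 1 where a conduct issue was found.
transcripts = [
    "guaranteed returns, no risk at all on this product",
    "the product carries market risk and may lose value",
]
labels = [1, 0]

# Simple bag-of-words model; a real deployment would use far more data and review.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

# Score new conversations so higher-risk calls can be routed to a human reviewer.
new_calls = ["this fund is guaranteed to double your money"]
scores = model.predict_proba(new_calls)[:, 1]
for call, score in zip(new_calls, scores):
    print(f"risk score {score:.2f}: {call}")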

In the future, we expect talent within banks to be structured differently, with innovation teams, centres of excellence and pockets of people with data science capabilities scattered across the organisation and working on small POCs. Banks are also expected to increasingly work with start-ups, fintech firms and other third parties that offer AI solutions and can help banks build AI models. The proliferation of available AI technologies and service providers means that banks have a fast-growing number of avenues to develop their capabilities and a greater selection of potential use cases, but with that comes an increased need for third-party risk management to govern these new types of technology.

The importance of AI controls and governance frameworks

While this fast-growing ecosystem is undoubtedly a positive development, it also underscores the importance of having effective AI controls and governance frameworks in place to ensure that the associated risks are properly managed. For example, banks need to be able to explain AI-powered decisions to all relevant parties, and show how the underlying models were understood, managed and tested at each stage.
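
As a simple illustration of what explaining an AI-powered decision can mean in practice, the sketch below decomposes one applicant's score from a linear model into per-feature contributions that a reviewer could read. The model, feature names and values are hypothetical assumptions for illustration; more complex models would need dedicated explainability tooling.

# A minimal sketch of per-decision explainability for a linear scoring model;
# feature names and values are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_10k", "debt_ratio", "years_with_bank"]
X = np.array([[5.0, 0.40, 2],
              [9.0, 0.10, 8],
              [3.0, 0.65, 1],
              [7.0, 0.30, 5]])
y = np.array([0, 1, 0, 1])  # hypothetical historical decisions used for training

model = LogisticRegression().fit(X, y)

# For one applicant, show each feature's contribution to the score
# (coefficient x feature value), a readable attribution for a reviewer.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.3f}")
print("intercept:", round(float(model.intercept_[0]), 3))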

The challenge for banks in Hong Kong is that the development of AI applications is arguably ahead of the governance needed to monitor and control them. Regulators have yet to issue a comprehensive set of specific rules, although there have been some developments in recent months. Last year, the Hong Kong Monetary Authority (HKMA) and the Monetary Authority of Singapore (MAS) issued closely related circulars that, in broad strokes, set out guiding principles on the use of big data analytics and AI in the banking industry. Building on these principles, banks are taking a fresh look at how they approach this evolving topic. While some banks have established dedicated teams to provide governance oversight for AI, this oversight is not nearly as mature as the applications themselves.

To this end, KPMG’s ‘AI in Control’ framework, which is supported by a set of methods, tools and assessments, can help banks generate value from AI technologies while addressing their inherent challenges, such as integrity, explainability, fairness and resilience. The framework enables banks to develop a responsible AI programme, and to build and evaluate sound AI models that drive better adoption, confidence and compliance. The key to managing AI and its associated risks is for banks to have a comprehensive understanding of who developed each algorithm, the value the technology currently delivers and how it fits into the overall business strategy. Addressing these key inherent risks should help foster transparency and confidence in AI, and serve as a foundation for innovation and new use cases.
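
To illustrate the kind of record such governance relies on, the sketch below shows a generic model-register entry capturing who developed an algorithm, its business purpose and its validation status. This is an illustrative assumption of a common governance practice, not a description of KPMG's 'AI in Control' framework itself; all names and fields are hypothetical.

# A generic, illustrative model-register entry (not KPMG's framework).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    name: str
    owner: str                     # team accountable for the model
    developer: str                 # who built the algorithm (internal team or vendor)
    business_purpose: str          # how the model fits the overall strategy
    data_sources: List[str] = field(default_factory=list)
    last_validated: str = ""       # date of the latest independent validation
    known_limitations: List[str] = field(default_factory=list)

record = ModelRecord(
    name="retail-credit-scoring-v2",
    owner="Retail Risk",
    developer="External vendor (hypothetical)",
    business_purpose="Pre-screen unsecured lending applications",
    data_sources=["core banking", "credit bureau"],
    last_validated="2020-01-15",
    known_limitations=["limited history for thin-file customers"],
)
print(record.name, "-", record.owner)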

Another key consideration for the effective application of AI in banking is the overall quality of data, along with a thorough understanding of and control over that data. An algorithm may have been written the right way, but if inconsistent data is used to train it, the results will be unpredictable. Furthermore, if banks are not aware of the quality or integrity of their data, they are unlikely to recognise discrepancies in outcomes or see the effects of unintentional bias.
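
The sketch below gives a minimal flavour of the data-quality and bias checks implied here: flagging missing values before training and comparing outcomes across groups. The dataset and the disparity threshold are hypothetical assumptions for illustration only.

# A minimal sketch of basic data-quality and bias checks on training data,
# using a hypothetical loan dataset; the threshold is illustrative only.
import pandas as pd

df = pd.DataFrame({
    "income":   [50_000, None, 30_000, 70_000, 65_000, 40_000],
    "approved": [0, 1, 0, 1, 1, 0],
    "gender":   ["F", "M", "F", "M", "M", "F"],
})

# Data quality: flag columns with missing values before the data is used for training.
missing = df.isna().mean()
print(missing[missing > 0])

# Simple bias check: compare approval rates across groups; a large gap
# warrants investigation before a model is trained on this data.
rates = df.groupby("gender")["approved"].mean()
print(rates)
if rates.max() - rates.min() > 0.2:
    print("Warning: approval rates differ materially across groups")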

A focus on trust and education

While trialing new technologies and ensuring effective governance and controls are key, banks also need to create the right environment for the application of AI in banking to really take off. Banks need to consider whether they have the right infrastructure and storage, buy-in from senior management, and proper education and trust – not just among internal staff and stakeholders, but also from customers – in the AI technologies being used. The banks that can put all of these pieces of the puzzle in place will be best positioned to see their AI solutions thrive, and to realise tangible benefits across their organisation.

If you would like to know more about our perspective on Hong Kong’s banking sector in 2020, please read the full report here.

This article was co-authored with Bradley Scheepers, Associate Director, Data & Analytics (Technology Advisory), KPMG China
