Harnessing AI for Digital Transformation: Key Opportunities and Challenges.

AI's role in successful digital transformation is the subject of a promising and evolving discourse. The conversation about AI is not new, but it is experiencing a resurgence, driven primarily by major corporations and reminiscent of the rise of big data in the early 2000s. This resurgence brings with it a wave of optimism about the potential of AI to enhance organisational performance and productivity.

The Role and Impact of AI in Business Operations

In recent years, the business community has engaged in considerable discussion about the potential of AI to enhance digital transformation initiatives. Miklošík and Evans (2020) note that AI is a rapidly evolving field with the potential to significantly impact digital transformation across sectors. The generative AI boom is well underway: a recent McKinsey study found that 'automation integrated with generative AI could accelerate 29.5 per cent of working hours in the US economy' (CIO 2023). Companies such as OpenAI, Microsoft, Google, Amazon, and Meta are investing billions in generative AI.

Much has been written about how generative AI can significantly enhance business operations, from improving customer experiences through chatbots and virtual assistants to boosting employee productivity with streamlined workflows. Its use cases span content creation, media production, business applications, healthcare, education, finance, and legal and compliance; the list goes on, and few sectors will be untouched. Consequently, the rapid advancement of generative AI has sparked significant fears of job losses as AI systems become increasingly capable of performing tasks traditionally handled by people, and there is growing concern that these technologies could lead to widespread unemployment. This apprehension is compounded by the speed at which AI is developing, which may outpace the ability of the workforce to adapt and reskill.

Addressing Bias, Transparency, and Data Quality in AI

The integration of AI in digital transformation is inescapable and continues to reshape industry sectors, bringing both opportunities and challenges. AI is a moving target, and it is challenging for business leaders to stay focused in a constantly advancing area (Chui et al., 2018). Managing data effectively is a significant obstacle to harnessing the value of generative AI: in a recent McKinsey survey, 70 per cent of top performers reported difficulty integrating data into AI models, citing problems with data quality, governance processes, and a lack of adequate training data.

The consequences of using poor-quality data in generative AI models are severe, leading to poor outcomes, expensive corrections, cyber breaches, and loss of user trust. Traditional methods for ensuring data quality are inadequate, necessitating improved and expanded data sources and advanced tools like knowledge graphs to enhance model accuracy and consistency (Tavakoli et al., 2024).
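To make this concrete, the minimal Python sketch below shows the kind of automated quality gate that might sit in front of a generative AI data pipeline, screening for the missing-value and duplication problems survey respondents describe. The function name and thresholds are illustrative assumptions, not a reference implementation from any of the sources cited here.

```python
import pandas as pd

# Illustrative thresholds; a real pipeline would tune these per dataset.
MAX_NULL_RATIO = 0.05       # worst tolerated share of missing values in any column
MAX_DUPLICATE_RATIO = 0.01  # worst tolerated share of fully duplicated rows

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Run simple data-quality checks before data reaches a model pipeline."""
    null_ratio = float(df.isna().mean().max())  # worst column's missing-value share
    dup_ratio = float(df.duplicated().mean())   # share of exact duplicate rows
    return {
        "null_ratio": null_ratio,
        "duplicate_ratio": dup_ratio,
        "null_ok": null_ratio <= MAX_NULL_RATIO,
        "duplicates_ok": dup_ratio <= MAX_DUPLICATE_RATIO,
    }

if __name__ == "__main__":
    # Hypothetical customer records with one duplicate row and missing values.
    sample = pd.DataFrame({"customer": ["a", "b", "b", None],
                           "spend": [10, 20, 20, None]})
    print(basic_quality_report(sample))
```

Checks like these address only record-level hygiene; the knowledge graphs mentioned above go further by validating facts and relationships across sources, which is where much of the accuracy and consistency gain is claimed.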

Critics point out that current AI systems are not well-equipped to handle the complex decision-making processes required in specific sectors. The limitations of AI in understanding contextual nuances and making ethical decisions pose significant challenges for digital transformation. A primary concern lies in AI's objective to replicate or emulate intelligence, which remains contested and unclear (Korienek & Uzgalis 2002): whether AI should aim to encompass all human mental abilities, including intuition and emotions, and whether such an endeavour is feasible or desirable. Kissinger et al. (2021) note that while AI can draw conclusions, make predictions, and make decisions, it lacks self-awareness; they argue that AI cannot reflect on its role in the world and has no intention, motivation, morality, or emotion. Notably, there is a gap in understanding how to design AI systems that improve human interaction with them, specifically in building human trust (Wickramasinghe et al., 2020).

In addition, societal issues are prominent in the AI discourse, and there is a need for more research on the moral implications of AI in digital transformation, such as data privacy and integrity. A recent investigative report by the New York Times (2024) alleged that, in the process of creating generative AI systems, companies such as OpenAI, Microsoft, Google, and Meta changed their own privacy policies and considered flouting copyright law to ingest the trillions of words available on the internet (Navaroli 2024). These systems' opacity, unpredictability, and reliance on large datasets give rise to privacy, data protection, and security concerns (Stahl 2021). There is growing concern that "big tech" is becoming reliant on "synthetic data", or "information generated by AI itself rather than humans to continue to train their systems" (Navaroli 2024); the difficulty in predicting system behaviour and the adaptive nature of AI amplify these issues. Navaroli (2024) adds that "not only are AI systems consuming and replicating bias—AI that is trained on biased data tends to 'hallucinate' or generate incomplete or wholly inaccurate information."

Furthermore, the lack of transparency in machine learning, often described in terms of 'black-box models' because of their inscrutable inner workings, raises questions about accountability and bias, with numerous AI systems perpetuating existing biases (Stahl 2021). The opaque nature of many AI systems calls for further research into transparent and explainable AI models that stakeholders can understand and trust. A related and paramount concern in AI deployment is the embedded bias within algorithms, which can perpetuate discrimination and inequality (IBM 2023). As Chui et al. (2018) note, 'Such biases have a tendency to stay embedded because recognising them and taking steps to address them requires a deep mastery of data science techniques as well as a more meta-understanding of existing social forces including data collection. In all, debiasing is proving to be among the most daunting obstacles and certainly the most socially fraught to date.'
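As one concrete illustration of what explainability tooling can look like, the sketch below uses permutation importance, a model-agnostic technique available in scikit-learn: each feature is shuffled in turn, and the resulting drop in test accuracy indicates how heavily the model relies on it. The dataset is synthetic and the setup is deliberately simplified; this is a minimal sketch, not a full explainability programme.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An otherwise opaque ensemble model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the score drop; larger drops mean
# the model depends more on that feature for its predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

Techniques like this do not open the black box itself, but they give stakeholders an evidence-based view of which inputs drive a model's decisions, which is a prerequisite for the accountability discussed above.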

Conclusion

Future research should focus on appropriate methodologies to identify, measure, and rectify biases in AI systems. This could involve creating diverse datasets that reflect the range of users and scenarios AI will encounter, ensuring that AI systems are inclusive and equitable (Shams et al., 2023). Moreover, it is crucial to recognise the importance of interdisciplinary research involving ethicists, technologists, and sociologists. This collaboration can delve more deeply into the broader societal implications of these biases, highlighting the need for diverse perspectives in addressing this complex issue.
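As a hedged illustration of what 'measuring' bias can mean in practice, the sketch below computes the demographic parity difference, i.e. the gap in positive-outcome rates between two groups, in plain Python. The lending scenario, numbers, and function name are hypothetical, and this single metric is only a starting point; the sources above argue that debiasing demands far more than any one statistic.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    A value near 0 suggests the model selects both groups at similar
    rates; larger values flag a disparity worth investigating.
    """
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(float(rate_a - rate_b))

# Toy example: 1 = positive decision (e.g. a loan approved),
# group = a protected attribute with values 0 and 1.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```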

In addition, Kissinger et al. (2021) explain that, ultimately, society will have three options: confining AI, partnering with it, or deferring to it. At present, there is no consensus on what our relationship with AI should be, and views vary among societies. Humanity must still agree on shared principles, and unity will prove difficult. In the absence of a shared ethic to guide the use of AI, conflicting individual actions will magnify instability (Kissinger et al., 2021).

In conclusion, as AI systems become increasingly integral to digital transformation, they also become targets for cyber threats. Research should assess the vulnerabilities that AI systems introduce, particularly in the financial and healthcare sectors, where data sensitivity is paramount. Such vulnerabilities are potential contributors to the failure of AI-enabled digital transformations. Future work could focus on technological solutions and regulatory frameworks that protect against AI misuse and cyber-attacks, since data remains at the core of AI functionality.

References

Candelon, F., Martinez, D., Rajagopal, N., and Zhukov, L. (2024). The next evolution of AI is already here—and hiding in plain sight. Fortune.

Chui, M., Manyika, J. and Miremadi, M. (2018). What AI can and can’t do (yet) for your business. McKinsey & Company.

Kissinger, H., Schmidt, E., and Huttenlocher, D. (2021). The Age of AI: And Our Human Future. Little, Brown & Company.

Korienek, G. and Uzgalis, W. (2002). Adaptable Robots. Metaphilosophy, 33: 83-97.

Miklošík, A. and Evans, N. (2020). Impact of Big Data and Machine Learning on Digital Transformation in Marketing: A Literature Review. IEEE Access, 8, 101284-101292.

Navaroli, A. C. (2024). Op-ed: AI’s most pressing ethics problem. Columbia Journalism Review.

Shams, R. A., Zowghi, D. and Bano, M. (2023). AI and the quest for diversity and inclusion: a systematic literature review. AI and Ethics.

Stahl, B. C. (2021). Ethical Issues of AI. In: Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance. Springer, Cham.

Tavakoli, A., Giovine, C., Caserta, J., Machado, J., and Rowshankish, K. (2024). A data leader’s technical guide to scaling gen AI. McKinsey & Company.

Wickramasinghe, C. S., Marino, D. L., Grandio, J., and Manic, M. (2020). Trustworthy AI development guidelines for human system interaction. 2020 13th International Conference on Human System Interaction (HSI).
