The majority of employees are making errors in their work due to AI reliance, often using AI-generated output without verifying its accuracy!

📈 AI adoption is accelerating, yet trust remains a pressing challenge—underscoring the delicate balance between its transformative benefits and inherent risks.

🚩 A staggering 66% of employees rely on AI-generated output without verifying its accuracy, leading to mistakes in their work for 56% of users.

🦾 While many report experiencing significant advantages from AI integration, 79% express concerns over potential risks and unintended consequences.

🤖 AI is becoming a daily tool for many—38% use it weekly or more, while 28% engage with it semi-regularly, reflecting its growing role in workplace decision-making and productivity.

As AI reshapes the modern workforce, organizations must prioritize responsible implementation, fostering a culture of transparency, validation, and ethical AI usage, according to new research published by KPMG and the University of Melbourne, using data 📊 from an online survey of 48,340 people across 47 countries completed between November 2024 and mid-January 2025.


✅ Organizations face risks from employees’ complacent AI use

Perceived risks and experienced negative outcomes from AI use

Researchers have observed a growing reliance on AI-generated output in the workplace, with 66% of employees using AI-driven information without assessing its accuracy or validity. More than half choose not to disclose when AI assists in their work, often presenting AI-generated content as their own.

While this trend raises concerns at an individual level, its broader societal risks are even more pronounced:

❌ Cybersecurity risk (e.g. from hacking or malware), a dominant concern raised by 85% of people

❌ Loss of human interaction and connection (e.g. losing the option to speak with a human service provider)

❌ Misinformation and disinformation (e.g. AI used to spread misleading or false information and deepfakes)

❌ Manipulation or harmful use

❌ Loss of privacy or intellectual property (IP)

❌ Deskilling and dependency

❌ Job loss


✅ AI adoption in the workplace


Frequency of intentional use of AI tools for personal, work, or study purposes

Researchers found that two-thirds of people (66%) report intentionally using AI on a regular basis for personal, work, or study purposes.

Two in five (38%) people report using AI on a weekly or daily basis, whereas just over a quarter (28%) use AI semi-regularly (i.e. every month or every few months). One-third (34%) rarely or never intentionally use AI.


✅ Trust in AI is not guaranteed

The study findings reveal that trust levels are low: fewer than half of respondents say they are willing to trust AI. The findings also show, however, that organizations and educational institutions can strengthen trust by investing in AI education and training, adopting clear governance practices to mitigate risks, and ensuring that responsible use is built into the design and deployment of AI systems.


📍 Finally, researchers highlight four essential actions that leaders must prioritize to unlock AI’s full potential and secure a competitive edge for their organizations.

✔️ Transformational leadership

✔️ Enhancing trust

Organizations can strengthen trust in their use of AI by investing in assurance mechanisms that help to mitigate risks and signal responsible, trustworthy and safe use. This helps ensure that users have clear guidelines and training to unlock AI’s full potential. Trust is further reinforced outside the organization by broad societal regulation and AI literacy.

✔️ Boosting AI literacy

Low AI literacy, combined with limited support and guidance for responsible use, is allowing some employees to use AI complacently, inappropriately and non-transparently. Ongoing training, education and upskilling can help ensure that employees and organizations use AI effectively and responsibly, supporting AI-driven growth.

✔️ Strengthening governance

The overwhelming majority of survey respondents support multiple forms of regulation: most expect international laws and regulation (76%), favor co-regulation by industry, government and existing regulators (71%), and support the creation of a dedicated, independent AI regulator (64%). Currently, though, the public has limited awareness of any laws related to AI, which suggests a growing need for public education and communication, and for further development of legislation in countries where concrete AI laws and policies don’t yet exist.


☝️ 𝙈𝙮 𝙥𝙚𝙧𝙨𝙤𝙣𝙖𝙡 𝙫𝙞𝙚𝙬:

I find this global research truly fascinating because, while discussions around AI often center on its usage and benefits, what’s frequently overlooked are its real-world outcomes. This study highlights a crucial point—without human oversight, AI-generated results can lead employees down the wrong path. Context matters, and the human mind must remain actively engaged to interpret, refine, and steer AI-driven decisions.

One word stands out repeatedly in this research: trust. Trust is the foundation of AI adoption, shaping how effectively organizations integrate it into workflows while ensuring it enhances—not replaces—critical thinking and human judgment. As AI continues to evolve, maintaining this balance will be key to unlocking its true potential.


Thank you 🙏 to the KPMG and University of Melbourne research team for these insightful findings: Samantha Gloede James Mabbott David Rowlands Nicole Gillespie Steven Lockey

Dave Ulrich George Kemish LLM MCMI MIC MIoL


👉 Follow me as a LinkedIn Top Voice on LinkedIn (+30 000), and click the 🔔 at the top of my profile page to stay on top of the latest research on HR, People Analytics, Human Capital and the Future of Work, become more effective in your HR function, support your business, and join the conversation on my posts.

👉 Join more than 24,000 people and subscribe to receive my Weekly People Research

Every day, I share a new research article about People Analytics, Human Capital, HR analytics, Human Resources, Talent,…

#trust #AI #human #leadership

Nicole Gillespie

Chair in Trust, Professor of Management University of Melbourne International Research Fellow, Oxford University


Thanks for your engagement with the research findings Nicolas and your thoughtful comments. The research has a lot of rich insights and we are delighted to see it prompting such thoughtful discussion and dialogue. There is no panacea - however with collective action and conscious decisions by multiple actors operating at various levels, we can enhance the responsible stewardship of these technologies into work and society.

Dr. Bhanukumar Parmar

Industry Veteran | Exploring Future of Work | Great Manager’s Coach & Mentor


AI is the ultimate co-pilot - BUT even co-pilots need a second opinion. Blind reliance leads to turbulence. ✈️. ✅ Yes, TRUST is foundation of AI adoption Nicolas BEHBAHANI, Without it, AI is just automation without accountability. ❓ The key to AI mastery? Knowing when & what to question. AI is a tool, not a truth machine - learn to prompt, verify, & refine. 🔍 🚀 Trust is earned, not automated. Organizations must prioritize AI literacy to transform potential into precision, not just efficiency. 🙏 Huge thanks to the researchers for spotlighting this reality - because AI isn’t replacing human judgment, it’s reinforcing why we still need it. 🔥 Use it wisely, use it well & see the results - you’ll be AMAZED. 🚀

Alize Hofmeester🎯🌱

Change Catalyst | Empowering Leaders to scale change through people, purpose and agility ✦ Author Purpose Driven People ✦ Keynote Speaker ✦ Enterprise Transformation ✦ Obeya Coach


Such an important conversation Nicolas BEHBAHANI Thank you for highlighting this! I'm genuinely enthusiastic about using AI in my day-to-day work and see its value increasing rapidly. But I also recognise exactly what this research points out: the outcomes are only as good as the inputs and the human thinking around it. I’ve experienced firsthand that AI doesn’t always give you trustworthy or useful answers unless you're crystal clear with your questions. And even then, it’s crucial to challenge and double-check what comes back. AI is a powerful co-pilot, but we still have to steer the plane. Blind reliance just doesn’t cut it.

Dr. Masroor Hussain Shah

Fractional CHRO | HR Consultant | People & Culture | Change Management |Talent Management


It is good to know that AI-generated data and information is being validated. What I understand is that we make good use of AI in research, analysis and creating new things, including processes and procedures, but only to validate what we can create and design using our own intellect and intelligence. Both personal intellectual work and AI-generated work need to be compared. Trust has to be built first. Second, a culture of using AI optimally needs to be developed. Senior management has to take this initiative where the use of AI is made part of daily work. Since the use of AI is evolving, it will take time to integrate AI fully into the daily work culture. Thanks so much Nicolas BEHBAHANI for a great research and bringing up this critical area of professional development. Happy Friday 🤞👋

Michelle Lee 🌱

Strategic Culture and L&D Lead | Talent Management | Culture Transformation Architect | Digital Innovation Champion | Data-Driven People Development


As AI becomes a staple in our daily workflows, fostering a culture of transparency and validation is essential. It's imperative that we prioritize AI literacy and training, ensuring that employees are equipped to critically assess AI outputs. This will not only mitigate risks but also empower teams to leverage AI as a tool that enhances their capabilities rather than diminishes them.
