Dean does QA: AI-powered Software Testing, Lessons from Siemens & Shopify
WAIT! Prefer listening? Join in on the podcast.
Recently, Shopify CEO Tobias Lütke made headlines on Business Insider with a bold internal memo:
Before hiring anyone new, teams must first prove that AI can’t do the job.
In the memo, pointedly titled “AI usage is now a baseline expectation”, Lütke stressed that “using AI well is a skill that needs to be carefully learned” and that AI use is “now a fundamental expectation of everyone at Shopify”.
In other words, no project or role is exempt from the question: “Why can’t AI do this?” This mandate isn’t just a quirky Shopify policy; it’s emblematic of a broader trend sweeping the entire tech industry. From software development to quality assurance, companies are weaving artificial intelligence into everyday workflows.
The goal? To supercharge productivity, reduce routine toil, and even rethink what roles humans versus machines should play.
AI Becomes a Baseline in Tech Workflows
Shopify’s AI-before-hiring rule underscores how rapidly organizational mindsets are shifting. In an interview with The Verge, Lütke noted that AI has been “the most rapid shift to how work is done that I’ve seen in my career”. In 2023 and 2024, similar discussions echoed across boardrooms and engineering stand-ups worldwide. Organizations large and small are asking: “How can we use AI to do this faster or better?” This reflexive drive to integrate AI – what Lütke called “reflexive AI usage” – is becoming the norm rather than the exception.
Nowhere is this trend more apparent than in Software Testing and Quality Assurance (QA). Long seen as labor-intensive but critical, software testing is being transformed by AI-driven tools. Just as generative AI (think ChatGPT) has shaken up content creation and coding, revolutionary AI-powered testing solutions like SQAI Suite are reshaping QA workflows.
Test automation was already on the rise, but AI takes it to a new level – from self-maintaining test scripts to predictive analytics that spot bugs before code even runs. It’s telling that Shopify’s memo wasn’t just about cutting hiring costs; it reflects a genuine belief that AI can shoulder a lot of the testing, debugging, and validation work that would traditionally require more human testers or developers.
For the skeptics out there, this isn’t mere hype: it’s backed by data. Recent industry surveys and analyses show a clear surge in AI adoption within testing workflows. Let’s dive into the numbers and trends shaping AI in software testing from 2023 through today, and where experts think we’re heading by 2027.
AI’s Impact on Software Testing: By the Numbers
Not long ago, “AI in software testing” was experimental for most teams. That’s changing fast. Between 2023 and 2025, the adoption of AI-powered testing tools more than doubled – from only about 7% of QA teams in 2023 to roughly 16% in 2025, according to Testlio. While 16% is still relatively small, the sharp jump signals growing interest in AI-driven test automation, defect prediction, and analytics. In other words, what was once bleeding-edge (only 1 in 14 teams using AI) is quickly becoming mainstream (1 in 6 and rising).
Other surveys echo this momentum. The Tricentis World Quality Report 2023-24 found 75% of organizations are now consistently investing in AI to optimize QA processes, with 65% saying the primary benefit is higher productivity. Many teams started by experimenting with AI on a few projects and are now rolling it out more broadly as they see gains in speed and test coverage. In fact, AI-augmented testing is accelerating key QA activities: one report notes 39% of teams have seen efficiency improvements in test automation thanks to AI, along with better test maintenance and smarter defect prediction. The message is clear: AI isn’t just a gimmick in testing – it’s delivering real value.
Generative AI, in particular, is a game-changer for QA. Imagine describing a scenario in plain English and letting an AI generate test cases automatically – this is now reality. According to a recent industry survey conducted by Testlio, 68% of organizations are either already using generative AI for test automation (34%) or have pilots and roadmaps to do so (another 34%). Of those early adopters, 72% report faster testing processes after integrating generative AI.
For example, SQAI Suite can draft test scripts, create synthetic test data, or even simulate user interactions, dramatically cutting down the time testers spend on repetitive script writing.
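To make the idea concrete, here is a hand-written sketch of the kind of test a generative AI tool might draft from the plain-English scenario “a user applies a 10%-off coupon at checkout”. All names here (`Cart`, `apply_coupon`) are invented for illustration and are not from any real tool’s output:

```python
# Hypothetical illustration of an AI-drafted test: the scenario
# "a user applies a 10%-off coupon at checkout" turned into
# pytest-style assertions. Cart and apply_coupon are toy stand-ins
# for the application code under test.

from dataclasses import dataclass

@dataclass
class Cart:
    subtotal: float

def apply_coupon(cart: Cart, percent_off: float) -> float:
    """Toy implementation under test: return the discounted total."""
    return round(cart.subtotal * (1 - percent_off / 100), 2)

def test_ten_percent_coupon():
    # Generated assertion: a $50.00 cart with a 10% coupon totals $45.00
    assert apply_coupon(Cart(subtotal=50.00), 10) == 45.00

def test_zero_percent_coupon():
    # Edge case a generator might also propose: 0% leaves the total unchanged
    assert apply_coupon(Cart(subtotal=50.00), 0) == 50.00
```

The value isn’t that the assertions are hard to write; it’s that a tool can draft dozens of these, plus the edge cases, in seconds – leaving the human to review rather than type.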
What about the future? According to mabl, forecasts through 2027 suggest AI in testing will go from a competitive advantage to an outright necessity. Gartner analysts predict that by 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchains – a massive rise from just ~15% in early 2023. In other words, four out of five companies will be using AI to assist in testing within just a few years. This trajectory mirrors the broader AI boom (Gartner also forecasts global spending on AI software nearly tripling from $124B in 2022 to ~$300B in 2027). For QA teams, it means that if you’re not exploring AI by now, you risk being left behind.
Why such rapid adoption? Software systems are only growing more complex – think multi-cloud applications, IoT integrations, and constant updates in DevOps cycles. Traditional testing methods struggle to keep up with the speed and scale. AI offers a path to “hyperautomation” in testing, where up to 90% of testing processes could be automated by 2027 (as one bold prediction from QualiZeal claims). While 90% automation may be aspirational, the direction is clear: AI will handle more of the heavy lifting, from generating and executing tests to analyzing results, so human testers can focus on strategy, edge cases, and innovation.
Crucially, AI isn’t here to replace QA professionals but to augment them – a point industry leaders often emphasize. As the CEO of testing company Tricentis notes,
The integration of AI with testing levels the playing field for all team members, regardless of technical skill level. This allows even citizen testers to play a greater role in testing for fewer errors and greater productivity, leading to faster time to market and lower costs.
In other words, AI can empower less-technical team members to contribute more (via natural-language interfaces or automated suggestions), while enabling seasoned engineers to be far more productive. It’s a classic case of “augmentation, not replacement” – the AI handles the drudge work and crunches data, humans provide oversight, creativity, and critical thinking.
Case Study – Siemens Digital Industries Software: AI-Powered Testing in Action
To understand how this AI-driven testing revolution plays out in practice, let’s look at a real-world example. Siemens Digital Industries Software, a leader in engineering and enterprise software, has been embedding AI across its product lines. In particular, Siemens’ Tessent Silicon Lifecycle Solutions family of testing tools (part of its electronic design automation portfolio) provides a compelling case study of AI in testing. Tessent is used for semiconductor testing and design verification – a domain where test efficiency and accuracy are paramount (think of the tiny microchips that power everything from cars to smartphones; ensuring they’re defect-free is a huge challenge).
Siemens’ Tessent products leverage three types of AI – analytical, predictive, and generative – throughout the testing process. According to Siemens, the Tessent approach is to solve problems as much as possible algorithmically, freeing engineers to focus on higher-value decisions. Each type plays a distinct role: analytical AI mines test data for patterns, predictive models cut testing time, and generative AI proposes new test patterns.
What impact has this multi-pronged AI strategy had for Siemens and its customers? A significant one. By infusing AI into Tessent, Siemens achieved dramatic efficiency gains – for example, one result of Tessent’s AI-driven optimizations is a 10× faster test architecture implementation and up to 5× shorter testing times for complex scenarios. In industries where getting a product to market even a week sooner can mean millions in revenue, those improvements are game-changing. Additionally, Tessent’s AI features help reduce the manual effort and guesswork in test development. Engineers no longer need to babysit the testing tools or constantly adjust settings; the AI algorithms handle much of that. As Siemens puts it, their AI-enabled tools deliver:
Predictable, repeatable and verifiable outcomes without unpredictable AI hallucinations
It’s a nod to the importance of reliability in enterprise AI solutions. The tools aren’t just smart; they’re trusted (they have to be, if they’re testing safety-critical systems like automotive controllers or medical device chips).
Siemens’ case illustrates a broader point: AI in software testing isn’t limited to web apps or simple test scripts. It’s being applied in highly complex domains (semiconductors, in this case) with great success. Whether it’s auto-generating test patterns for a chip, using predictive models to slash test times, or employing generative AI to suggest new tests, Siemens Tessent showcases the spectrum of AI’s value. For QA leaders in any industry, it’s a hint of what’s possible. If AI can help test something as intricate as a modern processor – which has billions of transistors – it can certainly help with enterprise software testing, mobile app QA, and beyond.
New Roles and Skills in an AI-Augmented QA World
As AI becomes ingrained in software testing and development, the skill set required of professionals is evolving. A tester’s job in 2025 looks quite different from a tester’s job a decade ago, and by 2027 it will change even more. Rather than rendering human testers obsolete, this AI infusion is giving rise to new roles and competencies. Forward-looking organizations are already investing in upskilling their teams to meet this challenge.
We’re seeing the rise of roles like AI QA Strategist or AI Test Architect, who are responsible for integrating AI tools into the QA process and ensuring they align with testing goals. These folks need a blend of traditional QA knowledge and AI savvy. In many organizations, “AI champions” or “automation coaches” are emerging: individuals who train the rest of the team on how to use AI-driven testing tools effectively, how to interpret AI outputs, and how to maintain oversight (since AI is not infallible).
In development teams more broadly, a new role of “AI Engineer” is gaining traction. This isn’t just a rebranding of data scientists; it’s a cross-disciplinary role for software engineers versed in AI integration. Gartner analysts foresee that rather than reducing the need for human talent, AI will shift demand toward professionals skilled in software development plus data science and machine learning. These “AI engineers” will design and implement AI solutions at scale, such as custom AI models for test analysis or CI/CD pipeline optimizations. Importantly, they’ll also ensure AI is used responsibly and transparently in the software lifecycle.
For the average software engineer or tester, what does this mean? Primarily, continuous upskilling. A recent Gartner study warns that by 2027, 80% of software engineers will need to upskill to remain relevant in this AI-centric era. QA professionals will need to become comfortable working with AI – treating it as a teammate or tool that they direct. That includes learning how to configure AI-driven test platforms, how to validate or double-check the AI’s findings (for instance, confirming that an AI-generated test truly covers the intended scenario), and how to interpret analytics dashboards that might be powered by machine learning.
Another crucial skill set revolves around AI literacy. According to DataCamp's State of Data & AI Literacy Report 2024, a whopping 62% of business leaders say there’s an AI skills gap in their organization, yet only about 25% have implemented organization-wide AI training programs to address it. This gap is particularly risky in QA and development teams: if your staff doesn’t understand how AI tools make decisions or what their limitations are, you could misuse the tools or miss important bugs. AI literacy means having at least a fundamental understanding of concepts like machine learning, model bias, and data privacy, so that teams can use AI judiciously. It’s not necessary for every tester to become a machine learning researcher, but they should know, for example, that an AI-based test prioritization tool might overlook certain edge cases if it wasn’t trained on those patterns – and thus human insight is still needed.
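That blind spot is easy to demonstrate. Below is a deliberately minimal sketch of failure-history-based test prioritization (the data and scoring rule are invented for illustration, not taken from any real tool): a brand-new test with no recorded history scores lowest, even though it may cover the riskiest edge case.

```python
# Minimal sketch: rank tests by historical failure rate (True = a past
# failure). A test with no history gets a rate of 0.0 and sinks to the
# bottom of the queue -- exactly the kind of gap a human reviewer must catch.

def prioritize(tests: dict) -> list:
    """Return test names ordered from most to least failure-prone."""
    def failure_rate(history):
        return sum(history) / len(history) if history else 0.0
    return sorted(tests, key=lambda name: failure_rate(tests[name]), reverse=True)

history = {
    "test_login":         [True, False, True],    # fails often -> ranked first
    "test_checkout":      [False, False, False],  # stable -> low priority
    "test_new_edge_case": [],                     # no history -> ranked last
}

order = prioritize(history)
# order[0] is "test_login"; "test_new_edge_case" lands last despite its risk.
```

A real ML-based prioritizer is far more sophisticated, but the failure mode is the same shape: patterns absent from the training data get no weight, so human insight has to promote them.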
Leading organizations are responding by investing in training and culture. Some have instituted “AI literacy 101” programs for their engineering teams. Others are hiring experts or consultants to help incorporate AI into their QA strategy. The European Union is even making AI training a compliance issue: under the upcoming EU AI Act, companies deploying AI systems will be required to ensure their staff have sufficient AI knowledge and training. All this underlines that human talent development is just as important as tool development in the journey to AI-augmented testing.
It’s also worth noting the cultural shift happening in parallel. Traditionally, QA was sometimes seen as a silo or a step late in the process. Now, with AI enabling testing to happen continuously and earlier (e.g., AI code analysis catching bugs as code is written), the lines between developers and testers are blurring. Many developers are now responsible for writing tests, and many testers are contributing to requirements and design – especially when AI handles the mundane parts of testing. This calls for a quality engineering mindset where everyone on the team takes ownership of quality, supported by AI assistants.
Expert Insights: Humans + AI = The Future of QA
The blend of AI technology and human expertise is summed up well by QA thought leaders. “Automation, no matter how intelligent, must complement the people powering quality,” say reports by testing startups mabl and SQAI Suite. In practice, this means AI takes care of repetitive tasks so that QA engineers can focus on creative, critical thinking – designing clever test scenarios, exploring the application for unexpected behaviors, and improving user experience. Even as AI runs thousands of scripted checks, human testers are needed to think like users, to question assumptions, and to validate that the product as a whole makes sense.
Leaders in enterprise software echo similar sentiments about balancing AI and human input. As Satya Nadella (CEO of Microsoft) put it recently,
At the end of the day, all software is built to help people. AI is just one more tool – a very powerful tool – that we have to get there.
The takeaway: AI is not a silver bullet that magically guarantees quality. It’s a force multiplier – amplifying the skills and efforts of your team. Organizations that recognize this are the ones seeing the best results with AI in testing. They treat AI as a junior colleague or an assistant with superpowers, not as a replacement for human judgment.
Even Shopify’s Lütke, who set a high bar with his AI mandate, is essentially pushing for this human-AI partnership. His challenge to employees – prove you need a human by first considering AI – is sparking “fun discussions and projects” within Shopify. Teams are creatively exploring how AI agents could handle parts of their work, which in turn is upping everyone’s AI proficiency. Lütke noted that “using AI effectively is now a fundamental expectation” and that it “needs to be carefully learned by…using it a lot”.
This encapsulates the new reality: hands-on experience with AI is the new normal for career growth in tech. The companies (and individuals) who experiment and learn will thrive; those who sit on the sidelines risk stagnation.
Conclusion: Embracing the AI-Driven Future of Testing
AI’s transformative impact on software testing is no longer theoretical – it’s here, now, reshaping how QA teams work at companies like Siemens, Shopify, and beyond. From dramatic productivity gains (up to 10× faster test implementation in Siemens’ case) to new collaborative dynamics between human and AI testers (as Shopify is fostering), the changes are profound. For software engineers, QA professionals, and tech executives, the message is clear: embrace AI as a partner in quality. This means investing in the right tools and in your people, rethinking processes to incorporate AI feedback loops, and staying open to continuous learning.
The journey won’t be without challenges. We must be vigilant about AI’s limitations – for instance, AI might miss a critical scenario or produce false positives – and ensure robust governance (ethical AI use, bias mitigation, etc.) in our testing tools. But the potential rewards – faster releases, higher-quality software, happier teams freed from drudgery – are compelling.
As we stand at this inflection point, it’s worth asking:
How do you see AI changing the way we ensure software quality?
Are you leveraging AI in your testing processes, or planning to? And importantly, how can we as professionals stay ahead of the curve, mastering AI rather than fearing it?
Please share your experiences, concerns, or predictions in the comments. What’s your take on the AI + QA revolution, and how are you preparing for it? 🤖✨🎯
Dean Bodart: AI's impact on testing opens new horizons for growth. Adapting our skills will be essential. 🚀 #AIinTesting