More than just ‘manual testing’: Recognising the skills of software testers

by Ady Stokes

Discover why the term ‘manual testing’ limits and harms the testing craft, and learn to embrace more accurate terminology



Why I'm writing this

If you have been working as a software tester or in software development for any length of time, you will probably have come across the term ‘manual testing’. It’s a phrase that can ignite passionate responses. 

And this isn't just because of differing opinions, either. The term often misrepresents the essence of testing. At its worst, the term 'manual testing' can create a damaging divide, undervaluing human contributions while over-glorifying automation. At best, it diminishes thoughtful test activities. 

In this article, I want to discuss why this term might be harmful to the craft of software testing. And I want to suggest how we can shift our focus to a more inclusive and accurate understanding of what all testers truly bring to the table. Who knows whether we will ever see the term ‘manual testing’ go away, but we can and must try. 

What does 'manual testing' mean?

Before I continue, let me run some quick questions by you. You don’t have to answer now, but bear them in mind when reading the rest of this article. 

  • Does a musician play music, or do they ‘manually play music’? 
  • Could you ever imagine yourself saying ‘manual’ to describe the work of another profession like painter, actor, sculptor, doctor, scientist, chef, philosopher, designer, innovator? 
  • When humans move, do they manually walk, talk, breathe or think? 
  • Have you ever heard the phrases: manual accessibility testing, manual security testing or manual usability testing? 

The phrase 'manual testing' is often used to describe testing activities performed by humans without the aid of automated scripts or tools. On the surface, it seems harmless enough: a simple way to distinguish between human-driven testing and tool-based testing. 

Adding ‘manual’ to describe testers' work became common after the rise of a credible alternative: automation. As people talked about automation testing, the term ‘manual testing’ crept in. It should not have had a long shelf life. However, trends along the lines of ‘testing is dead because…’, which started with automation and continue today with AI supposedly replacing testers, have kept this limiting and inaccurate label alive. 

Testing is not simply about how tests are executed (manually or automated). It’s a multifaceted process that requires exploration, critical thinking, and creativity. These activities cannot be reduced to a binary measure of 'manual' versus 'automation.' Worse still, the term 'manual tester' can inadvertently imply 'less valuable' or 'tech-lite,' leading to the perception that these testers are simply executing the predefined steps of a script without the need for deeper analytical or investigative skills. This misconception does a disservice not only to the profession but also to the brilliant testers who happen not to create, or enjoy creating, automation and, more importantly, to the quality of software itself.



The problems with the 'manual tester' stereotype

Undermining the craft

Labelling testers as 'manual' reduces the role to button-clicking and step-following, ignoring the depth of expertise required to uncover edge cases, understand complex systems, and empathise with end users. It trivialises the intellectual rigour and depth of thought that testers bring to identifying risks and improving software quality. This in turn makes outstanding exploratory testers, including those who explore deep into the heart of the software, seem less valuable than a bunch of unit tests. 

Creating a false hierarchy

The rise of automation has brought undeniable benefits to testing, but it has also created an unintended hierarchy whose benefits to software quality are questionable at best. 

'Automators' are often seen as more technical and therefore, by extension, more valuable. This overlooks the fact that automation is a tool for testing, not a replacement for it. Designing effective automated tests requires a deep understanding of the testing process. So it is not about testers versus automators, since automators are, or should be, testers first and foremost. 

Different types of testing add value and contribute to testing and quality in many varied ways. When organisations value one type of testing over another, they often do so in a 'penny-wise, pound-foolish' way. They might undervalue the exploratory tester who, through deep thought, uncovers several high-priority bugs that, if in production, could bring the system down. (Think high-profile banking apps.) They could undervalue the accessibility tester who, by helping improve ease of use for people using only keyboards or screen readers, increases sales by over 10 percent. Yes, automation gives us stability and confidence, but it is only one type of contribution to overall quality and value. 

Misaligned expectations

By focusing on the method rather than the outcome, the term 'manual' shifts the conversation away from the purpose of testing: to uncover information about the product. This can lead to organisations undervaluing exploratory testing, where human intuition and adaptability are irreplaceable.

Barrier to growth

For testers themselves, being pigeonholed as a 'manual tester' can limit career opportunities and professional growth. It perpetuates the myth that testers need to 'move to automation' to advance, rather than embracing and honing the broad spectrum of skills that testing requires. It has created a culture where it's easy (but mistaken) to believe that getting a job as an SDET (software development engineer in test) or automation tester elevates you above a quality assurance specialist or test engineer. In reality, the skills required can vary greatly between roles, even at the same seniority. 

Little or no interest in automation can prevent testers from applying for jobs where ‘some automation’ is an added-on ‘grab’ of skills by employers. Some job descriptions these days appear to require three or four specialities rolled into one person! Being labelled ‘manual’ might mean people are overlooked when promotions become available. Being perceived as 'less than' can follow you through your career and that is not fair to anyone.  



Why the term matters

Language shapes perception. By continuing to use terms like 'manual testing,' we inadvertently reinforce the idea that testing is a divided discipline, where human skills are somehow lesser than automated processes. This is not only inaccurate but harmful to the evolution of the craft.

Instead, we need to recognise testing as a cohesive whole. Automation is a powerful tool within the tester’s toolbox, but it is not the craft itself. Testing—in all its forms—is about learning, questioning, and providing insights. It’s about uncovering risks and ensuring that the software we deliver meets the needs of its users.

A new perspective: Collaborative testing

Rather than framing testing as 'manual versus automation,' we should embrace a collaborative approach. Automation can enhance testing by handling repetitive tasks, freeing testers to focus on more complex and exploratory activities. Human and machine working together is not a competition; it’s a partnership.

When we shift our language to reflect this mindset, we also shift our culture. Terms like 'exploratory testing,' 'investigative testing,' or simply 'human-driven testing' better capture the value that testers bring, emphasising their analytical and creative contributions. Or simplest of all, just say 'testing' if it isn’t automated. 

Beyond ‘manual testing’: What else can we say? 

I’ve yet to see a sentence using the term 'manual testing' that is fundamentally changed, or becomes less understandable, if you simply remove the word 'manual' or use a more precise description. Let’s try a couple. 

Here's a typical job description for a position that is not centred on test automation. What happens if you remove 'manual' from the title? Is any meaning lost?

"We are hiring a Manual QA Engineer to join our team!" 

Next, try reading the article Manual testing for beginners: A comprehensive guide and mentally replacing the phrase 'manual testing' with 'end-user testing.' 

Do we lose any clarity simply by dropping the word 'manual' or using a more precise term that suits the context? Not really, though feel free to let me know if you disagree. So if you catch yourself writing the word 'manual,' just stop; you can usually carry on without it. 

I’ve seen some folks recommend using ‘exploratory testing’ or ‘exploratory tester’ instead of ‘manual’. I can see the attraction, and I have used it in the right context. But not all human-centric testing is exploratory, in my opinion. And I know some might say that even if we are following a script, our brains should still be switched on: if we see something not quite related to the script, we should absolutely take note. But are we completely convinced that happens every time? 

Alan Julien stimulated a great conversation on LinkedIn and through that came some interesting suggestions such as: 

  • Investigative testing 
  • Analytical exploration 
  • Strategic validation 
  • Cognitive testing 

What other replacements can we suggest? 

To sum up

By moving away from divisive language like 'manual testing,' we can foster a more inclusive and accurate view of the craft. Let’s celebrate the skills, knowledge, and curiosity that testers bring to human-driven and automated testing alike, and focus on what truly matters: delivering great software and enhancing value for the business and its users. 

I firmly believe it would strengthen the craft if we could all: 

  • Drop the use of ‘manual’ to describe testers. If you want someone who doesn’t do automation, put ‘exploratory testing’ in your desired skills. 
  • Challenge the term in articles and job descriptions if you see it. 
  • Educate colleagues and stakeholders about the value of all human-driven testing in all its forms. 
  • Focus on skills rather than titles. Testers can be great at many things. 

For more information



Ady Stokes is a freelance IT consultant and accessibility advocate, curator of the Essentials Certificate (STEC), and co-runner of Ministry of Testing Leeds. He is an MoT Ambassador who teaches, coaches, and provides training.
