No, Seriously. What Does a Software Tester Do?

When I discussed a draft of my last article, The Need for a Software QA Role at Microsoft, with someone I had asked for feedback, they raised the question of what value testers actually add to a development team. Testing is of course only one aspect of software QA, but it is the one most people are familiar with, and at the same time it is widely misunderstood. The most basic definition that came up during our discussion, one I'd heard many times before, on occasion even from seasoned testers, was that testers find bugs.

But testers do find bugs!

Yes, testers find bugs. The problem is the misconception that this is their core responsibility. It's not; it is merely an outcome of testers doing their job. But before I go into what that job actually is, let's consider the statement for the sake of argument. What if finding bugs were the core responsibility of a tester?

Looking for experienced bug catcher

First of all, it's not a particularly useful job description to hire for. Everyone who uses a computer finds bugs: regular users, power users, system administrators, developers, and, of course, testers. That's not to say that everyone is equally good at it. Casual users may encounter and work around bugs without even realizing that's what's happening, whereas accurately and consistently identifying and reporting defects requires deep knowledge of the domain and the system under test as well as a certain level of development skill (sometimes an otherwise perfectly valid bug report is useless without a memory dump or at least a stack trace). At the end of the day, though, how many issues a prospective tester might find, or did find in their last job, says little about that tester's abilities. The real question, which hints at the correct answer to the initial question, is: how many bugs did they miss?

Test exit criteria based on bug counts

Furthermore, if testers' main responsibility were to find bugs, how would one define test exit criteria? Fully testing any but the simplest programs is practically impossible due to the inevitable combinatorial explosion, so attempting to find all bugs is a nonstarter. Alternatively, one could try to define some absolute or relative bug count goal, but that's a nonstarter as well because it simply doesn't make sense. Is testing done after finding 10 bugs? 100? After reaching at least 80% of the last release's bug count? After finding at least one data loss issue? For one, we'd be completely ignoring a genuinely important metric in software testing: test coverage. Equally important, what if all the scenarios the testers decide to test work as expected?
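
To make the combinatorial explosion concrete, here is a minimal back-of-the-envelope sketch in Python; the feature surface (15 boolean settings, 4 locales) is invented purely for illustration:

```python
# Hypothetical feature surface: 15 independent boolean settings and 4 locales.
bool_settings = 15
locales = 4

# Exhaustive testing would have to cover every combination of inputs.
total_combinations = (2 ** bool_settings) * locales
print(f"Combinations to test exhaustively: {total_combinations:,}")
# -> 131,072 for a trivially small configuration space; real products also
#    vary by OS, hardware, data state, concurrency, timing, and so on.
```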

Finding bugs is an outcome, not the goal

In fact, in a perfect world, testers would never find a bug. Ever. Developers would simply write defect-free code. Always. So in a perfect world, testers are actually unnecessary. But the world isn't perfect, so somebody has to make sure that the code is of high enough quality to ship. In other words, testers are a safety net or, better yet, gatekeepers. Their role is not to find bugs; it is to make sure that a carefully selected set of scenarios works. Being perceived as a safety net, however, has certainly led some development teams to not take quality and defect prevention during design and implementation seriously and to instead rely on testers to "find the bugs", possibly creating and reinforcing this misconception about what testers do in the first place.

So, once more for absolute clarity: a tester's job is to validate the product. It is not to log bugs; that's merely what they need to do when tests fail. And, to put it bluntly, tests fail when developers don't do their job, which - quick reminder - is to write code that works, not to write code that has some chance of working and throw it over the wall. Admittedly, that's a bit harsh. It is completely acceptable, and to be expected, that testers find bugs in edge cases, especially during end-to-end testing. In fact, that's one major reason why testing teams are an asset: they allow developers to focus their testing on units and components, leaving end-to-end testing and specialties like performance and security testing to the testers. But that must not become an excuse for developers to do no testing whatsoever (I once had a developer actually answer the question "Did you test it?" with "It compiles", and it wasn't a joke).
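
As a rough illustration of the developer-level testing meant here, below is a minimal pytest sketch; the apply_discount function and its tests are hypothetical and not taken from any real product:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage (hypothetical product code)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    # Developer-level check that the basic contract holds before hand-off.
    assert apply_discount(100.0, 25.0) == 75.0

def test_apply_discount_rejects_invalid_percent():
    # An obvious edge case the developer can and should cover; testers then
    # spend their time on end-to-end, performance, and security scenarios.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```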

A difference in mindset

I can hear some unseen readers object that all I'm doing is arguing semantics. Because it is impossible to achieve full test coverage, testers pick a certain set of scenarios to go through. What difference does it make whether they approach it as trying to find all bugs in those scenarios or as validating them? Superficially, none. In reality, it reveals a completely different mindset, as became evident in a discussion I witnessed before becoming a tester myself.

Some testers working on the same product as me were talking about their test suite when one of them posed a question: "We have hundreds of test cases and they're not finding any bugs. Maybe we should get rid of them?" I only heard this in passing, so I don't know what the outcome was. But I do remember that the suggestion wasn't met with immediate and stern opposition. A few years later I learnt that no tester should ever ask this question.

Building a good test suite for a complex product usually takes anything from a few dozen to thousands of man-years. That, in business terms, is a serious investment. For that reason alone, throwing away assets that literally cost millions of dollars to build is questionable at best. Moreover, (regression) test suites aren't primarily useful because they find bugs. They are useful because they confirm that the scenarios they cover work, and not just for the current release, but for all future releases as well. As recent regressions in high-profile software products demonstrate, just because something worked for 20 years doesn't mean there won't be regressions at some point. And if a testing team removes test cases that don't find bugs, regressions are bound to slip through.

As a tester, one always wants a test suite that's as comprehensive as reasonably possible. That doesn't mean there'll always be time to execute all of it (typically, there isn't), and test cases that rarely find bugs can and probably should be run less often. But they are not removed from the suite. They are removed when the scenario they cover no longer exists and not a moment earlier.
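
One practical way to run rarely failing test cases less often without deleting them is to tag them, as in the pytest sketch below; the "stable" marker, the helper, and the test are all invented for illustration:

```python
import pytest

def legacy_export_header() -> bytes:
    # Stand-in for the product code under test, invented for this sketch.
    return b"%LEGACY1.0"

@pytest.mark.stable  # custom marker; register it under [pytest] "markers" to avoid warnings
def test_legacy_export_header_unchanged():
    # Confirms a long-stable scenario still works: rarely fails, never deleted.
    assert legacy_export_header().startswith(b"%LEGACY")
```

With the marker registered in pytest.ini, a fast pass on every change could run pytest -m "not stable", while the full regression pass before a release simply runs pytest; nothing gets deleted just because it stopped finding bugs.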

So, the next time somebody tells you that testers find bugs, please politely correct them: "No. They validate products."
