The Four C's of Requirements Testing

According to worldwide statistics, the main source of bugs (about 56%) is requirements (no offense to Business Analysts), the second is design errors (about 27%), and coding errors come fourth (with 7%) after other errors (10%). The best way to decrease the number of newly opened bugs is to introduce a requirements analysis stage that takes place before coding of the functionality starts. Requirements bugs are also the most expensive to fix (requirements errors account for 82%, design errors for 13%, and coding errors for 1% of the total cost to fix defects), and their cost grows exponentially over time. I think no one knows a product as well as a tester who works with it every day (no offense to Product Owners and Business Analysts). The QA department has to check all specs (requirements specification, wireframes, and mockups) for completeness, clearness, correctness, consistency, and testability.

Let's imagine we have a requirement for a registration page which says: “When a user enters a valid email address in the field and clicks on a green Submit button, he should see a message about successful registration. If a user enters an invalid email address, then he has to see an error message in a user-friendly format. When a user starts entering something in the text field, a “Submit” button (related to this field) should become disabled.”

Completeness. A requirement must contain all the information necessary for developers. A QA engineer should check that all possible scenarios have been considered in the requirements and should try to find any gaps or uncovered cases. We should ask ourselves: “Have we asked the stakeholders about their conscious, unconscious, and undreamed-of requirements?”

In our example with a registration page, we can ask: What should happen if a user leaves this field empty and clicks on the button? What should happen if a user with this email address already exists in the system?
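One practical way to keep such gaps from getting lost is to turn each question into a concrete test case right away. Below is a minimal pytest-style sketch; the register helper, the in-memory REGISTERED set, and the message texts are hypothetical stand-ins for the real application, not part of the original requirement.

```python
import re

import pytest

# Stand-in for the application under test; in a real project this would be a
# page object or an API client. All names and messages here are hypothetical.
REGISTERED = {"already.registered@example.com"}


def register(email: str) -> str:
    """Simulates the registration form and returns the message shown to the user."""
    if not email:
        return "Error: the email address is required"
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "Error: invalid email address"
    if email in REGISTERED:
        return "Error: this email address is already registered"
    return "Successful registration"


@pytest.mark.parametrize(
    "email, expected_fragment",
    [
        ("user@example.com", "successful"),              # covered by the requirement
        ("not-an-email", "error"),                       # covered by the requirement
        ("", "error"),                                   # gap: field left empty
        ("already.registered@example.com", "error"),     # gap: duplicate account
    ],
)
def test_registration_messages(email, expected_fragment):
    assert expected_fragment in register(email).lower()
```

The two “gap” rows make the unanswered questions visible to the whole team: the suite cannot be considered complete until the BA decides what the expected behavior and messages are.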

Clearness. Requirements should be transparent and clear to everyone, with one and only one possible interpretation. We should try to find all ambiguities in the requirements; only generally accepted terms should be used (no slang or jargon). Good requirements also should not contain anything like “probably”, “as far as possible”, “adequate”, “as much as practicable”, “fast”, “rapid”, “optionally”, “etc.”, “and so on and so on”, “and the like”...

In our example, we should ask our BA to clarify what a “valid” and an “invalid” email address mean. Are email addresses with special characters (like $, ±) invalid? What is the maximum length of a valid email address?
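Once the BA answers these questions, the definition can be written down as an explicit, testable rule. The maximum length of 254 characters and the simple user@domain pattern below are assumptions made for illustration only; the real values must come from the clarified requirement.

```python
import re

# Assumed, illustrative definition; the real maximum length and pattern must
# come from the clarified requirement, not from this sketch.
MAX_EMAIL_LENGTH = 254
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")


def is_valid_email(email: str) -> bool:
    """Returns True if the address satisfies the (assumed) explicit rule."""
    return len(email) <= MAX_EMAIL_LENGTH and EMAIL_PATTERN.match(email) is not None


assert is_valid_email("user@example.com")
assert not is_valid_email("user$name@example.com")     # special character rejected
assert not is_valid_email("a" * 250 + "@example.com")   # longer than 254 characters
```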

Correctness. All statements should be correct, truthful, and make sense. How correct is our requirement? Is this really what is required from the system, or did someone make a mistake or misprint while writing the requirements?

Back to our example: there is an obvious mistake or typo in the third statement of our requirement; of course, it should say “When a user starts entering something in the text field, a “Submit” button (related to this field) should become enabled.”

Consistency. Requirements should not contradict other requirements and should fully comply with external documentation and with internal and/or external standards. We should try to find all requirements that contradict each other or the requirements of external (or adjacent) systems. The contradiction can be obvious, when two requirements assert the opposite, or hidden, when it is not so clear at first glance.

Let’s imagine that, in addition to the requirements about the registration page considered above, we have a requirement (from the functionality section on special buttons) which says: “All "Submit" buttons should be blue throughout the whole application.” We have an obvious contradiction with the first statement: “When a user enters a valid email address in the field and clicks on a green Submit button, he should see a message about successful registration.”

Pay attention to general wording in the requirements (from our example: “All "Submit" buttons should be blue throughout the whole application”). If such requirements are found, put them on a “dangerous requirements” list and then analyze all other requirements for contradictions with them.

It also helps to divide requirements into categories (for example, to group all the requirements governing buttons, pop-ups, fields, etc.) and to review each category specifically for contradictions. Another approach is to single out all requirements that trace to one upper-level requirement and analyze that set together (usually an inconsistency involves one top-level requirement with several lower-level requirements from different sections that contradict each other).

Let's play a game :) I prepared 3 simple statements, and you will have to identify whether they are correct or not.

1) First sentence: Some engineers are able to analyze requirements. Second sentence: All testers are engineers. Conclusion: Some testers are able to analyze requirements. Do we have any contradictions or anything suspicious there? Is it a correct conclusion?

2) For the next exercise, let's assume that all QA engineers have the same skill set and productivity. The statement: “If 5 QA engineers create 5 autotests in 5 hours, therefore (conclusion) 10 QA engineers write 10 autotests in 10 hours.” Is it a correct conclusion?

3) A pen and a pencil cost $1.50 together; the pen costs $1 more than the pencil. Therefore (conclusion) the pencil costs 50 cents. Is it a correct conclusion?

Answers:


  1. It's pretty convincing, and at first sight it seems like a correct conclusion, but it is not, because all testers could belong to the part of the engineers who are unable to analyze requirements. We have a set of engineers, with a subset who can analyze requirements and a subset who cannot; the subset of testers could be fully contained in the subset who cannot.
  2. This seems true and so easy: 5:5:5 turns into 10:10:10 and looks perfect, but the conclusion is wrong. According to the first part of the statement, 5 engineers need 5 hours to write 5 autotests. If we double the number of tests and also double the number of engineers who have to create them, the timing stays the same: 5 hours, not 10.
  3. Again, it looks so obviously true that you don't even want to think about it, but it is not true.

Let x be the cost of the pencil and y the cost of the pen. According to the statement, we get a system of equations:

x + y = 1.5
y = x + 1

Substitute (x + 1) for y in the first equation: x + (x + 1) = 1.5. Subtract 1 from both sides and simplify: 2x = 0.5, so x = 0.25. Substitute 0.25 for x in y = x + 1: y = 0.25 + 1 = 1.25. So the pen costs $1.25 and the pencil costs $0.25. (A quick check of the “obvious” answer: if the pencil cost 50 cents, the pen would cost $1.50, and together they would cost $2.00, not $1.50.)

Testability. There should be a way to check whether an implementation meets a requirement or not. Is it possible to verify that a requirement is satisfied? How do we do that? What data and tools do we need?

One of the statements in our example requirement will be difficult to test. Let's revisit it: “If a user enters an invalid email address, then he has to see an error message in a user-friendly format.” What will the expected result be for this “user-friendly format”? How do we validate whether it is user-friendly enough or not?
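One way out is to ask the BA to replace the vague wording with measurable expectations. The rewording below (an exact message text and a ban on technical details leaking to the user) is an assumption for illustration, not the original requirement, but it shows how “user-friendly” can stop being a matter of opinion and become a pass/fail check.

```python
# Assumed, measurable rewording of the vague statement:
# "For an invalid email address the form shows exactly the text
#  'Please enter a valid email address' next to the field and does not
#  expose any technical details (stack traces, error codes, SQL)."


def check_error_message(message: str) -> None:
    # Exact, agreed-upon wording instead of "user-friendly format".
    assert message == "Please enter a valid email address"
    # No technical leakage a user should never see.
    for forbidden in ("Exception", "Traceback", "SQL", "500"):
        assert forbidden not in message


check_error_message("Please enter a valid email address")
```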

Testable products are easier and less costly to maintain, and the chances of achieving customer (or user) satisfaction with such products are much higher. That is why testability is an important attribute of any software product's maintainability.

On top of that, if you find (during requirements analysis) that it is impossible to test some functionality due to restrictions of the system, you can always ask the Product Owner to include a task for developers to implement something (a handler, dummy data, or another workaround) to overcome this barrier and make it testable after all. For example, suppose you have a requirement like “When the authorization service is down, we should display a warning message at the top of the main page.” To verify that this requirement is satisfied (and to automate this scenario), we will need the ability to shut this service down and check the behavior of the system. For this, you can ask for a handler (an API call available only in testing environments) that provides the ability to bring the application into this state (in which the authorization service is down). Another example: we have a requirement which says “We have to implement registration functionality in our web application.” Such pages usually contain a captcha, which protects against brute-force attacks and therefore makes it impossible to automate verification of this functionality. But we can ask to add, within the scope of this story, an additional task that provides a way for autotests to overcome this barrier in testing environments exclusively. It could be an environment property that disables the captcha, or it could be a static key (code) that can be entered in the captcha field to pass the validation.
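As a rough sketch of that last idea, the test-only switch can be as simple as an environment property the application reads at startup. The property names, the captcha_passes function, and the static code below are hypothetical and only illustrate the shape of such a workaround; the real mechanism is whatever the team agrees to ship to testing environments exclusively.

```python
import os

# Hypothetical test-only switches; these names are illustrative, not a real API.
CAPTCHA_DISABLED = os.getenv("REGISTRATION_CAPTCHA_DISABLED", "false").lower() == "true"
CAPTCHA_STATIC_CODE = os.getenv("REGISTRATION_CAPTCHA_STATIC_CODE", "")


def captcha_passes(user_input: str, solved_by_human: bool) -> bool:
    """Captcha check with a bypass that must exist only in testing environments."""
    if CAPTCHA_DISABLED:
        # The environment property turns the captcha off entirely for autotests.
        return True
    if CAPTCHA_STATIC_CODE and user_input == CAPTCHA_STATIC_CODE:
        # A static key typed into the captcha field passes validation.
        return True
    # In production neither switch is set, so only a real human solution passes.
    return solved_by_human
```

In an autotest you would set REGISTRATION_CAPTCHA_DISABLED=true (or type the static code into the captcha field) before driving the registration form; in production neither property is set, so the protection stays intact.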

Good requirements should be clear and strict, with no uncertainty or ambiguity; measurable in terms of specific values; testable, with a way to confirm that each requirement has been implemented properly; and complete, without any contradictions.

You will probably encounter some other attributes of “good” requirements, like: 

  • Necessity
  • Priority
  • Traceability
  • Conciseness

But all of them overlap with, or are included in, the ones already considered.

Requirements testing helps to make requirements clearer and to avoid situations in which we miss dependencies, implement something we didn't have to implement, or build something we cannot find a way to validate. It saves the QA and DEV teams a lot of effort and time. All analysis should be finished before the iteration starts (if you are in Scrum), and developers shouldn't write a single line of code until the requirements are tested and everyone is on the same page about the acceptance criteria for the functionality.

To succeed with requirements analysis, you will need to:

  • Have an analytical mindset
  • Not hesitate to ask the Product Owner or Business Analysts any questions
  • Have good domain knowledge (expertise) in the field
  • Review the traceability up and down (to business requirements and to low-level requirements: design, layouts, detailed implementation descriptions)
  • Pay attention to general wordings
  • Put yourself in the Product Owner's, Business Analyst's or regular user's place and try to imagine whether this requirement is clear to you
  • Be guided by common sense and experience.

 

 
