Part 5: A "Six Tau Software Development Process"

What we haven't discussed so far is how many failures can realistically be managed after software integration. There is a limit to the number of failures a single developer can handle within an iteration or sprint. Let us consider a sprint of 4 weeks duration (1 man month of effort). Bug fixing after software integration requires effort for going through the error resolution cycle: a tester detects the failure, a correction task is assigned to a developer, the developer fixes the bug, and the tester confirms the resolution.

I have the following number in mind, an average over multiple projects: 2.5 errors fixed per developer per week. 2.5 errors per week x 4 weeks = 10 errors per month. An error density of 10 injected test errors per month would therefore consume 100% of a developer's attention within a 4-week sprint. Consequently, the manageable error density must be lower, e.g., below 5 test errors per month. In any case, even if your historic data reveal a lower effort per test error, there is a limit on the maximum number of test errors that can be handled after software integration.
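
As a minimal sketch in Python (assuming the average of 2.5 fixed errors per developer and week quoted above; adjust the constants to your own historic data), the capacity argument reads as follows:

ERRORS_FIXED_PER_DEV_PER_WEEK = 2.5   # project average quoted above
SPRINT_WEEKS = 4                      # one sprint corresponds to roughly 1 man month

def max_errors_per_sprint(developers: int = 1) -> float:
    # Upper bound of test errors a team can resolve within one sprint.
    return ERRORS_FIXED_PER_DEV_PER_WEEK * SPRINT_WEEKS * developers

def fixing_load(test_errors_per_month: float, developers: int = 1) -> float:
    # Share of the sprint consumed by the error resolution cycle.
    return test_errors_per_month / max_errors_per_sprint(developers)

print(max_errors_per_sprint())   # 10.0 -> 10 errors fully occupy one developer
print(fixing_load(5))            # 0.5 -> half of the sprint is spent on bug fixing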

There are two concrete measures to reduce the number of test errors showing up after software integration: a) inject fewer errors during the development phases (i.e., do it right the first time more often) and b) detect errors early in the development phases executed before software integration (i.e., reduce error escapes from the left to the right side of the V-cycle). I strongly recommend error escape analysis to improve both a) and b).

The effort-based error density (EDE) is a valuable metric: it measures the number of injected errors per man month (MM) of software-related development effort.

Effort-based Error Density (EDE) = (Review Errors + Test Errors) / Development Effort [MM]

At the end of a software project, we capture the total software-related effort. Errors found in reviews of requirements, architecture and design artefacts, together with code review errors, add up to the review findings. All errors detected by testing add up to the test findings. Of course, you need a guideline defining which errors are counted. Static and dynamic code analysis findings as well as model-based engineering findings can be captured, too. It is important, however, that findings are always counted in the same way, so that different projects can be compared with each other.

Let us assume there is an upcoming software project with an estimated effort of 1,000 MM. Historic data from multiple projects reveal an average EDE of 10 errors/MM. Hence, we get an initial prediction of 1,000 x 10 = 10,000 errors.
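
A small Python sketch of this bookkeeping (the class and function names are illustrative, and the sample record uses invented numbers) could look like this:

from dataclasses import dataclass

@dataclass
class ProjectRecord:
    review_errors: int   # findings from requirements, architecture, design and code reviews
    test_errors: int     # findings from all test phases
    effort_mm: float     # software-related development effort in man months

    @property
    def ede(self) -> float:
        # Effort-based error density = (review + test errors) / effort [errors/MM]
        return (self.review_errors + self.test_errors) / self.effort_mm

def predict_total_errors(effort_mm: float, historic_ede: float) -> float:
    # Initial error prediction for an upcoming project.
    return effort_mm * historic_ede

example = ProjectRecord(review_errors=4000, test_errors=6000, effort_mm=1000)  # invented numbers
print(example.ede)                      # 10.0 errors/MM
print(predict_total_errors(1000, 10))   # 10000.0 errors, as in the example above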

What we don't know yet is the distribution of errors across the development phases (from requirements analysis up to customer test). This information is provided by the error detection profile (EDP). The EDP captures the percentage of errors detected per phase. An alternative representation shows the percentage of errors still remaining after each development phase is completed.
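
The second representation is just the cumulative complement of the first. A short sketch (using the phase abbreviations introduced below and invented detection percentages) illustrates the conversion:

edp_detected = {   # percentage of all errors found per phase (invented values)
    "ANA": 15, "DES": 15, "CUM": 30,
    "SWT": 20, "SIT": 10, "PVV": 7, "CUT": 3,
}

def remaining_after(edp: dict) -> dict:
    # Percentage of errors still open after each phase is completed.
    remaining, found_so_far = {}, 0.0
    for phase, percentage in edp.items():
        found_so_far += percentage
        remaining[phase] = 100.0 - found_so_far
    return remaining

print(remaining_after(edp_detected)["CUM"])   # 40.0 -> 40% left for SWT, SIT, PVV, CUT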

The picture below shows both the EDE and the EDP of several real large-scale software projects. The right-hand chart depicts the percentage of remaining errors after a specific development phase, including all previous phases, is completed. The x-axis of the EDP chart lists the development phases from left to right: ANA = Analysis of Requirements and Architecture, DES = Design (High- and Low-Level), CUM = Coding, Code Reviews, Unit/Module Test, SWT = SW Integration & Verification Test, SIT = System Integration Test, PVV = Product Verification & Validation, CUT = Customer Test.

The vertical line at development phase CUM indicates the percentage of remaining errors after the three phases ANA, DES and CUM are completed. This is exactly the percentage of remaining errors to be handled by the subsequent test phases SWT, SIT, PVV and CUT. The red curves show software projects with a rather weak error detection profile (85% remaining errors after ANA, DES and CUM are completed). In contrast, the green curves depict software projects with a rather strong error detection profile (20% remaining errors after ANA, DES and CUM are completed).

Figure: Effort-based error density and error detection profile of different software releases.

Both the EDE and the EDP together determine the number of errors remaining after code complete. The software-related effort in MM multiplied by the EDE gives the total number of review and test errors. The percentage of remaining errors after CUM multiplied by this total is the number of remaining errors to be handled after code complete.

Let us consider an example project with 1,000 MM of software-related effort. The x-axis of the chart below shows the percentage of remaining errors after code complete, the y-axis shows the effort-based error density, and the z-axis depicts the resulting number of errors after code complete.

The blue pair (EDE, EDP) = (10 errors/MM, 50%) represents an error density of 10 errors/MM with 50% remaining errors; the green pair (EDE, EDP) = (3 errors/MM, 30%) represents an error density of 3 errors/MM with 30% remaining errors. The blue pair results in 1,000 x 10 x 50% = 5,000 errors, the green pair in 1,000 x 3 x 30% = 900 errors. That is more than a factor of 5 between the two profiles, which has a huge impact on the number of errors to be handled after software integration.
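
Expressed as a few lines of Python (the function name is illustrative; the (EDE, EDP) pairs are the ones from the example):

def errors_after_code_complete(effort_mm: float, ede: float, remaining_after_cum: float) -> float:
    # Total errors = effort x EDE; the remaining share of them is still open after CUM.
    return effort_mm * ede * remaining_after_cum

blue = errors_after_code_complete(1000, 10, 0.50)    # 5000.0 errors
green = errors_after_code_complete(1000, 3, 0.30)    # 900.0 errors
print(blue, green, blue / green)                     # roughly a factor of 5.6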

Figure: Number of remaining errors after code complete depending on EDE and EDP.

My recommendation for large-scale software projects is as follows: 1) EDE ≤ 6 errors/MM and 2) EDP at CUM ≤ 40% remaining errors. Note that 40% of 6 errors/MM results in 2.4 remaining errors per MM, which can comfortably be handled within a 4-week sprint, well below the limit of 5 test errors per month discussed above.
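
A simple gate check along these lines (thresholds taken from the recommendation above, function names illustrative) could look like this:

def meets_targets(ede: float, remaining_after_cum: float) -> bool:
    # EDE at most 6 errors/MM and at most 40% remaining errors after CUM.
    return ede <= 6.0 and remaining_after_cum <= 0.40

def remaining_errors_per_mm(ede: float, remaining_after_cum: float) -> float:
    # Test errors left per man month of effort after code complete.
    return ede * remaining_after_cum

print(meets_targets(6, 0.40))             # True
print(remaining_errors_per_mm(6, 0.40))   # 2.4 errors/MM, below the 5 per month discussed above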
