One of the most common methods to measure and verify the quality attributes of a solution is to use scenario-based methods. Scenario-based methods involve defining realistic scenarios that represent the typical or critical situations the solution will face. For each scenario, the quality attribute of interest, the source of the stimulus, the stimulus itself, the environment, the artifact, the response, and the response measure are specified. For example, a scenario for measuring the availability of a web application could be: "A user requests a web page from the application. The application is running on a cloud platform with multiple servers and load balancers. The response measure is the percentage of requests that are successfully served within a predefined time limit." Scenario-based methods can be applied at different stages of solution development, such as requirements elicitation, design, testing, or evaluation.
- I agree with this, as scenario-based methods provide a practical and effective approach to understanding user needs, validating designs, identifying risks, and facilitating decision-making. They enable designers, developers, and stakeholders to explore and evaluate multiple perspectives and potential futures, leading to better-informed decisions and higher-quality solutions.
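To make the availability scenario above concrete, the following minimal sketch (in Python) fires a batch of requests and reports the percentage served within a time budget. The endpoint, sample size, and 2-second budget are illustrative assumptions, not values from any particular project.

```python
# Minimal sketch of the availability scenario: issue a batch of requests and
# report the percentage that succeed within a response-time budget.
# The URL, request count, and 2-second budget are illustrative assumptions.
import time
import urllib.request

URL = "https://example.com/"   # hypothetical endpoint under test
REQUESTS = 50                  # sample size for this scenario run
BUDGET_SECONDS = 2.0           # predefined time limit from the scenario

def served_within_budget(url: str, budget: float) -> bool:
    """Return True if the request succeeds within the time budget."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=budget) as response:
            response.read()
    except OSError:            # covers URLError, HTTPError, and timeouts
        return False
    return (time.monotonic() - start) <= budget

if __name__ == "__main__":
    served = sum(served_within_budget(URL, BUDGET_SECONDS) for _ in range(REQUESTS))
    print(f"Response measure: {100.0 * served / REQUESTS:.1f}% of requests "
          f"served within {BUDGET_SECONDS:.0f}s")
```

In practice a dedicated load-testing tool would generate far more realistic traffic, but even a small script like this turns the scenario's response measure into a number that can be tracked over time.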
Another method to measure and verify the quality attributes of a solution is to use quality attribute frameworks. Quality attribute frameworks are sets of concepts, models, and guidelines that help identify, analyze, and evaluate the quality attributes of a solution. Examples include ISO/IEC 25010, which defines a set of quality characteristics and sub-characteristics for software products; ATAM, the Architecture Tradeoff Analysis Method, which provides a systematic way to assess the trade-offs among quality attributes; and the SEI Quality Attribute Workshop, a facilitated process for eliciting and prioritizing the quality attribute scenarios for a solution. Quality attribute frameworks help establish a common language and criteria for measuring and verifying quality attributes.
- I find this helpful, as quality attribute frameworks promote consistency and reusability in system design and evaluation. They provide a common language and structure for discussing and documenting non-functional requirements. By using a recognized framework, architects can leverage existing knowledge, best practices, and patterns, improving efficiency and consistency across projects.
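One thing these frameworks share is a structured way of writing scenarios down. The sketch below captures the commonly cited six-part SEI scenario form as a simple Python record; the priority field and the example values are illustrative assumptions.

```python
# Sketch of the six-part quality attribute scenario structure that frameworks
# such as ATAM and the SEI Quality Attribute Workshop use as a shared vocabulary.
# The priority field and the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    attribute: str         # e.g. "availability", "performance"
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the system must respond to
    environment: str       # the state the system is in when stimulated
    artifact: str          # the part of the system that is stimulated
    response: str          # the activity the system performs
    response_measure: str  # how the response is quantified
    priority: str = "M"    # e.g. H/M/L, assigned during workshop voting

availability_scenario = QualityAttributeScenario(
    attribute="availability",
    source="end user",
    stimulus="requests a web page",
    environment="normal operation on a multi-server cloud platform",
    artifact="web application front end",
    response="the page is served",
    response_measure="99% of requests served within 2 seconds",
    priority="H",
)

print(availability_scenario)  # ready to be tabulated and prioritized in a workshop
```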
A third method to measure and verify the quality attributes of a solution is to use quality attribute tools. Quality attribute tools are software applications or libraries that support the measurement and verification of quality attributes. Examples include Apache JMeter, an open-source tool for load testing and performance measurement; SonarQube, a platform for code quality analysis and security scanning; and OWASP ZAP, a tool for web application security testing. Quality attribute tools can help automate, monitor, and report the quality attribute metrics and results for a solution.
- Being a visual learner, I find value in this, as quality attribute tools often provide visual representations, diagrams, or reports that facilitate communication and understanding of non-functional requirements among stakeholders. Visualizations help convey complex information and enable effective communication between architects, developers, project managers, and other stakeholders involved in decision-making.
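Most of these tools produce their own dashboards and reports, but their raw output can also be post-processed directly. The sketch below summarizes a load-test results file saved as CSV, assuming JMeter-style "elapsed" and "success" columns; adjust the column names to whatever your tool actually emits, and treat the file name as a placeholder.

```python
# Sketch of post-processing a load-test results file, e.g. a JMeter JTL saved
# in CSV form. Assumes the default "elapsed" (ms) and "success" columns are
# present; adapt the column names to the tool's actual output.
import csv
import statistics

def summarize(results_path: str, threshold_ms: float = 2000.0) -> None:
    elapsed, successes = [], 0
    with open(results_path, newline="") as handle:
        for row in csv.DictReader(handle):
            elapsed.append(float(row["elapsed"]))
            successes += row["success"].lower() == "true"
    total = len(elapsed)
    p95 = statistics.quantiles(elapsed, n=20)[18]  # 95th percentile
    within = sum(e <= threshold_ms for e in elapsed)
    print(f"samples={total}  success={100 * successes / total:.1f}%  "
          f"p95={p95:.0f} ms  within {threshold_ms:.0f} ms: "
          f"{100 * within / total:.1f}%")

# summarize("results.jtl")  # hypothetical results file
```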
A fourth method to measure and verify the quality attributes of a solution is to use quality attribute patterns. Quality attribute patterns are reusable solutions to common quality attribute problems. They describe the problem, the context, the forces, and the solution in terms of the structure, behavior, and interactions of the components involved. For example, a quality attribute pattern for scalability could be: "The problem is to handle an increasing number of requests without degrading the performance or availability of the system. The context is a distributed system with multiple nodes and services. The forces are the trade-offs between cost, complexity, and consistency. The solution is to use a load balancer, a cache, and a database replication strategy." Quality attribute patterns can help to design, implement, and evaluate the quality attributes of a solution.
- What I have observed, and what has helped my teams, is the way quality attribute patterns support design trade-offs when dealing with conflicting non-functional requirements. In complex systems, it is common to encounter trade-offs between different quality attributes. Patterns provide guidance on balancing these trade-offs and making informed decisions based on the project's priorities and constraints. They help in identifying the impact of design decisions on multiple quality attributes and selecting the most appropriate solution.
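As one concrete illustration of the scalability pattern above, the sketch below shows its caching element: a read-through cache with a time-to-live in front of a slow backing store. The loader function and the 30-second TTL are illustrative assumptions.

```python
# Minimal sketch of one element of the scalability pattern: a read-through
# cache with a TTL in front of a slow backing store. The fetch_from_database
# function and the 30-second TTL are illustrative assumptions.
import time
from typing import Any, Callable, Dict, Tuple

class ReadThroughCache:
    def __init__(self, loader: Callable[[str], Any], ttl_seconds: float = 30.0):
        self._loader = loader
        self._ttl = ttl_seconds
        self._entries: Dict[str, Tuple[float, Any]] = {}

    def get(self, key: str) -> Any:
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]                    # fresh entry: no backend call
        value = self._loader(key)            # miss or stale: go to the backend
        self._entries[key] = (now, value)
        return value

def fetch_from_database(key: str) -> str:
    time.sleep(0.1)                          # stand-in for a slow query
    return f"value-for-{key}"

cache = ReadThroughCache(fetch_from_database, ttl_seconds=30.0)
print(cache.get("user:42"))   # slow: loads from the backend
print(cache.get("user:42"))   # fast: served from the cache
```

The trade-off named in the pattern's forces shows up directly here: cached reads may be up to one TTL stale, which is the consistency cost paid for fewer backend calls.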
A fifth method to measure and verify the quality attributes of a solution is to use quality attribute reviews. Quality attribute reviews are formal or informal processes that involve the participation of stakeholders, experts, and peers to examine and validate the quality attributes of a solution. Quality attribute reviews can take different forms, such as inspections, audits, walkthroughs, or feedback sessions. For example, a quality attribute review for usability could be: "A group of potential users are invited to test the user interface of the solution and provide their opinions and suggestions. The user interface is evaluated based on the criteria of efficiency, effectiveness, satisfaction, learnability, and accessibility." Quality attribute reviews can help to identify and resolve the issues and gaps related to the quality attributes of a solution.
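A simple way to turn such a review session into a response measure is to have each participant score the interface per criterion and aggregate the results. The sketch below does this with made-up 1-to-5 scores, purely for illustration.

```python
# Sketch of tallying a usability review session: each participant scores the
# interface on the criteria named above using a 1-5 scale. All scores below
# are made-up placeholders for illustration.
from statistics import mean

criteria = ["efficiency", "effectiveness", "satisfaction", "learnability", "accessibility"]

# One dict per participant, collected during the feedback session.
responses = [
    {"efficiency": 4, "effectiveness": 5, "satisfaction": 3, "learnability": 4, "accessibility": 2},
    {"efficiency": 3, "effectiveness": 4, "satisfaction": 4, "learnability": 5, "accessibility": 3},
    {"efficiency": 5, "effectiveness": 4, "satisfaction": 4, "learnability": 4, "accessibility": 2},
]

for criterion in criteria:
    scores = [r[criterion] for r in responses]
    flag = "  <- follow up" if mean(scores) < 3.5 else ""
    print(f"{criterion:13s} avg={mean(scores):.1f}{flag}")
```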
A sixth method to measure and verify the quality attributes of a solution is to use quality attribute experiments. Quality attribute experiments are scientific methods that involve the manipulation of variables, the observation of outcomes, and the analysis of data to test the hypotheses and assumptions related to the quality attributes of a solution. Quality attribute experiments can be conducted in different environments, such as laboratory, field, or simulation. For example, a quality attribute experiment for reliability could be: "A fault injection technique is applied to inject errors into the system and observe its behavior and recovery mechanisms. The reliability is measured by the mean time between failures and the mean time to repair." Quality attribute experiments can help to validate and improve the quality attributes of a solution.
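The reliability measures named in that example can be derived directly from the experiment's event log. The sketch below computes MTBF, MTTR, and the steady-state availability MTBF / (MTBF + MTTR) from made-up failure and recovery times.

```python
# Sketch of deriving reliability measures from a fault-injection run: MTBF
# (mean time between failures), MTTR (mean time to repair), and the
# steady-state availability MTBF / (MTBF + MTTR). The event times (in hours
# since the start of the experiment) are made-up placeholders.
failures   = [12.0, 55.0, 130.0, 180.0]   # when injected faults caused failures
recoveries = [12.5, 56.0, 130.2, 181.5]   # when the system was back in service

repair_times = [r - f for f, r in zip(failures, recoveries)]
uptimes = [failures[0]] + [f - r for f, r in zip(failures[1:], recoveries[:-1])]

mtbf = sum(uptimes) / len(uptimes)
mttr = sum(repair_times) / len(repair_times)
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.2f} h, availability = {availability:.4f}")
```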