Navigating the Cloud: Early Performance Validation for a Successful Migration - Part V
Hello All,
Welcome to another installment of the series!
Having covered a critical success factor of #performancetesting – the #teststrategy – in the earlier post, let us now look beyond it in this post: at the environment, tools, test data, workflows, workload model and workflow scripting.
First, understand the end-to-end critical business workflows that need to be simulated. These inputs can come from the business team, but do not stop there. Talk to technical architects and production support teams to understand whether there are any resource-intensive operations, and factor those operations into your performance simulation workflows. Once the list of critical scenarios is finalized by business, architects and support teams, ensure that all of them are available in the performance test environment; ideally, a production-scale instance will help. Most importantly, document the finalized workflows along with the steps involved, and share them with the stakeholders involved to obtain their buy-in and sign-off.
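As a simple illustration, the finalized workflows can be kept in a small machine-readable catalogue that doubles as the sign-off artefact. The workflow names, steps and fields below are hypothetical examples, not taken from any specific engagement:

```python
# A minimal sketch of a workflow catalogue; all names, steps and fields are
# illustrative assumptions only.
CRITICAL_WORKFLOWS = [
    {
        "name": "order_checkout",
        "source": "business team",
        "steps": ["login", "search_product", "add_to_cart", "checkout", "logout"],
        "resource_intensive": False,
        "signed_off": False,   # flip to True once stakeholders approve
    },
    {
        "name": "month_end_report",
        "source": "production support",   # flagged as a resource-intensive operation
        "steps": ["login", "generate_report", "download_report"],
        "resource_intensive": True,
        "signed_off": False,
    },
]

for wf in CRITICAL_WORKFLOWS:
    print(wf["name"], "-", len(wf["steps"]), "steps, signed off:", wf["signed_off"])
```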
Next, choose a toolkit (scripting, load injection, monitoring) that works well for the engagement from a technical, skills and client-budget standpoint. Once the toolkit is in place, begin capturing the business-critical workflows – typically via a recording proxy / man-in-the-middle mode – with an appropriate tool such as #jmeter (or any other tool deemed fit for the purpose). Ensure that each step of the workflow is captured properly and parameterized appropriately at both the request level and the header level. Take utmost care when parameterizing the URLs, header manager components, body data and request parameters. Also ensure that comments are added appropriately for easier future maintenance.
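To make the idea of parameterization concrete, here is a minimal Python sketch – standing in for whatever tool is chosen – where the URL, headers and body of a recorded step are driven from a test-data file instead of the hardcoded values left behind by the recording. The host name, endpoint and CSV columns are assumptions for illustration only:

```python
import csv
import requests  # stand-in for the load tool's HTTP sampler

BASE_URL = "https://perf-env.example.com"  # assumed performance-test host

def replay_order_step(user_row):
    """Replays one captured request with user-specific values substituted in."""
    headers = {
        "Authorization": f"Bearer {user_row['token']}",  # parameterized header
        "Content-Type": "application/json",
    }
    body = {"accountId": user_row["account_id"], "quantity": int(user_row["quantity"])}
    # Parameterized URL and body: no recorded user IDs or session IDs baked in.
    return requests.post(f"{BASE_URL}/api/orders", headers=headers, json=body, timeout=30)

# users.csv is an assumed test-data file with columns: account_id, token, quantity
with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):
        response = replay_order_step(row)
        print(row["account_id"], response.status_code)
```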
Once the script is built, perform dry runs with sample #data to ensure the script works seamlessly for different users. It is also important to note that, as part of scripting, you will need to give thought to the test data strategy.
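If #jmeter is the chosen tool, a dry run could look like the sketch below: replay the plan in non-GUI mode once per sample user and check the exit code. The plan name, property names and user IDs are assumptions; the plan would pick the properties up via JMeter's __P() function:

```python
import subprocess

SAMPLE_USERS = ["perfuser01", "perfuser02", "perfuser03"]  # hypothetical test accounts

for user in SAMPLE_USERS:
    # Non-GUI run: jmeter -n -t <plan> -J<property>=<value> -l <results file>
    result = subprocess.run(
        [
            "jmeter", "-n",
            "-t", "checkout_workflow.jmx",   # hypothetical test plan
            f"-Jusername={user}",            # read in the plan as ${__P(username)}
            "-Jthreads=1", "-Jloops=1",      # one user, one iteration
            "-l", f"dryrun_{user}.jtl",
        ],
        capture_output=True,
        text=True,
    )
    print(user, "exit code:", result.returncode)
```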
When you look at #testdata , start by thinking about how the data your workflows need can be classified.
Data is a vast topic; we shall park it here for now and revisit it in a separate series! What matters is that, as you navigate and play around with the #workflows , you identify the data needs and build the necessary test data as desired and requested by the business.
Once the scripts have been built, it is ideal to establish baselines: start with a single user running a single iteration of an end-to-end flow, then a single user running multiple iterations consistently. After that, build baselines for multiple users with a single iteration, and finally multiple users with multiple iterations. This ground-up baselining approach helps unearth performance issues early.
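Building on the same command-line pattern as the dry run above, this baseline progression can be automated as a small matrix of user/iteration combinations. The plan name, property names and numbers are assumptions; the thread group in the plan would reference the properties as ${__P(threads,1)} and ${__P(loops,1)}:

```python
import subprocess

# Baseline matrix: (users, iterations) pairs, built up from the smallest case.
BASELINE_MATRIX = [(1, 1), (1, 5), (5, 1), (5, 5)]  # illustrative numbers only

for threads, loops in BASELINE_MATRIX:
    results_file = f"baseline_{threads}u_{loops}i.jtl"
    subprocess.run(
        [
            "jmeter", "-n",
            "-t", "end_to_end_flow.jmx",   # hypothetical test plan
            f"-Jthreads={threads}",
            f"-Jloops={loops}",
            "-l", results_file,
        ],
        check=False,
    )
    print(f"Baseline run complete: {threads} user(s) x {loops} iteration(s) -> {results_file}")
```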
As the scripts are developed, group them together in the tool, in line with the workload model, and perform small-scale load tests to ensure that the scripts all work together seamlessly. Take utmost care that users and their associated data are set up appropriately, so that they do not conflict with one another's operations or throw up login/session issues. As you incrementally group the workflows/scripts together, keep measuring performance iteratively on both the #clientside and the #serverside .
One of the most critical factors when clubbing workflows/scripts together is to confirm that the mix is in accordance with the workload model. The workload model has a significant say in how the load is simulated, and it is the primary yardstick for judging whether the tests reflect business expectations.
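As a rough sketch of that idea, the workload model can be written down as a percentage mix and used to derive how many virtual users each workflow gets at a target concurrency. The workflow names, percentages and target user count below are purely illustrative assumptions:

```python
# Hypothetical workload model: share of total load per workflow (must sum to 100).
WORKLOAD_MODEL = {
    "order_checkout": 50,
    "product_search": 30,
    "month_end_report": 20,
}

TARGET_CONCURRENT_USERS = 200  # assumed business-agreed peak concurrency

assert sum(WORKLOAD_MODEL.values()) == 100, "Workload mix must add up to 100%"

for workflow, share in WORKLOAD_MODEL.items():
    vusers = round(TARGET_CONCURRENT_USERS * share / 100)
    print(f"{workflow}: {share}% of load -> {vusers} virtual users")
```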
That’s it for today! Let us look at test execution, monitoring and results interpretation in another post.