Task: Design Performance Test
This task focuses on designing the workloads used to implement performance tests.
Disciplines: Test
Purpose

The purpose of this task is to:

  • Design the modeled workload
  • Construct workload definitions from modeled workload
Steps
Design the Modeled Workload

Break down the overall transaction rates and user scenarios into a set of keystroke- and button-click-level user action sequences. Each of these user sequences is a building block of the workload for a performance test.
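To make the idea concrete, one such building block might be represented as an ordered list of low-level actions. This is a hypothetical sketch; the action names, targets, and the "create order" scenario itself are illustrative, not prescribed by this task.

```python
# Hypothetical building block: a "create order" scenario broken down into
# keystroke/button-click level actions. All names are illustrative only.

create_order_actions = [
    ("click", "menu:new_order"),
    ("type", "field:customer_id"),
    ("type", "field:item_code"),
    ("click", "button:submit"),
    ("verify", "text:Order accepted"),
]

def describe(actions):
    """Render the action sequence as a readable script outline."""
    return [f"{kind} -> {target}" for kind, target in actions]

for line in describe(create_order_actions):
    print(line)
```

Keeping each sequence this small makes it reusable: the same building block can appear in several workloads at different rates.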

These tests are then mapped back onto transaction rates and the various user types (also called user groups). Each user group has specific rates and frequencies for the transactions that group is expected to perform during the performance test.
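The mapping from user groups to transaction rates can be captured in a simple model. This is a minimal sketch assuming hypothetical group names, user counts, and per-hour rates; none of these numbers come from the task itself.

```python
from dataclasses import dataclass, field

# Hypothetical model: user groups with per-transaction rates.
# Group names, user counts, and rates are illustrative only.

@dataclass
class UserGroup:
    name: str
    users: int                      # concurrent modeled users in this group
    rates_per_hour: dict = field(default_factory=dict)  # transactions/user/hour

    def total_rate(self, transaction: str) -> float:
        """Transactions per hour this group contributes for one transaction type."""
        return self.users * self.rates_per_hour.get(transaction, 0.0)

groups = [
    UserGroup("clerk", users=50, rates_per_hour={"create_order": 12, "lookup": 30}),
    UserGroup("manager", users=5, rates_per_hour={"report": 4, "lookup": 10}),
]

def workload_total(transaction: str) -> float:
    """Sum a transaction's rate across all user groups."""
    return sum(g.total_rate(transaction) for g in groups)

print(workload_total("lookup"))   # 50*30 + 5*10 = 1550.0
```

Summing across groups this way lets you check that the modeled rates add up to the overall transaction rates the test is meant to reproduce.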

Another part of the design is identifying where the user scenarios need to draw unique input values from a list for each individual modeled user, and where they need to make random selections to defeat caching features of the system under test. You then construct the sets of data values, and adjust the user input sequences and expected responses in the test programs to handle the variation of data inputs.
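Both kinds of data variation can be sketched together: a per-user unique value (so modeled users do not collide) and a random selection (so repeated requests do not hit a cache). The account-id format and search terms below are hypothetical assumptions for illustration.

```python
import random

# Hypothetical sketch of data variation for modeled users.
# Each virtual user gets a unique account id (one per user, no collisions),
# and search terms are picked at random to defeat server-side caching.

unique_account_ids = [f"ACCT{n:05d}" for n in range(1, 101)]   # one per virtual user
search_terms = ["printer", "laptop", "monitor", "keyboard", "mouse"]

def inputs_for_user(user_index: int, rng: random.Random) -> dict:
    """Build the data inputs one modeled user feeds into its scenario."""
    return {
        "account_id": unique_account_ids[user_index],   # unique per user
        "search_term": rng.choice(search_terms),        # random, cache-defeating
    }

rng = random.Random(42)  # seeded so a rehearsal run is reproducible
for i in range(3):
    print(inputs_for_user(i, rng))
```

Seeding the random generator is a deliberate choice here: it keeps individual runs repeatable while still varying the values the system under test sees within a run.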
Build Workload Elements
Once all of the tests have been validated, a test schedule is created that models the production workload in order to achieve the goals of the performance test. The workload definition document should include the specifications for think-time variation, transaction rates, and the mixture of tests performed by each type of user described. These specifications result in parameter settings and a test sequence design that are captured in the test schedule.
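Those three specifications can be captured as schedule parameters and sampled at run time. The following sketch assumes a uniform think-time variation band and a weighted test mixture; the group name, numbers, and structure are illustrative, not a prescribed workload-definition format.

```python
import random

# Hypothetical schedule parameters from a workload definition:
# think-time variation, and the mixture of tests per user type.
# All names and numbers are illustrative only.

schedule = {
    "clerk": {
        "think_time_s": {"mean": 8.0, "variation": 0.5},  # +/-50% uniform band
        "mix": {"create_order": 0.3, "lookup": 0.7},      # fractions sum to 1.0
    },
}

def next_think_time(params: dict, rng: random.Random) -> float:
    """Draw a think time uniformly within the configured variation band."""
    mean, var = params["mean"], params["variation"]
    return rng.uniform(mean * (1 - var), mean * (1 + var))

def next_test(mix: dict, rng: random.Random) -> str:
    """Pick the next test to run according to the configured mixture."""
    return rng.choices(list(mix), weights=list(mix.values()))[0]

rng = random.Random(7)
clerk = schedule["clerk"]
print(next_test(clerk["mix"], rng), round(next_think_time(clerk["think_time_s"], rng), 1))
```

Varying think time, rather than fixing it, prevents the modeled users from falling into lockstep and producing artificial load spikes.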
Ensure that Performance Tests are Consistent with Test Architecture
Verify that the designed performance tests are consistent with the test automation architecture. Also review the test architecture to ensure that the modeled performance tests can be supported by the test environment configuration.
Evaluate and Verify Results

Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input work product review your work on this basis.

Since RUP is an iterative delivery process, in many cases work products evolve over time. As such, it is usually not necessary (and is often counterproductive) to fully form a work product that will only be partially used, or will not be used at all, in immediately subsequent work. There is a high probability that the situation surrounding the work product will change, and the assumptions made when it was created will prove incorrect, before the work product is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.
