Task: Evaluate Performance Test
This task describes how to evaluate the results of a performance test.
Relationships
Inputs
  • Mandatory: None
  • Optional: None
  • External: None
Outputs
Steps
Analyze Measurement Results

The test results include documentation of the tests and test schedules that represented the workload used to measure system performance. They also include the system performance measurements themselves, especially the transaction rates and page response times, as well as functional validation that the system operated properly under the workload.
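
To make this concrete, the sketch below shows one way to bundle these result elements into a single record. All names here are hypothetical; this is an illustration, not part of RUP or of any test tool.

    from dataclasses import dataclass, field

    @dataclass
    class PerformanceTestResult:
        """One measured run: the workload that was applied, the headline
        measurements, and the functional validation outcome."""
        workload_name: str            # test or schedule that produced the load
        duration_s: float             # length of the measurement interval
        transactions: int             # transactions completed in that interval
        response_times_ms: list = field(default_factory=list)
        functional_validation_passed: bool = False

        @property
        def transaction_rate(self) -> float:
            """Transactions per second over the measurement interval."""
            return self.transactions / self.duration_s if self.duration_s else 0.0

    result = PerformanceTestResult("checkout-peak", 600.0, 90_000,
                                   [120.0, 145.0, 98.0], True)
    print(f"{result.transaction_rate:.1f} tx/s")  # 150.0 tx/s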

You must also capture the configuration and environment settings of the system under test, as well as any documentation or programs used to reset the system state at the beginning of the test run.
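
For example, a capture step might resemble the following Python sketch; the output file name and the APP_ variable prefix are illustrative assumptions, not anything prescribed by RUP.

    import json
    import os
    import platform

    def capture_environment(path="sut_environment.json"):
        """Snapshot basic host configuration so the test run can be reproduced."""
        snapshot = {
            "hostname": platform.node(),
            "os": platform.platform(),
            "python": platform.python_version(),
            "cpu_count": os.cpu_count(),
            # Record only the variables relevant to your test setup.
            "env": {k: v for k, v in os.environ.items() if k.startswith("APP_")},
        }
        with open(path, "w", encoding="utf-8") as f:
            json.dump(snapshot, f, indent=2)

    capture_environment()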

Finally, you should analyze the system resource measurements for both the load-producing systems (the drivers) and the system under test. This should include CPU utilization, memory usage, paging and swapping rates, disk and network I/O utilization, and any application server monitoring data.
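
A sampler along these lines, run on both the drivers and the system under test, could gather such readings. This sketch assumes the third-party psutil library, and the duration and interval defaults are illustrative.

    import time

    import psutil  # third-party: pip install psutil

    def sample_resources(duration_s=10, interval_s=1.0):
        """Collect periodic CPU, memory, paging/swap, and I/O readings."""
        samples = []
        for _ in range(int(duration_s / interval_s)):
            disk = psutil.disk_io_counters()  # may be None on some platforms
            net = psutil.net_io_counters()
            samples.append({
                "ts": time.time(),
                "cpu_pct": psutil.cpu_percent(interval=None),
                "mem_used_mb": psutil.virtual_memory().used / 2**20,
                "swap_used_mb": psutil.swap_memory().used / 2**20,
                "disk_read_mb": disk.read_bytes / 2**20 if disk else 0.0,
                "net_sent_mb": net.bytes_sent / 2**20 if net else 0.0,
            })
            time.sleep(interval_s)
        return samples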

Use this data to confirm that the test interval maintained a steady state long enough to obtain the required response-time and transaction-rate measurements. Based on these measurements and the performance testing goals set out at the beginning of the test, you should be able to draw conclusions about whether the system under test can achieve the expected performance, and to recommend next steps in the development or deployment of the system.
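
One simple way to confirm steady state is to compare throughput near the start and end of the measurement interval. In the sketch below, the window size and the 5% tolerance are illustrative thresholds, not values prescribed by RUP; the percentile helper supports response-time reporting.

    import math
    from statistics import mean

    def is_steady_state(tx_per_second, window=60, tolerance=0.05):
        """Compare mean throughput in the first and last windows of the
        run; a large drift suggests the system never settled."""
        if len(tx_per_second) < 2 * window:
            return False  # run too short to judge
        first = mean(tx_per_second[:window])
        last = mean(tx_per_second[-window:])
        return first > 0 and abs(last - first) / first <= tolerance

    def percentile(values, pct):
        """Nearest-rank percentile, e.g. percentile(times_ms, 95)."""
        ordered = sorted(values)
        return ordered[max(0, math.ceil(pct / 100 * len(ordered)) - 1)]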

Publish Test Results
Create reports that summarize the performance test run, and ensure that the results are reviewed by the development team.
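
A report generator can be as simple as the following sketch; the file name, layout, and metric names are illustrative and would normally follow your project's reporting conventions.

    def write_report(path, metrics):
        """Write a plain-text summary suitable for team review."""
        lines = ["Performance Test Report", "=" * 24]
        lines += [f"{name}: {value:.2f}" for name, value in metrics.items()]
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")

    write_report("perf_report.txt", {
        "transaction_rate_tps": 150.0,
        "p95_response_ms": 430.0,
        "driver_cpu_pct": 72.5,
    })
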
Evaluate and Verify Results

Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. Evaluate whether your work is of appropriate quality and whether it is complete enough to be useful to the team members who will make subsequent use of it as input to their own work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input work product review your work on this basis.

Since RUP is an iterative delivery process, in many cases work products evolve over time. As such, it is not usually necessary (and is often counterproductive) to fully form a work product that will only be partially used, or not used at all, in immediately subsequent work. This is because there is a high probability that the situation surrounding the work product will change (and the assumptions made when the work product was created will prove incorrect) before the work product is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, consider using an administrative resource to perform presentation tasks.
