Practice: Performance Testing
This practice describes the main steps of the performance testing process and the most important test aspects that need to be considered when performance is a main concern for the system under development.
Purpose

Performance testing is a well-understood discipline that has been practiced for more than 30 years. First, there were time-sharing capabilities on mainframe computers, then minicomputers with multiple asynchronous terminals, and later networks of personal computers connected to server systems. Testing all of these systems revolved around the need to understand the capacity of the shared portions of the system.

The process of performance testing has not changed significantly since these earlier system types were being tested. However, the complexity of current system designs -- with more distributed intelligent hardware components and many more interconnected software subsystems -- poses greater challenges in the analysis and tuning parts of the process. On current systems, performance testing should be done iteratively and often, throughout system design and implementation. Tests should be performed during implementation of critical subsystems, during their integration into a complete system, and, finally, on the complete system under full-capacity workloads before it is deployed into production.

How to read this practice

The best way to review a practice is to adopt a multi-pronged approach:

  • Use different perspectives driven by artifacts, activities, test cycles, or roles, and shift between them as your focus changes from what you need to produce to how and when an activity is performed.
  • Start with the Artifacts related to performance testing and decide which ones are important to you and your organization.
  • Analyze the main Performance Testing work pattern, which gives an overview of all of the activities performed as part of a typical performance testing cycle.
  • Drill down into each activity to better understand the tasks and artifacts employed.

The basic steps in the performance testing process are listed here; each step is captured in the Performance Testing Tasks. Illustrative code sketches for several of these steps follow the list.

  1. Determine the system performance questions that you need to answer.
  2. Characterize the workload that you want to apply to the system.
  3. Identify the important measurements to make within the applied workload.
  4. Establish success criteria for the measurements to be taken.
  5. Design the modeled workload, including elements of variation.
  6. Build the modeled workload elements, and validate each at the various stages of development (single, looped, parallel, and loaded execution modes).
  7. Construct workload definitions for each of the experiment load levels to collect your workload measurements.
  8. Run the test and monitor the system activities to make sure that the test is running properly.
  9. Analyze measurement results and perform system-tuning activities as necessary to improve the results, and then repeat test runs as necessary.
  10. Publish the analysis of the measurements to answer the established system performance questions.
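
To make steps 5 through 8 concrete, the following minimal sketch shows a modeled workload driver written in Python. It spawns a configurable number of virtual users, each issuing requests against a hypothetical endpoint with randomized think time as the element of variation, and records per-request latencies for later analysis. The endpoint URL, user count, and run length are illustrative assumptions, not values prescribed by this practice.

    import random
    import threading
    import time
    import urllib.request

    # Illustrative assumptions: substitute values from your own workload model.
    TARGET_URL = "http://localhost:8080/health"   # hypothetical system under test
    VIRTUAL_USERS = 10                            # one experiment load level
    RUN_SECONDS = 60

    latencies = []                    # per-request response times, in seconds
    latencies_lock = threading.Lock()

    def virtual_user(stop_at):
        """One modeled user: issue a request, record latency, then 'think'."""
        while time.time() < stop_at:
            start = time.time()
            try:
                with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                    resp.read()
                with latencies_lock:
                    latencies.append(time.time() - start)
            except OSError:
                pass  # a real harness would count and classify errors
            time.sleep(random.uniform(0.5, 2.0))  # randomized think time (variation)

    stop_at = time.time() + RUN_SECONDS
    threads = [threading.Thread(target=virtual_user, args=(stop_at,))
               for _ in range(VIRTUAL_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"{len(latencies)} requests in {RUN_SECONDS}s "
          f"({len(latencies) / RUN_SECONDS:.1f} req/s)")

Running the same driver with VIRTUAL_USERS set to 1, then to progressively higher values, corresponds to the single, looped, parallel, and loaded validation modes of step 6 and to the experiment load levels of step 7.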

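For steps 4, 9, and 10, raw measurements must be reduced to the figures named in the success criteria. The sketch below, again an illustration rather than part of the practice, computes throughput and latency percentiles from a list of response times such as the one collected above; the 2-second 95th-percentile threshold is an assumed example of a success criterion.

    # Minimal analysis sketch: reduce raw latencies to the measurements
    # named in the success criteria. The 2.0 s p95 threshold is an assumed
    # example, not a value prescribed by this practice.
    def percentile(sorted_values, p):
        """Nearest-rank percentile of an already-sorted list."""
        if not sorted_values:
            raise ValueError("no measurements collected")
        rank = max(0, int(round(p / 100.0 * len(sorted_values))) - 1)
        return sorted_values[rank]

    def evaluate(latencies, run_seconds, p95_criterion=2.0):
        data = sorted(latencies)
        p50, p95, p99 = (percentile(data, p) for p in (50, 95, 99))
        print(f"throughput={len(data) / run_seconds:.1f} req/s  "
              f"p50={p50:.3f}s  p95={p95:.3f}s  p99={p99:.3f}s")
        return p95 <= p95_criterion   # pass/fail against the step 4 criterion

    # Example: after a driver run, evaluate(latencies, RUN_SECONDS)
    # answers the pass/fail question and prints figures to publish.
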
Also review the guidelines, concepts, and, if applicable, tool-related guidance.

Additional Information
For more information on this practice, see the practice resource page on IBM® DeveloperWorks®.