Practice: Performance Testing

This practice describes the main steps of the performance testing process and the most important test aspects that need to be considered when performance is a main concern for the system under development.

Purpose
Performance testing is a well-understood discipline that has been practiced
for more than 30 years. First, there were time-sharing capabilities on mainframe
computers, then minicomputers with multiple asynchronous terminals, and later
networks of personal computers connected to server systems. Testing all of these
systems revolved around the need to understand the capacity of the shared portions
of the system.
The process of performance testing has not changed significantly since these
earlier system types were being tested. However, the complexities of the system
design -- with more distributed intelligent hardware components and many more
interconnected software subsystems -- yield more challenges in the analysis
and tuning parts of the process. On current systems, performance testing should
be done iteratively and often during the system design and implementation. Tests
should be performed during implementation of critical subsystems, during their
integration into a complete system, and, finally, under full-capacity workloads
before being deployed into production.
How to read this practice
The best way to review a practice is to adopt a multi-pronged approach:
- Use different perspectives driven by artifacts, activities, test cycles, or roles. Shift between them when your focus changes from what you need to produce to how or when an activity is performed.
- Start with the performance-testing-related artifacts and decide which ones are important to you and your organization.
- Analyze the main Performance Testing work pattern, which gives an overview of all of the activities performed as part of a typical performance testing cycle.
- Drill down into each activity to better understand the tasks and artifacts employed.
The basic steps in the performance testing process are listed here. Each step is captured in the Performance Testing tasks.
- Determine the system performance questions that you need to answer.
- Characterize the workload that you want to apply to the system.
- Identify the important measurements to make within the applied workload.
- Establish success criteria for the measurements to be taken.
- Design the modeled workload, including elements of variation.
- Build the modeled workload elements, and validate each at the various stages of development (single, looped, parallel, and loaded execution modes).
- Construct workload definitions for each of the experiment load levels to collect your workload measurements.
- Run the test and monitor the system activities to make sure that the test is running properly.
- Analyze measurement results and perform system-tuning activities as necessary to improve the results, and then repeat test runs as necessary.
- Publish the analysis of the measurements to answer the established system performance questions.
Also review the guidelines, concepts, and, if applicable, tool-related guidance.
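The core of these steps -- applying a workload at several load levels, measuring, and checking the results against success criteria -- can be illustrated with a minimal sketch. The example below is hypothetical: `send_request` stands in for a real call to the system under test, the load levels and the 100 ms 95th-percentile latency criterion are invented for illustration, and a real harness would use a dedicated load-testing tool rather than a thread pool.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def send_request():
    """Stand-in for one request to the system under test (hypothetical)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server-side work
    return time.perf_counter() - start


def run_load_test(concurrency, requests_per_worker=5):
    """Apply a fixed-concurrency workload and collect per-request latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(concurrency * requests_per_worker)]
        return [f.result() for f in futures]


# Experiment load levels, with an example success criterion:
# 95th-percentile latency must stay under 100 ms at every level.
for level in (1, 5, 10):
    latencies = run_load_test(level)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    verdict = "PASS" if p95 < 0.100 else "FAIL"
    print(f"concurrency={level:2d}  p95={p95 * 1000:.1f} ms  {verdict}")
```

Running each load level more than once, and comparing runs before and after tuning, mirrors the analyze-tune-repeat loop described in the steps above.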
Licensed Materials - Property of IBM
© Copyright IBM Corp. 1987, 2012. All Rights Reserved.