Task: Run Tests
This task describes how to run tests required to evaluate product quality, and how to capture test results that facilitate ongoing assessment of the product.
Steps
Setup Test Environment to Known State

Set up the test environment to ensure that all of the required components (hardware, software, tools, data, and so on) have been established, and are available and ready in the test environment in the correct state to enable the tests to be conducted. Typically, this will involve some form of basic environment reset (for example, Registry and other configuration files), restoration of underlying databases to the required state, and the setup of any peripheral devices (such as loading paper into printers). While some tasks can be performed automatically, some aspects typically require human attention.

The use of environment support tools (such as those that enable hard-disk image capture and restoration) is extremely valuable in managing this effort effectively.
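
As an illustration, a minimal reset script might look like the following Python sketch. The baseline paths, database name, and the use of pg_restore are assumptions; substitute the mechanisms your environment actually uses.

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical locations; adjust to your environment.
    CONFIG_TEMPLATE = Path("baselines/app_config.ini")  # known-good configuration
    CONFIG_TARGET = Path("C:/app/app_config.ini")
    DB_SNAPSHOT = Path("baselines/testdb.dump")         # known-good database dump

    def reset_environment():
        # Restore configuration files from the baseline copy.
        shutil.copy(CONFIG_TEMPLATE, CONFIG_TARGET)

        # Restore the database from a snapshot (here a PostgreSQL dump;
        # replace with your DBMS's own restore command).
        subprocess.run(
            ["pg_restore", "--clean", "--dbname=testdb", str(DB_SNAPSHOT)],
            check=True,
        )
        print("Environment reset to known state.")

    if __name__ == "__main__":
        reset_environment()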

Set Execution Tool Options

Set the execution options of the supporting tools. Depending on the sophistication of the tool, there may be many options to consider. Failing to set these options appropriately may reduce the usefulness and value of the resulting Test Logs and other outputs. Where possible, you should try to store these tool options and settings, so that they can be reloaded easily based on one or more predetermined profiles. In the case of automated test execution tools, there may be many different settings to be considered, such as the speed at which execution should be performed.

In the case of manual testing, it is often simply a matter of logging into issue- or change-request tracking systems, or creating a new, unique entry in a support system for logging results. You should give some thought to concerns such as the name, location, and state of the Test Log to be written to.
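
For example, settings can be kept as named profiles in a small file and reloaded before each run. A sketch in Python, with illustrative profile names and option keys (not taken from any particular tool):

    import json
    from pathlib import Path

    PROFILES_FILE = Path("execution_profiles.json")

    def save_profile(name, options):
        profiles = {}
        if PROFILES_FILE.exists():
            profiles = json.loads(PROFILES_FILE.read_text())
        profiles[name] = options
        PROFILES_FILE.write_text(json.dumps(profiles, indent=2))

    def load_profile(name):
        profiles = json.loads(PROFILES_FILE.read_text())
        return profiles[name]

    # Example: a "nightly" profile with illustrative option keys.
    save_profile("nightly", {
        "playback_speed": "slow",        # speed at which execution is performed
        "log_path": "logs/nightly.log",  # name and location of the Test Log
        "capture_screens": True,
    })
    options = load_profile("nightly")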

Schedule Test Suite Execution

In many cases where test execution is attended, the Test Suite can be executed more or less on demand. Scheduling then needs to take into account the work of other testers and team members, as well as other test teams that share the test environment, and test execution will typically need to work around infrequent environment resets.

However, in cases where unattended execution of automated tests is desired, or where the execution of many tests running concurrently on different machines must be coordinated, some form of automated scheduling mechanism may be required. Either use the features of your automated test execution tool, or develop your own utility functions to enable the required scheduling.
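
If you build your own utility, the standard library may be enough for simple cases. The Python sketch below staggers two unattended suite runs on one machine; the suite paths and the use of pytest as the execution tool are assumptions.

    import sched
    import subprocess
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def run_suite(command):
        # Launch the suite unattended and record its exit status.
        result = subprocess.run(command)
        print(command, "exited with", result.returncode)

    # Hypothetical suite paths; stagger the runs so that suites sharing
    # the test environment do not execute concurrently.
    scheduler.enter(0, 1, run_suite, (["python", "-m", "pytest", "suites/smoke"],))
    scheduler.enter(3600, 1, run_suite, (["python", "-m", "pytest", "suites/regression"],))
    scheduler.run()  # blocks until all scheduled runs have completed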

Execute Test Suite

Executing the Test Suite will vary depending upon whether testing is conducted automatically or manually. In either case, the test suites developed during the test implementation tasks are used to either execute the tests automatically, or guide the manual execution of the tests.
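
For automated execution, running the suite is typically a single driver call. As a minimal sketch, assuming the implemented suites are unittest-style test modules under a tests/ directory:

    import unittest

    # Discover the implemented Test Suite and execute it; verbosity=2
    # logs each test as it runs.
    suite = unittest.defaultTestLoader.discover("tests")
    runner = unittest.TextTestRunner(verbosity=2)
    result = runner.run(suite)

    # result.wasSuccessful() distinguishes a clean run from one with
    # failures or errors.
    print("Normal completion" if result.wasSuccessful() else "Failures recorded")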

Evaluate Execution of Test Suite

Test execution ends in one of two conditions:

  • Normal: All of the Tests execute as intended to completion.
  • Abnormal or premature: The Tests did not execute completely as intended. When testing ends abnormally, the Test Logs from which subsequent Test Results are derived may be unreliable. The cause of the abnormal termination needs to be identified and, if necessary, the fault corrected and the tests re-executed.
Recover from Halted Tests

To recover from halted tests, do the following:

  • Inspect the Test Logs and other output
  • Correct errors
  • Schedule and execute the Test Suite again
  • Reevaluate the execution of the Test Suite

For more details, see Guideline: Recovering from Halting Tests.
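
A first-pass inspection of the Test Logs can be scripted. The sketch below scans a plain-text log for markers of premature termination before deciding whether a rerun is warranted; the marker strings and log path are assumptions about your tool's log format.

    from pathlib import Path

    # Marker strings are assumptions about the log format produced by
    # your execution tool; adjust them to match real output.
    ABNORMAL_MARKERS = ("TIMEOUT", "ENVIRONMENT ERROR", "ABORTED")

    def execution_was_normal(log_path):
        text = Path(log_path).read_text()
        if "RUN COMPLETE" not in text:  # the suite never reached its end
            return False
        return not any(marker in text for marker in ABNORMAL_MARKERS)

    if not execution_was_normal("logs/nightly.log"):
        print("Abnormal termination: correct the fault, then reschedule the suite.")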

Inspect the Test Logs for Completeness and Accuracy

When test execution initially completes, you should review the Test Logs to ensure that the logs are reliable, and that reported failures, warnings, or unexpected results were not caused by influences external to the target-of-test, such as improper environment setup or invalid test input data.

For GUI-driven automated Tests, common Test failures include:

  • Test verification failures: This occurs when the actual result and the expected result do not match. Verify that the verification method(s) used focus only on the essential items or properties, and modify if necessary.
  • Unexpected GUI windows: This occurs for several reasons. The most common is when a GUI window other than the expected one is active, or the number of displayed GUI windows is greater than expected. Ensure that the test environment has been set up and initialized as intended for proper test execution.
  • Missing GUI windows: This failure is noted when a GUI window is expected to be available (but not necessarily active) and is not. Ensure that the test environment has been set up and initialized as intended for proper test execution, and verify whether the missing windows have actually been removed from the target-of-test.

If the reported failures are due to errors identified in the test work products, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed.

If the Test Log enables you to determine that the failures are due to genuine failures in the Target Test Items, then the execution portion of the task is complete.
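
Where the execution tool writes structured failure records, this triage can be partly automated. A sketch, assuming one JSON record per log line with hypothetical status, failure_type, and test_id fields:

    import json

    # Failure categories from the list above; records of these types
    # suggest an environment problem rather than a genuine failure.
    ENVIRONMENT_SUSPECTS = {"unexpected_window", "missing_window"}

    def triage(log_path):
        buckets = {"genuine": [], "check_environment": []}
        with open(log_path) as log:
            for line in log:
                record = json.loads(line)
                if record.get("status") != "fail":
                    continue
                key = ("check_environment"
                       if record.get("failure_type") in ENVIRONMENT_SUSPECTS
                       else "genuine")
                buckets[key].append(record.get("test_id"))
        return buckets

    # Tests under "check_environment" warrant a setup review and a rerun;
    # those under "genuine" are candidates for defect reports.
    print(triage("logs/gui_run.jsonl"))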

Restore Test Environment to Known State

As in the first step, you should now restore the environment to its original state. Typically, this will involve some form of basic environment reset (for example, Registry and other configuration files), restoration of underlying databases to a known state, and so on, in addition to tasks such as loading paper into printers. While some tasks can be performed automatically, some aspects typically require human attention.

Maintain Traceability Relationships

Using the Traceability requirements for your project, update the traceability relationships as required. A good starting point is to consider traceability in terms of measuring the extent of testing or test coverage. As a general rule, base the measurement of the extent of testing against the motivators that you discovered during the test planning activities.

Test Suites might also be traced to the defined Test Cases that they realize. They may also be traced to elements of the requirements, software specification, design, or implementation.

Whatever relationships you have decided are important to trace, you will need to update the status of the relationships that were established during implementation of the Test Suite.
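
As a simple illustration of measuring the extent of testing, the sketch below computes the proportion of traced requirements covered by at least one executed Test Case; the identifiers and data structures are hypothetical.

    # Illustrative traceability data: requirements mapped to the Test Cases
    # that realize them, plus the set of Test Cases actually executed.
    traces = {
        "REQ-101": ["TC-1", "TC-2"],
        "REQ-102": ["TC-3"],
        "REQ-103": [],  # nothing traced to this requirement yet
    }
    executed = {"TC-1", "TC-3"}

    covered = [req for req, cases in traces.items()
               if any(tc in executed for tc in cases)]
    coverage = len(covered) / len(traces)
    print(f"Requirement coverage: {coverage:.0%} via {covered}")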

Evaluate and Verify Your Results

You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use checklists to verify that quality and completeness are good enough.

Have the people who perform the downstream tasks that rely on your work as input review your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input work products to make sure that you have represented them accurately and sufficiently. It may be useful to have the author of the input work product review your work on this basis.
