Task: Identify Targets of Test
This task describes how to identify the individual system elements, both hardware and software, that need to be tested.
Relationships
Roles
  • Primary:
  • Additional:
  • Assisting:
Inputs
  • Mandatory:
  • Optional:
  • External:
  • None
Outputs
Steps
Determine What Software Will Be Implemented

Using the Iteration Plan and other available sources, identify the individual software items that the development team plans to produce for the forthcoming Iteration. Where the development effort is distributed across teams in various locations, you may need to discuss the development plans with each team directly. Check whether any development is subcontracted, and use whatever channels are available to you to gather details of the subcontractor's development effort.

In addition to new software, note changes to infrastructure and shared components. These changes may affect other dependent or associated software elements produced in previous development cycles, making it necessary to test the effect of these changes on those elements. For similar reasons, you should identify any changes and additions to third-party components that the development effort intends to make use of. This includes shared components, base or common code libraries, GUI widgets, persistence utilities, and so on. Review the software architecture to determine what mechanisms are in use that may be affected by third-party component changes.

Identify Candidate System Elements to Be Tested

For each identified test motivator, examine the list of software items to be delivered as part of this development cycle. Make an initial list that excludes any items that cannot be justified as useful in terms of satisfying the test motivators. Remember to include commercially available software, as well as software to be developed directly by the project development team.

You will also need to consider what impact the various target deployment environments will have on the elements to be tested. Your list of candidate system elements should be expanded to include both the software being developed and the candidate elements of the target environment. These elements include hardware devices, device-driver software, operating systems, network and communications software, and third-party base software components (for example, e-mail client software, Internet browsers, and so on). They also include the various configurations and settings arising from the possible combinations of all of these elements.
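As an illustration, the sketch below enumerates candidate environment combinations from a few element categories. The categories and values are hypothetical examples; in practice they would come from your own analysis of the target deployment environments.

```python
# Illustrative sketch: enumerating candidate target-environment combinations.
# The element categories and values are hypothetical examples, not a
# recommended set.
from itertools import product

environment_elements = {
    "operating_system": ["Windows 11", "Ubuntu 22.04"],
    "browser": ["Chrome", "Firefox"],
    "email_client": ["Outlook", "Thunderbird"],
}

# Each combination is one candidate environment configuration to weigh
# against the test motivators.
for combination in product(*environment_elements.values()):
    candidate = dict(zip(environment_elements.keys(), combination))
    print(candidate)
```

In most efforts the full cross-product is too large to test exhaustively, so the enumeration is only a starting point for selecting the combinations that matter most.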

Where you have identified important target deployment environments, you should consider recording this information by creating or updating one or more outlined Test Environment Configurations. This outline should provide a name and brief description, and enumerate the main requirements or features of the configuration. Avoid spending a lot of time on these outlines, because the list of requirements and features will be subsequently detailed in Task: Structure Test Environment Configuration.
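If you keep these outlines in a lightweight, machine-readable form, a minimal structure such as the following may be enough. The field names and example values are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch of a Test Environment Configuration outline; the field
# names and example values are assumptions, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class TestEnvironmentConfigurationOutline:
    name: str                  # short identifying name
    description: str           # brief description of the configuration
    key_requirements: list[str] = field(default_factory=list)

outline = TestEnvironmentConfigurationOutline(
    name="Corporate desktop client",
    description="Typical end-user workstation running the client software.",
    key_requirements=[
        "Windows 11 with the standard corporate image",
        "Outlook e-mail client installed",
        "Network access to the staging application server",
    ],
)
print(outline)
```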

Refine the Candidate List of Target Items

Using the evaluation mission and the scope of the test effort agreed upon with the evaluation stakeholders, examine the list of target items, identify items that do not satisfy the evaluation mission (or are obviously outside the scope of the test effort), and remove them from the list.

As an opposing check, critically examine the items again, and challenge whether the evaluation mission and test effort scope will really be satisfied by the refined list of target items. It may be necessary to add further elements to the list of target items to ensure appropriate scope and the ability to achieve the evaluation mission.

Define the List of Target Items

Now that you have decided on the target test items, you need to communicate your choices to the test team and other stakeholders in the test effort. Arguably the most common method is to document the decisions about the target items in the Iteration Test Plan.

An alternative is to simply record this information in some form of table or spreadsheet and make use of it to govern work and responsibility assignment. During test implementation and execution, individual testers will make use of this information to make tactical decisions regarding the specific tests to implement, and what test results to capture in relation to these target items.
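One minimal sketch of such a record, assuming a simple CSV file with hypothetical column names and rows, is shown below; any table or spreadsheet format that your team can share and maintain will serve the same purpose.

```python
# Illustrative sketch: recording target test items in a simple CSV
# "spreadsheet" to govern work and responsibility assignment. The column
# names and rows are hypothetical.
import csv

target_items = [
    {"target_item": "Order-entry UI", "test_motivator": "New feature", "owner": "Tester A"},
    {"target_item": "Persistence library upgrade", "test_motivator": "Third-party change", "owner": "Tester B"},
]

with open("target_test_items.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["target_item", "test_motivator", "owner"])
    writer.writeheader()
    writer.writerows(target_items)
```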

Evaluate and Verify Your Results

You should evaluate whether your work is of appropriate quality and whether it is complete enough to be useful to the team members who will make subsequent use of it as input to their work. Where possible, use checklists to verify that quality and completeness are good enough.

Have the people who will perform the downstream tasks that rely on your work as input review your interim work, and do so while you still have time to address their concerns. You should also evaluate your work against the key input work products to make sure you have represented them accurately and sufficiently. It may be useful to have the author of an input work product review your work on this basis.

Properties
Multiple Occurrences
Event Driven
Ongoing
Optional
Planned
Repeatable