Examine all Test Incidents and Failures
Analyze the Test Logs to determine the meaningful Test Results, focusing on the differences between the expected and
actual results of each test. Identify and analyze each incident and failure in turn, and learn as much as you can
about each occurrence.
Check for duplicate incidents, common symptoms, and other relationships between incidents. These conditions often
provide valuable insight into the root cause of a group of incidents.
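The following sketch (in Python, with invented incident records and field names) illustrates one way to group incidents
by shared test item and symptom so that candidate duplicates stand out; it is an illustration only, not a prescribed
tool or record format.

    from collections import defaultdict

    # Hypothetical incident records extracted from the Test Logs; the field
    # names (id, test_item, symptom, expected, actual) are illustrative only.
    incidents = [
        {"id": "INC-101", "test_item": "LoginService", "symptom": "timeout after 30s",
         "expected": "HTTP 200", "actual": "HTTP 504"},
        {"id": "INC-102", "test_item": "LoginService", "symptom": "timeout after 30s",
         "expected": "HTTP 200", "actual": "HTTP 504"},
        {"id": "INC-103", "test_item": "ReportExport", "symptom": "truncated output",
         "expected": "1000 rows", "actual": "512 rows"},
    ]

    # Group incidents that share a test item and symptom; groups with more than
    # one member are candidate duplicates and may point at a common root cause.
    groups = defaultdict(list)
    for incident in incidents:
        groups[(incident["test_item"], incident["symptom"])].append(incident["id"])

    for (item, symptom), ids in groups.items():
        if len(ids) > 1:
            print(f"Possible duplicates for {item} ({symptom}): {', '.join(ids)}")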
Create and Maintain Change Requests
Differences indicate potential defects in the Target Test Items and should be entered into a tracking system as
incidents or Change Requests, with an indication of the appropriate corrective actions that could be taken.
For more details, see: Guideline: Creating and Maintaining Change Requests.
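As a purely illustrative aid, a Change Request raised from such a difference might capture information along these
lines; the field names and values below are assumptions for the sketch, not a prescribed schema or tracking-tool
interface.

    from dataclasses import dataclass, field
    from datetime import date

    # Illustrative structure only; real projects normally record Change Requests
    # in their tracking tool, with whatever fields that tool provides.
    @dataclass
    class ChangeRequest:
        identifier: str
        target_test_item: str
        expected_result: str
        actual_result: str
        proposed_corrective_action: str
        raised_on: date = field(default_factory=date.today)
        status: str = "Open"

    cr = ChangeRequest(
        identifier="CR-2041",
        target_test_item="LoginService",
        expected_result="HTTP 200 within 5s",
        actual_result="HTTP 504 after 30s",
        proposed_corrective_action="Investigate gateway timeout configuration",
    )
    print(f"{cr.identifier}: {cr.target_test_item} [{cr.status}]")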
Analyze and Evaluate Status
Make an Assessment of the Current Quality Experience
Formulate a summary of the current quality experience, highlighting both good and bad aspects of the software
product's quality.
Make an Assessment of Outstanding Quality Risks
Identify and explain those areas that have not yet been addressed in terms of quality risks, and indicate the impact
and exposure this leaves for the team.
Assign a priority to each outstanding quality risk, and use that priority to indicate the order in which these issues
should be addressed.
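One possible way to express such a prioritization, assuming a simple impact-times-likelihood scoring scheme and
invented risk entries, is sketched below; the scheme and values are assumptions, not mandated practice.

    # Rank outstanding quality risks so the highest-priority items come first.
    outstanding_risks = [
        {"area": "Concurrency under peak load", "impact": 5, "likelihood": 3},
        {"area": "Localization of error messages", "impact": 2, "likelihood": 4},
        {"area": "Data migration rollback", "impact": 4, "likelihood": 2},
    ]

    for risk in outstanding_risks:
        risk["priority"] = risk["impact"] * risk["likelihood"]

    for rank, risk in enumerate(sorted(outstanding_risks,
                                       key=lambda r: r["priority"], reverse=True), 1):
        print(f"{rank}. {risk['area']} (priority {risk['priority']})")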
Make an Assessment of Test Coverage
Based on the work done for Test Execution Coverage (in the Analyze and Evaluate Status step), provide a brief summary
statement of the status and the information the data represents.
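For illustration, a coverage summary of this kind might be computed from the execution tallies roughly as follows; the
counts are placeholders for the figures gathered in the earlier step, not real project data.

    # Placeholder tallies standing in for the figures gathered during
    # the Analyze and Evaluate Status step.
    planned_tests = 240
    executed_tests = 198
    passed_tests = 173

    execution_coverage = executed_tests / planned_tests
    pass_rate = passed_tests / executed_tests

    print(f"Execution coverage: {execution_coverage:.0%} "
          f"({executed_tests} of {planned_tests} planned tests run)")
    print(f"Pass rate of executed tests: {pass_rate:.0%}")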
Draft the Test Evaluation Summary
Present the Test Results for this Test Cycle in a Test Evaluation Summary. The purpose of this step is to develop the
initial draft of the summary by assembling the information gathered in the previous steps into a readable summary
report. The actual format and content of the summary will differ depending on the stakeholder audience and project
context.
Often it is a good idea to distribute the initial draft to a subset of stakeholders to obtain feedback that you can
incorporate before publishing to a broader audience.
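A minimal sketch of assembling previously gathered material into a plain-text draft follows; the section names and
contents are placeholders, and the real format and content depend on the audience and project context.

    # Assemble the gathered material into a readable draft report.
    sections = {
        "Current Quality Assessment": "Core login and reporting scenarios are stable; "
                                      "export performance remains below target.",
        "Outstanding Quality Risks": "Concurrency under peak load is untested (high priority).",
        "Test Coverage": "83% of planned tests executed; 87% pass rate.",
        "Key Incidents": "CR-2041 gateway timeouts (duplicate reports INC-101/102).",
    }

    draft = ["Test Evaluation Summary (draft)"]
    for heading, body in sections.items():
        draft.append(f"\n{heading}\n{'-' * len(heading)}\n{body}")

    print("\n".join(draft))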
Advise Stakeholders of Key Findings
Using whatever means is appropriate, publicize this information. We recommend posting the findings on a centralized
project site or presenting them in regularly held status meetings, so that feedback can be gathered and next actions
determined.
Be aware that making evaluation summaries publicly available can sometimes be a sensitive political issue. Negotiate
with the development manager to present results in such a manner that they reflect an honest and accurate summary of
your findings, yet respect the work of the developers.
Evaluate and Verify Your Results
You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those
team members who will make subsequent use of it as input to their work. Where possible, use checklists to verify that
quality and completeness are "good enough".
Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim
work. Do this while you still have time available to take action to address their concerns. You should also evaluate
your work against the key input work products to make sure you have represented them accurately and sufficiently. It
may be useful to have the author of the input work product review your work on this basis.