Test Coverage of Requirements
This metric tracks the percentage of requirements that are covered by tests and the percentage that have no associated tests.
Main Description

Purpose

This metric exposes requirements that have no test cases. Its ultimate goal is to ensure that every requirement can be validated, that is, shown to work as intended.

Definition

  • Number of test cases per requirement
  • Number of requirements with no associated test cases
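The two measures above can be computed directly from requirement-to-test traceability data. The following is a minimal sketch; the mapping and its identifiers (`REQ-*`, `TC-*`) are hypothetical placeholders, not a real tool's data format.

```python
# Hypothetical requirement -> test-case traceability mapping.
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # no associated test cases
}

# Number of test cases per requirement.
tests_per_requirement = {req: len(tcs) for req, tcs in traceability.items()}

# Requirements with no associated test cases.
uncovered = [req for req, tcs in traceability.items() if not tcs]

# Percentage of requirements covered by at least one test case.
coverage_pct = 100 * (len(traceability) - len(uncovered)) / len(traceability)

print(tests_per_requirement)  # {'REQ-1': 2, 'REQ-2': 1, 'REQ-3': 0}
print(uncovered)              # ['REQ-3']
print(f"{coverage_pct:.1f}% of requirements covered")
```

In practice this data would come from a requirements management tool rather than a hard-coded dictionary, but the computation is the same.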

Analysis

Software that implements requirements with inadequate test coverage may not work as intended or may have defects. The team should expect 100% coverage of requirements. Verification of coverage requires traceability between requirements and test artifacts.


In some exceptional cases, such as a requirement that can be satisfied by demonstration, a requirement might not need a corresponding test case and can be verified by another verification method instead. In such cases, the team should confirm that all requirements are verified by checking the Verification Matrix report.
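A check against such a verification matrix can be sketched as follows; the matrix structure and method names here are illustrative assumptions, not a specific report format.

```python
# Hypothetical verification matrix: requirement -> verification method.
# None means the requirement is not yet verified by any method.
verification_matrix = {
    "REQ-1": "test",
    "REQ-2": "demonstration",  # acceptable without a test case
    "REQ-3": None,
}

# Flag requirements that lack any verification method at all.
unverified = [req for req, method in verification_matrix.items() if method is None]

if unverified:
    print("Requirements lacking any verification method:", unverified)
```

The point is that the check covers every verification method, not just test cases, so a requirement verified by demonstration does not show up as a gap.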

Tracing requirements to test cases can yield a 10% to 30% defect reduction over several years. Not all requirements will have the same number of test cases or amount of test effort applied.

This pie chart shows the percentage of requirements that have and do not have associated test cases.

[Figure: Test Coverage of Requirements]

Frequency and reporting

Monitor Test Coverage of Requirements each iteration. For iterative development, Test Coverage of Requirements should be close to 100% for requirements planned for that iteration.
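For an iteration-scoped view, the metric is computed only over the requirements planned for that iteration. A minimal sketch, with hypothetical record fields:

```python
# Hypothetical requirement records with an iteration assignment.
requirements = [
    {"id": "REQ-1", "iteration": 3, "test_cases": ["TC-101"]},
    {"id": "REQ-2", "iteration": 3, "test_cases": []},
    {"id": "REQ-3", "iteration": 4, "test_cases": []},
]

# Restrict the metric to requirements planned for the current iteration.
current = [r for r in requirements if r["iteration"] == 3]
covered = sum(1 for r in current if r["test_cases"])

print(f"Iteration 3 coverage: {100 * covered / len(current):.0f}%")
# prints: Iteration 3 coverage: 50%
```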

Collection and reporting tools

Test-to-requirements traceability information can be obtained from a requirements traceability matrix. Tools such as IBM® Rational® DOORS®, IBM® Rational® Requirements Composer®, IBM® Rational® RequisitePro®, and IBM® Rational® Team Concert® can be used to collect the data. IBM® Rational® Quality Manager® provides a Requirements Not Covered by Test report.

IBM® Rational® Insight® provides full support for reporting the data.

Pitfalls, advice, and countermeasures

  • When the metric shows 100% coverage and the team believes that test coverage is good, watch for the following circumstances: 
    • The quality of the tests themselves may be poor.
    • Most requirements need more than one test case.
    • The requirements may be wrong or misunderstood by the test case developer, so 100% coverage of them provides little value.
    • The 100% test coverage represents only the minimum set of tests, or covers only incidental cases.
    • The team is trying to reach 100% coverage despite knowing that the requirements are bad.
  • 100% test-to-requirements coverage does not always mean that the requirements are thoroughly tested. The following items are indicators of measurement pitfalls and should be used to corroborate this metric:
    • Defects rejected by the development team with a resolution of "works as intended", which could imply that the development team and the verification team have different understandings of the requirements.
    • Defects found after the iteration in which the requirement was tested; possible reasons are listed in the pitfalls above.
    • Change requests vs. requirements
    • Acceptance test failure rate