Defect Aging
This metric tracks the length of time a defect spends in each state.
Main Description

Purpose

Defect Aging is a measurement of team capacity, process efficiency, and defect complexity. Teams monitor Defect Aging trends to:

  • determine whether defects are being addressed in a timely manner
  • understand the team's capacity to resolve past defects, which helps with future planning
  • determine whether high-severity, critical defects are taking too long to fix

Definition

Count the number of days each defect spends in each state (e.g. submitted, assigned, resolved).

Calculate the following (a sketch of the calculation follows the list):

  • Average number of days defects spend in each state.
  • Maximum number of days defects spend in each state.
  • Minimum number of days defects spend in each state.
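
As an illustration, the following Python sketch computes these statistics from date-stamped status transitions. The record layout, a list of (state, entered_at) tuples per defect, is an assumption for this example, not any particular tool's schema. Note that it sums time across repeated visits to a state, which matters when defects are reopened (see Pitfalls below).

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def state_durations(transitions):
    """Total days one defect spent in each state, given its transitions
    as (state, entered_at) tuples in chronological order. Time in a
    state is summed across repeated visits; the final (current) state
    is open-ended, so it is not counted."""
    days = defaultdict(float)
    for (state, entered), (_next_state, left) in zip(transitions, transitions[1:]):
        days[state] += (left - entered).total_seconds() / 86400
    return days

def aging_report(defects):
    """defects maps defect_id -> transition list. Returns
    {state: (min_days, avg_days, max_days)} across all defects."""
    per_state = defaultdict(list)
    for transitions in defects.values():
        for state, d in state_durations(transitions).items():
            per_state[state].append(d)
    return {s: (min(v), mean(v), max(v)) for s, v in per_state.items()}

# One defect that was reopened, so it enters "assigned" twice.
defects = {
    "DEF-1": [
        ("submitted", datetime(2024, 1, 1)),
        ("assigned",  datetime(2024, 1, 2)),
        ("resolved",  datetime(2024, 1, 5)),
        ("assigned",  datetime(2024, 1, 6)),  # reopened
        ("resolved",  datetime(2024, 1, 8)),
    ],
}
print(aging_report(defects))  # "assigned" totals 5.0 days (3 + 2 across two visits)
```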

Analysis

Use a bar chart to monitor Defect Aging trends. Plot the number of days on the Y axis and the states on the X axis, with a vertical bar for the average, maximum, and minimum days in each state. Use stacked bars to group by severity. Another useful view is to track how many defects have remained in specific date ranges, grouped by severity; teams typically track under 3 days, 3-5 days, and over 5 days (see the sketch below).
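
A minimal sketch of that second view, assuming open defects are available as (severity, age_in_days) pairs; the input shape is hypothetical, and the bucket boundaries are the typical ranges named above:

```python
from collections import Counter

def age_bucket(days):
    """Map an open defect's age in days onto the ranges the team tracks."""
    if days < 3:
        return "under 3 days"
    if days <= 5:
        return "3-5 days"
    return "over 5 days"

def bucket_by_severity(open_defects):
    """open_defects is a list of (severity, age_in_days) pairs. Returns
    counts keyed by (severity, bucket), ready to plot as stacked bars."""
    return Counter((sev, age_bucket(age)) for sev, age in open_defects)

counts = bucket_by_severity([
    ("critical", 1.5), ("critical", 6.0), ("major", 4.0), ("minor", 2.0),
])
for (severity, bucket), n in sorted(counts.items()):
    print(f"{severity:>8} | {bucket:<12} | {n}")
```

The counts keyed by (severity, bucket) map directly onto a stacked bar chart: one bar per date range, one stack segment per severity.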

Expected trend - The average time in each state is acceptable (meeting the team's established target) and there are no excessive maximum times. The average should not necessarily be the same for all states. For example, a defect should move quickly from Submitted to Assigned if the team's process is working efficiently.

High average in assigned state - This trend occurs when defects spend a long time in the assigned state prior to resolution. This can happen when developers place low priority on debugging and fixing, or when defects are identified late in the process. The older a defect is, the more difficult it may be to correct, since additional code may have been built on top of it, and correcting the defect may have a larger impact throughout the system. When developers don't fix defects quickly, testers may run into a related defect in another area, creating a duplicate report. Confirm that the team is working to address defects by priority in each iteration.

High average time in submitted state - Defects that are not promptly assigned (according to their priority) indicate a problem with the team's process. Either analysis is taking too long, or the team is not placing high enough priority on reviewing submitted defects. Confirm that sufficient, consistent process and tooling are in place to alert the team to newly submitted defects, and that the necessary information is captured with each report for efficient analysis. Confirm the team works to move submissions through the process as quickly as possible.

Confirm that average Defect Aging is acceptable in all states to avoid the risk of impacting the project schedule. This metric can be used to monitor adoption of Iterative Development.

The following figure is an example of a Defect Aging report, which shows days spent in each state.

[Figure: Defect Aging Report]


Frequency and reporting

Data should be harvested for reporting purposes before the end of each iteration. The team should review any defect aging issues during their iteration assessment meetings.


Collection and reporting tools

IBM® Rational® Quality Manager®, IBM® Rational® ClearQuest Enterprise®, and IBM® Rational® Team Concert® collect Defect Aging data. IBM® Rational® Insight® reports on this metric.


Assumptions and prerequisites

  • The team has a tool in place to collect defect data, and has identified states for measurement
  • Defect status updates are collected with date stamps (see the sketch below)
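
As a concrete illustration of the second prerequisite, a minimal export might look like the CSV below. The column names are hypothetical; the essential point is that every status change carries a date stamp so durations can be reconstructed:

```python
import csv
from datetime import datetime
from io import StringIO

# A hypothetical export: every status change carries a date stamp.
RAW = """defect_id,state,entered_at
DEF-1,submitted,2024-01-01
DEF-1,assigned,2024-01-02
DEF-1,resolved,2024-01-05
"""

def load_transitions(text):
    """Group date-stamped status updates into per-defect transition lists."""
    defects = {}
    for row in csv.DictReader(StringIO(text)):
        entered = datetime.strptime(row["entered_at"], "%Y-%m-%d")
        defects.setdefault(row["defect_id"], []).append((row["state"], entered))
    return defects

print(load_transitions(RAW))
```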


Pitfalls, advice, and countermeasures

  • Defects may pass through some states more than once (for example, a reopened defect returns to the assigned state). Confirm the tool used accounts for this by summing time across repeated visits.
  • Priority of defects should be considered when setting target average Defect Aging for each state.