Iteration Velocity
This metric tracks the rate at which a team completes work across iterations, thereby helping the team improve project predictability.
Main Description

Purpose

The Iteration Velocity metric is used to measure the capability of the team. It helps identify the trend of how much work a team can complete in an iteration by tracking how much work it has successfully delivered in past iterations.

Velocity can be used to predict whether the available resources will be able to complete the planned work within a given iteration or release. Using a team's average velocity and available resources, you can plot a trend line to determine when the remaining work will be completed. Update this forecast whenever something is added to the scope in the middle of an iteration.
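As a rough sketch of that forecast, average velocity over past iterations can be divided into the remaining backlog. The function names and numbers below are illustrative assumptions, not part of any particular tool:

```python
import math

# Sketch of a velocity-based forecast. Inputs are illustrative; real values
# would come from the team's iteration history and release backlog.

def average_velocity(velocities):
    """Mean number of points delivered per iteration so far."""
    return sum(velocities) / len(velocities)

def iterations_remaining(remaining_points, velocities):
    """Estimate how many more iterations the remaining work will take."""
    avg = average_velocity(velocities)
    # Round up: a partially filled final iteration must still be scheduled.
    return math.ceil(remaining_points / avg)

past = [24, 28, 30, 30]    # points delivered in the last four iterations
backlog = 85               # points left in the release backlog
print(average_velocity(past))               # 28.0
print(iterations_remaining(backlog, past))  # 4 more iterations
```

Note that this simple average ignores scope added mid-iteration, which is why the forecast should be refreshed whenever scope changes.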

Definition

Velocity is typically measured in story points or ideal days per iteration. If a team delivered 30 story points in their last iteration, their velocity is 30.

Velocity = Number of units of work that the team has completed during a given iteration. Units can be in points*, ideal days, days, hours or any unit that the team uses for estimation.

* Points are units of relative size used in estimating tasks. These can be use case points, feature points, user story points, stakeholder request points, or Hi-Med-Low complexity. The more points a work item is assigned, the more functionality it represents.
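The definition above can be sketched in a few lines of code. Only items that were delivered and accepted earn credit; partially completed work carries forward. The WorkItem fields here are illustrative, not any specific tool's schema:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    points: int    # relative size estimate, e.g. story points
    closed: bool   # delivered and accepted during this iteration?

def iteration_velocity(items):
    """Sum the points of completed items; partial work earns no credit."""
    return sum(i.points for i in items if i.closed)

items = [WorkItem(8, True), WorkItem(5, True), WorkItem(13, False)]
print(iteration_velocity(items))  # 13 -- the 13-point item carries forward
```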

A good way to monitor Iteration Velocity over time is to use a line or bar chart. The number of points is plotted on the Y axis, and the iterations are on the X axis.

Analysis

Using Iteration Velocity to monitor project execution

Iteration Velocity is used as a project execution metric to help a team monitor and steer their project performance. As velocity is tracked, trends are identified to help the team predict when they can complete the remaining work. If velocity isn't high enough or stable enough to meet release commitments, the team can take appropriate corrective action.

Expected trend - Ideally, velocity should accelerate over time as the team begins to work better together, and then stabilize. Velocity can be affected by many factors, such as changes to the team, new tool sets, or unexpected difficulty in implementing a feature. Barring these types of obstacles, a stable team with the required resources will typically increase their velocity over time, then plateau after three to six iterations. If velocity rises or falls sharply, there could be a cause for concern worth investigating.

Downward slope - A trend line that goes down in later iterations is likely due to an impending transition, and might be no cause for concern. However, if the line drops very quickly, it could indicate a problem that is impacting the team's productivity. One possible cause could be quality issues. If the team is spending an increasing amount of time fixing defects, velocity will drop. The team needs to address the root causes of the poor quality. For example, the team might need to increase unit tests, bring in an experienced resource to help them with any new technologies, or introduce pair programming. This trend can be an indicator of a codebase that is not maintainable.

There are other factors that can contribute to a drop in velocity. Have people left the team? If so, the velocity of the team will drop and re-scoping might be needed. Has there been a change in communication channels or process that is impacting the team's ability to complete their work? In these cases, the team might just need a period of time to adjust to the changes. Otherwise, whatever has changed might need to be reassessed to avoid the need to rescope the project.

Flat - A flat trend line indicates that a team is not increasing their productivity as the project progresses. Ideally, teams introduce incremental improvements that result in an upward sloping velocity trend line. But in some cases, factors such as increasing unknowns in the project, or large refactoring efforts cancel out those incremental improvements. This results in velocity that does not change. Take action to address areas of uncertainty in the project that are blocking the team from better progress. Confirm that the team morale is high and that motivation is not an issue.

Up and down - A trend line that oscillates up and down, but with no major up or down trends could indicate that uncompleted work is often carried forward to the next iteration. Because points are not credited for an iteration unless they are successfully delivered and accepted, some iterations have work credited as completed that was begun in an earlier iteration. At times, this is due to stakeholder reviews taking too long to complete. Confirm the review process with stakeholders and emphasize the importance of timely reviews. Otherwise, the team might be underestimating work. Review estimates in iteration retrospective meetings to determine the cause of the variance. Confirm that the team is breaking work items down into chunks that are not too large and that they are not estimating too far out in the future.
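One way to make these trend judgments less subjective is to fit a least-squares line to the velocity series and inspect its slope. This is a minimal sketch: the one-point-per-iteration threshold is an arbitrary assumption to tune for your team's scale, and slope alone does not detect oscillation.

```python
# Sketch: classifying a velocity trend with a least-squares slope.
# The +/- 1 point-per-iteration threshold is an illustrative assumption.

def slope(velocities):
    """Least-squares slope of velocity against iteration index."""
    n = len(velocities)
    mean_x = (n - 1) / 2
    mean_y = sum(velocities) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(velocities))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def classify(velocities, threshold=1.0):
    """Label the overall trend; oscillation needs a separate check."""
    s = slope(velocities)
    if s > threshold:
        return "upward"
    if s < -threshold:
        return "downward"
    return "flat"

print(classify([20, 24, 29, 33]))      # upward
print(classify([28, 22, 30, 24, 30]))  # flat overall, though it oscillates
```

The second series shows why a flat label deserves a second look: its slope is near zero, but the iteration-to-iteration swings suggest work carried forward or estimation variance.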

The graph that follows shows a velocity trend in units of points. In initial iterations, the team's velocity is a little low, but it increases and stabilizes as the team proceeds toward the middle iterations.

[Figure: Iteration Velocity trend chart]

Using Iteration Velocity to monitor capability improvement

Iteration Velocity is also used as a capability improvement metric. It helps a team and middle management (project manager, product owner) monitor improvements made during the project lifecycle in adopting the Iterative Development and Release Planning practices. If teams are adopting these practices by timeboxing their iterations, delivering working software in each iteration, assessing each iteration, and making adjustments based on stakeholder feedback, it will be reflected in their velocity trends. Operational executives can also use this metric to monitor systematic improvement in reducing Time to Value across the organization by adopting these practices.

Expected trend - Teams adopting Iterative Development and Release Planning will typically accelerate in velocity over time as the team begins to work better together and understand how to create their solution in increments. After a few iterations a team's velocity will stabilize.

Velocity at or near zero in early iterations - If velocity is extremely low in early iterations, it could indicate that teams are not successfully adopting iterative development. This trend is seen when teams aren't doing any actual development (delivering working software) early in the lifecycle. Instead, they are performing in more of a waterfall fashion, focusing on requirements and design only. Provide a coach or mentor to help teams that are new to iterative development or that are struggling. Encourage retrospective reviews at the end of each iteration to help teams evaluate their adoption of iterative development and release planning best practices.

Up and Down - If teams routinely display an Iteration Velocity trend line that oscillates up and down, they may not be timeboxing their iterations consistently. Reinforce the structure and benefits of time-boxed iteration planning and execution. Alternatively, teams may not be receiving timely feedback from stakeholders that would allow them to mark work items as accepted (done) in the iteration in which the work was performed. Involve stakeholders early in the planning cycle, secure key stakeholder commitments during project schedule review, and communicate the benefits of early feedback.

Frequency and reporting

Team members post activity daily in an easily accessible location (a common tool, dashboard, team room, etc.). This includes work items added and removed, and status updates to existing work items. The resulting chart can be shown by a team lead or scrum master during each daily meeting to indicate progress within the iteration, and is reviewed at the end of each iteration to help identify trends.

Collection and reporting tools

Iteration Velocity is captured in IBM® Rational® Team Concert®. IBM® Rational® Insight® reports on this metric.

Assumptions and prerequisites

  • Iterations are scheduled, and each team member's role within them is established.
  • The team has a prioritized backlog of work items that is updated daily.
  • User stories or use cases are elaborated.
  • Iteration review meetings take place at the end of each iteration.
  • The team delivers working software every iteration.

Pitfalls, advice, and countermeasures

  • When calculating velocity, count only closed work items. Do not count work items that the team partially completed during the iteration.
  • Estimate and prioritize defect fixes, but do not count them toward the team's velocity.
  • Chart team velocity during the whole iteration and for every iteration.
  • Do not try to predict trends in velocity after only one or two iterations.
  • The number of actual hours spent completing a task or a story has no bearing on velocity.
  • Post big, visible charts in common areas where everyone can see them.
  • A cumulative chart is useful, because it shows the total number of work items completed through the end of each iteration.
  • Do not compare velocity of one team to another. Each team will estimate their work differently, and teams might inflate their estimates in order to increase their reported velocity if they know they are being compared to other teams.
  • Track velocity from the beginning of the project, although trends are not as useful in early iterations. If the project has a long duration, and the team, resources, and technology are all stable, the team may decide to suspend collecting velocity until circumstances change.
  • When a team's velocity trend looks very good, use a Backlog Complexity metric as a countermeasure to confirm that they are not waiting to address high risk items too late in the lifecycle. Backlog Complexity should be dropping, but will remain high if the team has a habit of addressing high risk items late in the lifecycle.
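The cumulative chart recommended above is just a running sum of per-iteration velocity. A minimal sketch with illustrative numbers:

```python
from itertools import accumulate

# Sketch: data for a cumulative completion chart. Each entry is the total
# number of points completed through the end of that iteration.
velocities = [24, 28, 30, 30, 27]   # illustrative per-iteration velocity
cumulative = list(accumulate(velocities))
print(cumulative)  # [24, 52, 82, 112, 139]
```

Plotting this series against the total release backlog makes it easy to see when the remaining work will be completed.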