This task adds the implementation details for each measure of a multi-tiered measurement system. |
|
Purpose
To identify the data elements, calculations, and display formats for each derived measure. |
Relationships
Roles
- Primary:
- Additional:
- Assisting:
Inputs
- Mandatory:
- Optional:
- External:
Outputs:
Main Description
To reason about a measure and understand its usage, specific details about the measure must be captured.
Detailing involves answering the following questions:
- Who is involved in the collection and usage?
- What is collected (for example, Defect Count, Actual Cost, Function Points, User Stories)?
- How is the collection to occur (manual or automatic)?
- When will it be collected (for example, daily, weekly, monthly, quarterly)?
- How are the results of the collection to be displayed (for example, trend chart, distribution chart, tabular)?
- When will the derived measure be displayed?
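As an illustration only (the field names below are hypothetical, not part of this method), the answers to these questions could be captured in a structured record, as in this Python sketch:

    from dataclasses import dataclass

    @dataclass
    class MeasurementSpecification:
        """Hypothetical record holding the detailing answers for one measure."""
        name: str                    # the derived measure being detailed
        collectors: list[str]        # who is involved in collection and usage
        base_measures: list[str]     # what is collected
        collection_method: str       # "manual" or "automatic"
        collection_frequency: str    # daily, weekly, monthly, quarterly
        display_format: str          # trend chart, distribution chart, tabular
        display_frequency: str       # when the derived measure is displayed

    spec = MeasurementSpecification(
        name="Cost Variance Percentage",
        collectors=["Project Manager"],
        base_measures=["Budget at Completion", "Actual Cost"],
        collection_method="manual",
        collection_frequency="monthly",
        display_format="trend chart",
        display_frequency="monthly",
    )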
Steps
Detail each Measure
Details about the data elements and associated manipulations are documented in the Measurement Specification. A
Measurement Specification needs to be created only when the details of a measure are not supported by automated
data collection. In other words, if the measure is collected manually, or is not currently detailed in an automated
collection tool, the actual attributes used in the base measure must be documented in the Measurement
Specification. See Measures, Metrics and Attributes for definitions and clarity of use.
As an example, consider the operational measure Cost Variance Percentage: the base measures that need to be
collected are Budget at Completion (BAC) and Actual Cost (AC). The Cost Variance percentage is calculated as:
Cost Variance = (BAC - AC) / BAC. Detailing the measure for Cost Variance Percentage in the Measurement
Specification artifact would mean documenting:
- The attributes used in the measure (dollars)
- The base measures used in the derived measure (Budget at Completion, Actual Cost)
- The derived measure calculation: CV = (BAC - AC) / BAC
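A minimal sketch of that calculation, assuming the base-measure values have already been collected (the result is expressed as a percentage):

    def cost_variance_percentage(bac: float, ac: float) -> float:
        """Cost Variance % = (BAC - AC) / BAC, expressed as a percentage."""
        if bac == 0:
            raise ValueError("Budget at Completion (BAC) must be nonzero")
        return (bac - ac) / bac * 100

    # Example: BAC = $100,000, AC = $90,000 -> 10.0% (under budget)
    print(cost_variance_percentage(100_000, 90_000))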
The following items should also be considered:
- The timing needed for each practice measure collection
- The relationship of each practice, and each practice measure, to the operational measures
- The relationship of each operational measure to the business measures
- How each metric in the multi-tiered system is evaluated
- The format to be used for each metric display
Define Data Interpretation
The data of a measure can be either subjective or objective. Either way, the use of the data must be documented in
order to reason properly about the information. Besides knowing what a collected value is, reasoning about it must also
include the context in which it was collected. Interpretation for a measure covers three aspects: timing, purpose, and
relationships.
The point in the project lifecycle at which a measure was recorded must be kept in mind; otherwise, erroneous decision
making can occur. For example, a trend chart of Iteration Velocity may indicate a much lower than desired rate of
delivered functionality early in a project's lifecycle. However, good engineering practice stresses that the most
difficult or highest-risk functionality be implemented first. During this portion of the lifecycle, progress may be
slower simply because of the complexity or difficulty of the effort, while later in the lifecycle, when less risky or
"easier" functionality is being implemented, the rate would naturally be higher. Therefore, the timing and usage of a
particular measure must be understood and documented for it to be interpreted properly.
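To make the timing context concrete, one hypothetical way to preserve it is to record the lifecycle phase alongside each velocity sample, so that trends are compared within a phase rather than across phases (all names and values below are illustrative):

    # Hypothetical velocity samples tagged with their lifecycle context.
    velocity_samples = [
        {"iteration": 1, "velocity": 12, "phase": "elaboration"},   # high-risk work first
        {"iteration": 2, "velocity": 13, "phase": "elaboration"},
        {"iteration": 6, "velocity": 27, "phase": "construction"},  # lower-risk work
        {"iteration": 7, "velocity": 29, "phase": "construction"},
    ]

    # Average velocity per phase: a low elaboration-phase average is expected,
    # not a sign of trouble, because risky functionality is implemented first.
    by_phase: dict[str, list[int]] = {}
    for sample in velocity_samples:
        by_phase.setdefault(sample["phase"], []).append(sample["velocity"])
    for phase, values in by_phase.items():
        print(phase, sum(values) / len(values))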
The purpose of the collection for a measure must also be documented so that interpretation of the measure is
appropriately constrained. For example, a trend chart of Iteration Velocity should not be used to conclude that
resources are appropriately applied; they may or may not be. Without a documented purpose, a trend chart of Iteration
Velocity used to evaluate resource allocation is subject to whatever interpretation the viewer wants to take away.
Relationships to other measures also need to be identified. It is through the relationships that exist between measures
that an organization can trace the impact of a change at one level of the organization to another. For example, an
operational objective for an IT organization might be to improve the delivery of what its customers are asking for.
Establishing a relationship between the more tactical measure of Iteration Velocity and the longer-term, annual
measure of improving Customer Satisfaction provides the ability to trace the impact of change. Improving Iteration
Velocity, along with other defined control metrics, would provide metric values giving confidence that Customer
Satisfaction will show improvement when that measure is collected.
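One hypothetical way to record such relationships so that impact can be traced from the tactical tier up to the business tier (the measure names and mapping are illustrative, not prescribed by this method):

    # Hypothetical relationship map: practice measure -> operational measure
    # -> business measure, one link per tier of the multi-tiered system.
    MEASURE_RELATIONSHIPS = {
        "Iteration Velocity": "Delivery of Requested Functionality",
        "Delivery of Requested Functionality": "Customer Satisfaction",
    }

    def trace_to_business_tier(measure: str) -> list[str]:
        """Follow the relationship chain from a measure upward through the tiers."""
        chain = [measure]
        while chain[-1] in MEASURE_RELATIONSHIPS:
            chain.append(MEASURE_RELATIONSHIPS[chain[-1]])
        return chain

    print(" -> ".join(trace_to_business_tier("Iteration Velocity")))
    # Iteration Velocity -> Delivery of Requested Functionality -> Customer Satisfaction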
Detail the Measurement Infrastructure
After the measurement infrastructure has been defined, the definitions need to be implemented in a tool set. The tool
set consists of any software or hardware used in the collection, manipulation, storage, and reporting of the measures.
For non-automated collection, this would include creating the calculations in a spreadsheet to be used with the
entered data values.
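Where a spreadsheet is not used, the same non-automated calculation could be scripted; a minimal sketch, assuming a hypothetical measures.csv file holding the manually entered base-measure values:

    import csv

    # Hypothetical input: measures.csv with columns
    # period, budget_at_completion, actual_cost
    with open("measures.csv", newline="") as f:
        for row in csv.DictReader(f):
            bac = float(row["budget_at_completion"])
            ac = float(row["actual_cost"])
            cv_pct = (bac - ac) / bac * 100   # CV% = (BAC - AC) / BAC
            print(f"{row['period']}: CV% = {cv_pct:.1f}")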
Detail the Measurement Procedures
Detailing the measurement procedures involves specifying:
- Organizational roles:
  - Individuals to fill the roles responsible for configuring the measurement system
  - Individuals to fill the roles responsible for collecting and reporting the implemented metrics
  - Individuals to maintain the system
- Detailed work items supplied to the deployment project manager for inclusion in the deployment project plan
- The change process (enhancements and defects) for the performance measurement system: the details for submitting
  a request, evaluating a request, implementing an approved request, or rejecting a request
- The procedures of the Performance Measurement System, including:
  - Getting-started procedures
  - Project procedures
  - Program / Portfolio procedures
  - Enterprise procedures
  - Maintenance procedures
- The actual timing of the collection for each measure, for example, December 21, 2012 at 11 p.m. (see the
  scheduling sketch below)
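As an illustrative sketch only, a standard-library way to compute the next scheduled collection time from a weekday-and-hour schedule (the schedule entries are hypothetical):

    from datetime import datetime, timedelta

    # Hypothetical schedule: measure -> (weekday, hour); Monday=0 ... Sunday=6.
    SCHEDULE = {
        "Actual Cost": (2, 23),    # Wednesdays at 23:00
        "Defect Count": (4, 17),   # Fridays at 17:00
    }

    def next_collection(measure: str, now: datetime) -> datetime:
        """Return the next scheduled collection time at or after `now`."""
        weekday, hour = SCHEDULE[measure]
        candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
        candidate += timedelta(days=(weekday - now.weekday()) % 7)
        if candidate <= now:
            candidate += timedelta(days=7)
        return candidate

    print(next_collection("Actual Cost", datetime(2012, 12, 17, 9, 0)))
    # -> 2012-12-19 23:00:00 (the Wednesday of that week)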
Properties
Predecessor:
Multiple Occurrences:
Event Driven:
Ongoing:
Optional:
Planned:
Repeatable:
More Information
© Copyright IBM Corp. 1987, 2009. All Rights Reserved.