Work Product Descriptor (Artifact): Test Log
This artifact collects the raw output captured during a single execution of one or more tests within a test cycle.
Purpose
  • To provide verification that a set of tests was run
  • To provide information that relates to the success of those tests
Relationships
Input To
Mandatory:
  • None
Optional:
  • None
External:
  • None
Description
Main Description

This artifact provides a detailed, typically time-based record that both verifies that a set of tests was run and provides information relating to the success of those tests. The focus is typically on providing an accurate audit trail, which enables post-run diagnosis of failures. This raw data is subsequently analyzed to determine the results of an aspect of the test effort.
Brief Outline

Each Test Log should be made up of a series of entries that present an audit trail for various aspects of the test execution, including, but not limited to, the following (a structural sketch follows the list):

  • The date and time stamp of when the event occurred
  • A description (usually brief) of the event logged
  • Some indication of the observed status
  • Additional contextual information where relevant
  • Additional details relating to any anomalous or erroneous condition detected
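
As an illustration only, the sketch below models one such entry as a Python data structure; the TestLogEntry name and its fields are hypothetical, chosen to mirror the items above rather than prescribed by this artifact.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TestLogEntry:
        """One audit-trail entry in a Test Log (illustrative structure only)."""
        timestamp: datetime            # date and time stamp of when the event occurred
        description: str               # brief description of the event logged
        status: str                    # observed status, e.g. "pass", "fail", "warning"
        context: Optional[str] = None  # additional contextual information, where relevant
        anomaly_details: Optional[str] = None  # details of any anomalous or erroneous condition

    # Example entry recording a failed verification step:
    entry = TestLogEntry(
        timestamp=datetime.now(),
        description="Verify login rejects an empty password",
        status="fail",
        anomaly_details="Expected an error dialog; the application accepted the input",
    )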
Properties
  • Optional
  • Planned
Tailoring
Impact of not having

Without this or similar documentation, there is no record of which tests were run, what variances were discovered, and what action was taken. If this information is not available:

  • There is no way to know which tests passed and which failed.
  • There is no way to assess the status of testing and the quality of the product at that level of testing.
  • It is difficult to know how many tests remain outstanding.
  • There can be contractual and legal issues where a verifiable record of testing is required.
Reasons for not needing

When you execute automated tests, test logs are produced automatically. Typically, the issue is not whether to produce a test log, but whether to keep a record and where to keep it. For manual testing, the issue is whether to keep a separate test log or to summarize the test results in another form.

Representation Options

Because this is a collection of raw data for subsequent analysis, it can be represented in a number of ways:

  • For manual tests, log the actual results on a copy of the manual Test Script
  • For automated tests, direct the output to log files that you can trace back to the automated Test Script (see the sketch after this list)
  • Track raw results data in a test management tool
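
For the automated case, a minimal sketch of that second option follows, assuming a simple CSV line format and a file-naming convention that ties each log file to its Test Script; both are illustrative choices, not requirements of this artifact.

    import csv
    from datetime import datetime
    from pathlib import Path

    def log_event(script_name: str, description: str, status: str, context: str = "") -> None:
        """Append one timestamped entry to a log file named after the Test Script,
        so each raw result can be traced back to the script that produced it."""
        log_path = Path("logs") / f"{script_name}.log.csv"
        log_path.parent.mkdir(exist_ok=True)
        with log_path.open("a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), description, status, context])

    # Example: an automated script records each step as it executes.
    log_event("checkout_smoke_test", "Add item to cart", "pass")
    log_event("checkout_smoke_test", "Apply expired coupon", "fail", "HTTP 500 from /coupon")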

Automation tools often provide their own Test Log facilities, which can be extended or supplemented with additional logging, both through custom user routines and through the use of additional tools.
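
As one concrete example of such a custom user routine, the sketch below hooks into Python's standard unittest framework by subclassing TextTestResult; the supplemental log file name is an assumption for illustration.

    import unittest
    from datetime import datetime

    class LoggingTestResult(unittest.TextTestResult):
        """Supplement the framework's default reporting with an extra Test Log file."""

        def addSuccess(self, test):
            super().addSuccess(test)
            self._log(test, "pass")

        def addFailure(self, test, err):
            super().addFailure(test, err)
            self._log(test, "fail")

        def addError(self, test, err):
            super().addError(test, err)
            self._log(test, "error")

        def _log(self, test, status):
            # Assumed destination file; any traceable location would do.
            with open("supplemental_test.log", "a") as f:
                f.write(f"{datetime.now().isoformat()}\t{test.id()}\t{status}\n")

    # Wire the custom result class into the standard runner:
    # unittest.main(testRunner=unittest.TextTestRunner(resultclass=LoggingTestResult))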

The output may take a single form or many different forms. Typically, Test Logs have a tabular or spreadsheet-like appearance, with each entry comprising some form of date and time stamp, a description of the event logged, some indication of the observed status, and possibly some additional contextual information.

If you are using automated test tools, such as those found in the IBM® Rational Suite® family of products, much of the above functionality is provided by default with the tool. These Test Log facilities typically support capturing, filtering, sorting, and analyzing the information contained in the log. This allows the Test Log to be expanded in detail or collapsed to a summary view, as required. The tools also offer the ability to customize and retain views of the Test Log for reporting purposes.
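
Outside such tools, the same filtering and collapse-to-summary operations can be sketched over a plain log file; this assumes the CSV line format used in the earlier sketch, with the status in the third column.

    import csv
    from collections import Counter

    def summarize_log(path: str) -> Counter:
        """Collapse a raw Test Log to a summary view: a count of entries per status."""
        with open(path, newline="") as f:
            return Counter(row[2] for row in csv.reader(f))

    def failures_only(path: str) -> list:
        """Filter the log down to the entries that need post-run diagnosis."""
        with open(path, newline="") as f:
            return [row for row in csv.reader(f) if row[2] == "fail"]

    # Example: print a summary such as Counter({'pass': 41, 'fail': 2}).
    print(summarize_log("logs/checkout_smoke_test.log.csv"))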

Where the logic that produces an automated Test Log simply appends new information to an existing log file, it will be necessary to provide sufficient storage to retain the Test Log file. An alternative to this approach is to use a ring buffer. A good explanation of using ring-buffer logging to help find bugs is presented in a pattern catalog by Brian Marick, which also provides an overview of other classic problems with using automated Test Logs.
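
A minimal sketch of the ring-buffer idea, using Python's collections.deque with a fixed maximum length so that only the most recent events are retained:

    from collections import deque
    from datetime import datetime

    # A ring buffer keeps only the last N entries: the oldest entries are
    # discarded automatically, so storage stays bounded however long the run is.
    ring_log = deque(maxlen=1000)

    def log_event(description: str, status: str) -> None:
        ring_log.append((datetime.now().isoformat(), description, status))

    def dump_on_failure(path: str = "failure_context.log") -> None:
        """On failure, write out the buffered history needed for diagnosis,
        without having retained the entire append-only log file."""
        with open(path, "w") as f:
            for timestamp, description, status in ring_log:
                f.write(f"{timestamp}\t{description}\t{status}\n")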