Roadmap: How to Adopt the Independent Testing Practice
This roadmap describes how to adopt the Independent Testing practice.
Main Description

Testing focuses primarily on evaluating or assessing product quality. These are the main activities:

  • Find and document defects in software quality.
  • Advise on the perceived software quality.
  • Validate and prove the assumptions made in design and requirement specifications through demonstration.
  • Validate that the software product works as designed.
  • Validate that the requirements are implemented appropriately.

The purpose of testing is to find and expose weaknesses in the software product. To get the biggest benefit, you need a different philosophy from the one used in the rest of the development cycle. A somewhat subtle difference is that during most of development the focus is on completeness, whereas testing focuses on incompleteness. A good test effort is driven by questions such as "How could this software 'break,' and in what possible situations could this software fail to work predictably?"

Testing challenges the assumptions, risks, and uncertainty inherent in the work of developing a product and addresses those concerns by using concrete demonstrations and impartial evaluations. You need to avoid two potential extremes:

  • An approach that does not suitably or effectively challenge the software and expose its inherent problems or weaknesses
  • An approach that is inappropriately negative (you might find it impossible to ever consider the software product to be of acceptable quality, and you could alienate the rest of the development team from the test effort)

Information presented in various surveys and essays states that software testing accounts for 30 to 50 percent of total software development costs. Therefore, it is somewhat surprising that most people believe computer software is not well-tested before it is delivered. This contradiction is rooted in a few key issues:

  • Testing software is very difficult. How do you quantify the different ways in which a given program can behave?
  • Typically, testing is done without a clear methodology, thus creating results that vary from project to project and from organization to organization. Success is primarily a factor of the quality and skills of the individuals.
  • Productivity tools are used insufficiently, which makes the laborious aspects of testing unmanageable. In addition to the lack of automated test execution, many test efforts are conducted without tools that let you effectively manage extensive test data and test results.

Flexibility of use and complexity of software make complete testing an impossible goal. Using a well-conceived methodology and state-of-the-art tools can improve both the productivity and effectiveness of software testing.

How to adopt this practice

Here's one possible scenario for adopting this practice. You may want to add, change, or remove steps to design an adoption roadmap more suitable to your environment. Hiring a consultant who is experienced in this area will also speed your adoption of the practice and help avoid common pitfalls.

  1. Educate the team about the Independent Testing practice. Courses and presentations are available (see the Additional Information section in Independent Testing).
  2. Have the team review the material in this practice.
  3. Perform a gap analysis between your current practices and the proposed one. Focus on problem areas. Try to distinguish between real differences and just terminology mismatches.
  4. Identify extension points and extend this practice to reflect any important requirements and constraints in your organization.
  5. Reuse the current elements that reflect your specific environment, such as templates and examples, by attaching them to the proposed practice.
  6. Identify and prepare to collect the information or metrics that will tell you how well you're adopting this practice. Make sure that the data and metrics are easy to collect: highly accurate metrics that are difficult to collect are often abandoned and thus provide no value, whereas coarser measurements that are easy to collect usually provide sufficient information and are more likely to keep being collected. (A minimal sketch of one such lightweight measurement follows this list.)
  7. Develop an adoption plan with specific goals for each step. An iterative, incremental approach works best. Try to tackle the problem points identified earlier.
  8. Select a project where you will start applying the new practice. This pilot project should be sufficiently visible and risky for the adoption of the practice to be a meaningful test.
  9. Evaluate your adoption based on the objectives and metrics that you defined.
  10. Make adjustments based on your evaluation. Eliminate tools or tool features that don't prove effective, and increase practices that are efficient and improve quality.
  11. Determine the next step in adoption.
  12. Continue to extend or modify this practice to reflect how your team and organization are performing this new process and what the next increment of adoption should be for your team.
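
As an illustration of step 6, here is a minimal sketch of an easy-to-collect measurement: tallying pass, failure, and error counts from automated test results. It assumes, purely for illustration, that your test tools export results in the widely used JUnit-style XML report format; the reports/*.xml location and the attribute names are placeholders to adapt to your own environment.

```python
# Minimal sketch: tally test outcomes from JUnit-style XML report files.
# Assumes the common <testsuite tests=".." failures=".." errors=".."> layout;
# adjust the glob pattern and attribute names to match your own tools.
import glob
import xml.etree.ElementTree as ET

def summarize(report_glob="reports/*.xml"):
    totals = {"tests": 0, "failures": 0, "errors": 0}
    for path in glob.glob(report_glob):
        root = ET.parse(path).getroot()
        # A report may use <testsuite> as its root or wrap several in <testsuites>.
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    return totals

if __name__ == "__main__":
    print(summarize())  # e.g. {'tests': 120, 'failures': 3, 'errors': 0}
```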

How to tailor this practice 

This section presents the most important aspects that you need to consider when you tailor this practice to meet your specific environment.

Decide how to use work products

Decide which work products will be used and how they will be used. It is also important to tailor each work product to fit the needs of the project.

The list that follows specifies which testing work products are recommended and which are considered optional (can be used in certain cases). Each entry gives the work product, its purpose, and its tailoring guidance. For additional tailoring considerations, see the Tailoring section of the work product description page.

Test Evaluation Summary
Purpose: Summarizes the test results for use primarily by the management team and other stakeholders external to the test team.
Tailoring: Recommended for most projects. Where the project culture is relatively informal, it may be appropriate simply to record test results and not create formal evaluation summaries. In other cases, Test Evaluation Summaries can be included as a section within other assessment work products, such as the Iteration Assessment or Review Record.

Test Findings
Purpose: The analyzed result determined from the raw data in one or more test logs.
Tailoring: Recommended. Most test teams retain some form of reasonably detailed record of the results of testing. Manual testing results are usually recorded directly here and combined with the distilled test logs from automated tests. In some cases, test teams go directly from the test logs to producing the Test Evaluation Summary.

Test Log
Purpose: The raw data output during test execution, typically produced by automated tests.
Tailoring: Optional. Many projects that perform automated testing have some form of test log. Where projects differ is whether the test logs are retained or discarded after the test results have been determined. You might retain test logs if you need to satisfy certain audit requirements, if you want to analyze how the raw test output data changes over time, or if you are uncertain at the outset of all the analysis that you may be required to provide.

Test Suite
Purpose: Used to group individual related tests (test scripts) together in meaningful subsets.
Tailoring: Recommended for most projects. Also required to define any test script execution sequences that are needed for tests to work correctly.
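
The following minimal sketch shows one way a test suite can group related test scripts and fix their execution sequence, using Python's standard unittest module purely as an illustration. The test classes and the ordering constraint (login before checkout) are hypothetical placeholders.

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        self.assertTrue(True)  # hypothetical placeholder check

class CheckoutTests(unittest.TestCase):
    def test_empty_cart_rejected(self):
        self.assertTrue(True)  # hypothetical placeholder check

def build_smoke_suite():
    """Group related test scripts and enforce an execution sequence:
    login tests run before checkout tests."""
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_smoke_suite())
```
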
Test Ideas List
Purpose: An enumerated list of tests to consider conducting (often only partially formed).
Tailoring: Recommended for most projects. In some cases these lists are informally defined and discarded after the test scripts or test cases have been defined from them.

Test Strategy
Purpose: Defines the strategic plan for how the test effort will be conducted against one or more aspects of the target system.
Tailoring: Recommended for most projects. A single test strategy per project, or per phase within a project, is recommended in most cases. Optionally, you might reuse existing strategies where appropriate, or you might further subdivide the test strategies based on the type of testing that you are conducting.

Iteration Test Plan
Purpose: Defines the finer-grained testing goals, objectives, motivations, approach, resources, schedule, and work products that govern an iteration.
Tailoring: Recommended for most projects. A separate test plan for each iteration is the best way to define the specific, fine-grained test strategy. Optionally, you can include the test plan as a section within the iteration plan.

Master Test Plan
Purpose: Defines the high-level testing goals, objectives, approaches, resources, schedule, and work products that govern a phase or the entire lifecycle.
Tailoring: Optional; useful for most projects. A Master Test Plan defines the high-level strategy for the test effort over large parts of the software development lifecycle. Optionally, you can include it as a section within the Software Development Plan. Consider whether to maintain a Master Test Plan in addition to the Iteration Test Plans: the Master Test Plan covers mainly logistical and process information that typically relates to the entire project lifecycle, so it is unlikely to change between iterations.

Test Script, Test Data
Purpose: The realization or implementation of the test, where the test script embodies the procedural aspects and the test data embodies the defining characteristics.
Tailoring: Recommended for most projects. Where projects differ is how formally these work products are treated. In some cases, they are informal and transitory, and the test team is judged on other criteria. In other cases, especially with automated tests, the test scripts and associated test data (or some subset thereof) are regarded as major deliverables of the test effort.

Test Case
Purpose: Defines a specific set of test inputs, execution conditions, and expected results. Documenting test cases allows them to be reviewed for completeness and correctness, and considered before implementation effort is planned and expended.
Tailoring: Most useful where the inputs, execution conditions, and expected results are particularly complex. On most projects, where the conditions required to conduct a specific test are complex or extensive, it is advisable to define test cases. You will also need to document test cases where they are a contractually required deliverable. In most other cases, it is more useful to maintain the Test Ideas List and the Implemented Test Scripts List rather than detailed descriptions of test cases. Some projects simply outline test cases at a high level and defer details to the test scripts. Another common style is to document the test case information as comments within the test scripts.
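
One lightweight way to keep test case information with the test scripts is sketched below: each test case (inputs and expected result) is recorded as data, and a single data-driven script executes them all. The discount_price function and its pricing rule are hypothetical, included only so the example is self-contained.

```python
import unittest

def discount_price(price, quantity):
    """Hypothetical function under test: 10% discount for 10 or more items."""
    if price < 0 or quantity < 1:
        raise ValueError("invalid input")
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total

# Test cases documented as data: id, inputs, and expected result.
TEST_CASES = [
    {"id": "TC-01", "price": 5.0, "quantity": 1,  "expected": 5.0},
    {"id": "TC-02", "price": 5.0, "quantity": 10, "expected": 45.0},
]

class DiscountPriceTests(unittest.TestCase):
    def test_documented_cases(self):
        for case in TEST_CASES:
            with self.subTest(case["id"]):
                actual = discount_price(case["price"], case["quantity"])
                self.assertAlmostEqual(case["expected"], actual)

if __name__ == "__main__":
    unittest.main()
```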

Workload Specification
Purpose: A specialized type of test case. Used to define a representative workload to allow quality risks associated with the system operating under load to be assessed.
Tailoring: Recommended for most systems, especially those where system performance under load must be evaluated or where there are other significant quality risks associated with system operation under load. Not usually required for systems that will be deployed on a standalone target system.
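
When a workload specification is realized as an automated test, it can be as simple as the following hedged sketch: a fixed number of concurrent virtual users repeatedly invoke an operation while response times are recorded. The place_order stand-in, the user count, and the request count are hypothetical placeholders for whatever workload your specification defines.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def place_order():
    """Hypothetical operation under load; replace with a call to the real system."""
    time.sleep(0.01)

def run_workload(virtual_users=10, requests_per_user=50):
    """Drive a representative workload and report simple latency statistics."""
    def one_user():
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            place_order()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        per_user = list(pool.map(lambda _: one_user(), range(virtual_users)))

    latencies = sorted(t for user in per_user for t in user)
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        # Crude 95th percentile, adequate for a first look at the numbers.
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    print(run_workload())
```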

Testability classes in the Design model; Testability elements in the Implementation model
Purpose: If the project has to develop significant additional specialized behavior to accommodate and support testing, these concerns are represented by the inclusion of testability classes in the Design model and testability elements in the Implementation model.
Tailoring: Where required. Stubs are a common category of test classes and test components.
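
For instance, a stub is a test-only implementation of a collaborating component that returns canned responses, so that the unit under test can be exercised in isolation. In the sketch below, the PaymentGateway interface, its stub, and the OrderService are hypothetical names used only for illustration.

```python
import unittest

class PaymentGateway:
    """Hypothetical production interface; the real class calls an external service."""
    def charge(self, amount):
        raise NotImplementedError

class StubPaymentGateway(PaymentGateway):
    """Testability element: canned responses plus a record of calls for verification."""
    def __init__(self, approve=True):
        self.approve = approve
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return self.approve

class OrderService:
    """Hypothetical unit under test; depends on the gateway only through its interface."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "declined"

class OrderServiceTests(unittest.TestCase):
    def test_declined_payment_is_reported(self):
        service = OrderService(StubPaymentGateway(approve=False))
        self.assertEqual("declined", service.checkout(25.0))

if __name__ == "__main__":
    unittest.main()
```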

Test Architecture
Purpose: Provides an architectural overview of the test automation system by using several different architectural views to depict different aspects of the system.
Tailoring: Optional. Recommended on projects where the test architecture is relatively complex, where a large number of staff members will be collaborating on building automated tests, or where the test automation system is expected to be maintained over a long period of time. In some cases, this might simply be a whiteboard diagram that is recorded centrally for interested parties to consult.

Test Interface Specification
Purpose: Defines a set of behaviors required of a classifier (specifically, a class, subsystem, or component) for the purposes of testing (testability). Common types include test access, stubbed behavior, diagnostic logging, and test oracles.
Tailoring: Optional. On many projects, the visible operations on classes, user interfaces, and so on already provide sufficient accessibility for testing. Common reasons to create test interface specifications include UI extensions that allow GUI test tools to interact with the application, and diagnostic message logging routines, especially for batch processes.
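
As an illustration of the diagnostic-logging case, the sketch below shows a batch process that emits diagnostic messages on a dedicated logger, so a test can verify what the process did without needing special access to its internals. The logger name, record layout, and run_batch routine are hypothetical.

```python
import logging
import unittest

logger = logging.getLogger("batch.diagnostics")

def run_batch(records):
    """Hypothetical batch process with diagnostic logging added for testability."""
    processed = 0
    for record in records:
        if record.get("valid", True):
            processed += 1
            logger.info("processed record %s", record.get("id"))
        else:
            logger.warning("skipped invalid record %s", record.get("id"))
    logger.info("batch complete: %d of %d records processed", processed, len(records))
    return processed

class BatchDiagnosticsTests(unittest.TestCase):
    def test_invalid_records_are_logged(self):
        # The diagnostic log acts as the test interface for this batch process.
        with self.assertLogs("batch.diagnostics", level="WARNING") as captured:
            run_batch([{"id": 2, "valid": False}])
        self.assertIn("skipped invalid record 2", captured.output[0])

if __name__ == "__main__":
    unittest.main()
```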


Decide how to review work products

  • Defects: The treatment of Defect reviews is very much dependent on context. However, they are generally treated as Informal, Formal-Internal, or Formal-External. This review process is often enforced or at least assisted by workflow management in a defect-tracking system. In general, the level of review formality often relates to the perceived severity or impact of the defect, but factors such as project culture and level of ceremony often have an effect on the choice of review handling methods.

    In some cases, you may need to consider separating the handling of defects (also known as symptoms or failures) from faults: the actual source of the error. For small projects, you can typically manage by tracking only the defects and handling the faults implicitly. However, as the system grows in complexity, it may be beneficial to separate the management of defects from faults. For example, several defects may be caused by the same fault. Therefore, if a fault is fixed, it's necessary to find the reported defects and inform the users who submitted them, which is only possible if defects and faults can be identified separately. (A minimal sketch of this bookkeeping follows this list.)

  • Test plan and test strategy: In any project where the testing is nontrivial, you will need some form of test plan or strategy. Generally you'll need a test plan for each iteration and some form of governing test strategy. Optionally, you might create and maintain a Master Test Plan. In many cases, these work products are reviewed as Informal; that is, they are reviewed but not formally approved. Where testing visibility is important to stakeholders external to the test team, it should be treated as Formal-Internal or even Formal-External.
  • Test scripts: Test scripts are usually treated as Informal; that is, they are approved by someone within the test team. If the test scripts are to be used by many testers and shared or reused for many different tests, they should be treated as Formal-Internal.
  • Test cases: Test cases are created by the test team and, depending on context, are typically reviewed using either an Informal process or simply not reviewed at all. Where appropriate, test cases might be approved by other team members, in which case they can be treated as Formal-Internal, or they can be reviewed by external stakeholders, in which case they would be Formal-External.

    As a general heuristic, we recommend that you plan to formally review only those test cases that genuinely need it, which will generally be limited to a small subset representing the most significant test cases. For example, where a customer wants to validate a product before it is released, a subset of the test cases could be selected as the basis for that validation. These test cases should be treated as Formal-External.

  • Test work products in design and implementation: Testability classes are found in the Design model, and testability elements are in the Implementation model. There are also two other related work products (although not specific to tests): packages in the Design model and subsystems in the Implementation model.

    These work products are design and implementation work products. However, they're created for the purpose of supporting testing functionality in the software. The natural place to keep them is with the design and implementation work products. Remember to name or otherwise label them in such a way that they are clearly separated from the design and implementation of the core system. Review these work products by following the review procedures for design and implementation work products.
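
To make the separation of defects from faults concrete, here is a minimal sketch of the bookkeeping involved: several reported defects can point at one underlying fault, so resolving the fault makes it possible to find every affected defect and notify its submitter. The record layout is a hypothetical simplification of what a defect-tracking system would store.

```python
# Defects (reported symptoms) reference the fault believed to cause them.
defects = {
    "DEF-101": {"submitter": "alice", "fault_id": "FLT-7", "status": "open"},
    "DEF-102": {"submitter": "bob",   "fault_id": "FLT-7", "status": "open"},
    "DEF-103": {"submitter": "carol", "fault_id": "FLT-9", "status": "open"},
}

def resolve_fault(fault_id):
    """Mark every defect caused by this fault and list the submitters to notify."""
    to_notify = []
    for defect_id, defect in defects.items():
        if defect["fault_id"] == fault_id:
            defect["status"] = "fix available"
            to_notify.append((defect_id, defect["submitter"]))
    return to_notify

if __name__ == "__main__":
    print(resolve_fault("FLT-7"))  # [('DEF-101', 'alice'), ('DEF-102', 'bob')]
```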

Decide on approval criteria

As you enter each iteration, strive to define clearly, up front, how the test effort will be judged sufficient and on what basis that judgment will be made. Do this in discussion with the individual or group responsible for making the approval decision.

These are examples of ways to handle iteration approval:

  • The project management team approves the iteration and assesses the testing effort by reviewing the test evaluation summaries.
  • The customer approves the iteration by reviewing the test evaluation summaries.
  • The customer approves the iteration based on the results of a demonstration that exercises a certain subset of the total tests. This subset of tests should be defined and agreed beforehand, preferably early in the iteration. These tests are treated as Formal-External and are often referred to as acceptance tests.
  • Alternatively, the customer approves system quality by conducting their own independent tests. Again, the nature of these tests should be clearly defined and agreed beforehand, preferably early in the iteration. These tests are also treated as Formal-External and are often referred to as acceptance tests.

This is an important decision. You cannot reach a goal if you don't know what it is.