Examine the Test Approach, Target Test Items and Assessment Needs
Start by reviewing the Test Plan to determine the assessment needs, then consider how the extent of testing and the
quality of the software can be assessed using the stated Test Approach. Also consider any special needs related to
specific Target Test Items that must be addressed.
|
Examine the Testability Mechanisms and Supporting Elements
Review the mechanisms that are useful to enable testing in this environment, and identify the specific testability
elements that implement these mechanisms. This includes reviewing resources (such as any function libraries that have
been developed by the test team, and stubs or harnesses implemented by the development team).
Testability is achieved through a combination of developing software that is testable and defining a test approach that
appropriately supports testing. As such, testability is an important aspect of the test team's asset development, just
as it is an important part of the software development effort. Achieving Testability (the ability to effectively test
the software product) will typically involve a combination of the following:
- Testability enablers provided by test automation tools
- Specific techniques to create the component Test Scripts
- Function libraries that separate and encapsulate complexity from the basic test procedural definition in the Test
  Script, providing a central point of control and modification (see the sketch after this list)
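To illustrate the function library idea, here is a minimal, self-contained sketch in Python. The OrderSystem class is a
fake stand-in for the system under test, and enter_standard_order is a hypothetical library call, not part of any real
tool; the point is that Test Scripts call the library rather than driving the application directly.

```python
# A minimal sketch of a function library (illustrative only). OrderSystem is a
# stand-in for the application under test; enter_standard_order is the
# hypothetical library call used by Test Scripts instead of raw application calls.

class OrderSystem:
    """Stand-in for the application under test."""
    def __init__(self):
        self._orders = {}
        self._next_id = 1000

    def create_order(self, customer, sku, qty):
        order_id = f"ORD-{self._next_id}"
        self._next_id += 1
        self._orders[order_id] = {"customer": customer, "sku": sku, "qty": qty}
        return order_id


def enter_standard_order(system, customer="ACME", sku="WIDGET-1", qty=1):
    """Encapsulates order-entry complexity behind one call.

    If the order-entry sequence changes, only this function changes; the Test
    Scripts that call it stay untouched (a central point of control).
    """
    return system.create_order(customer, sku, qty)


if __name__ == "__main__":
    sut = OrderSystem()
    print(enter_standard_order(sut))  # e.g. ORD-1000
```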
Does the current Test Suite need to be distributed? If so, make use of the testability elements that support
distribution. These elements will typically be features of specific automation support tools that distribute the Test
Suite, execute it remotely, and return the Test Log and other outputs for centralized results determination.
Does the current Test Suite need to run concurrently with other Test Suites? If so, make use of the testability
elements that support concurrency. These elements will typically be a combination of specific supporting
tools and utility functions to enable multiple Test Suites to execute concurrently on different physical machines.
Concurrency requires careful Test Data design and management to ensure that no unexpected or unplanned side effects
occur (such as two concurrent tests updating the same data record).
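One way to avoid such side effects is to partition the Test Data so that concurrently executing Test Suites never touch
the same records. Below is a minimal sketch assuming each suite derives a unique key prefix from its own name and the
machine it runs on; the naming scheme is an assumption for illustration, not a prescribed convention.

```python
# A minimal sketch (assumed convention, not from any tool) of partitioning Test
# Data keys so that Test Suites running concurrently on different machines
# never update the same data record.

import socket


def data_key(suite_name: str, record_index: int) -> str:
    """Build a record key that is unique to this suite on this machine."""
    host = socket.gethostname()
    return f"{suite_name}-{host}-{record_index:04d}"


if __name__ == "__main__":
    # Two suites executing concurrently work on disjoint records.
    print(data_key("order_entry_suite", 1))     # e.g. order_entry_suite-hostA-0001
    print(data_key("order_dispatch_suite", 1))  # e.g. order_dispatch_suite-hostB-0001
```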
|
Create the Initial Test Suite Structure
Enumerate one or more Test Suites that (when executed) will provide a complete and meaningful result of value to the
test team, enabling subsequent reporting to stakeholders. Strike a balance: provide enough detail to give the project
team specific information, but not so much that the structure becomes overwhelming and unmanageable.
Where Test Scripts already exist, you can probably assemble the Test Suite and its constituent parts yourself, then
pass the Test Suite stabilization work on to a Test Suite implementer to complete.
For Test Suites that require new Test Scripts to be created, you should also give some indication of the Test Scripts
(or other Test Suites) that you believe will be referenced by this Test Suite. If it is easy to enumerate them, do
that. If not, you might simply provide a brief description that outlines the expected content coverage of the main Test
Suite and leave it to the Test Suite implementer to make tactical decisions about exactly what Test Scripts are
included.
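For illustration, here is a minimal sketch of a Test Suite expressed with Python's unittest package. The two Test
Scripts and their bodies are placeholders; the point is that the suite enumerates its constituent scripts in one place
and, when executed, yields a single reportable result.

```python
# A minimal sketch, using Python's unittest, of a Test Suite that enumerates
# its constituent Test Scripts. The script names and bodies are placeholders.

import unittest


class EnterOrder(unittest.TestCase):
    def test_enter_order(self):
        self.assertTrue(True)  # placeholder for the real Test Script body


class DispatchOrder(unittest.TestCase):
    def test_dispatch_order(self):
        self.assertTrue(True)  # placeholder for the real Test Script body


def order_processing_suite() -> unittest.TestSuite:
    """One Test Suite that, when executed, produces a complete, reportable result."""
    suite = unittest.TestSuite()
    suite.addTest(EnterOrder("test_enter_order"))
    suite.addTest(DispatchOrder("test_dispatch_order"))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(order_processing_suite())
```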
|
Adapt the Test Suite Structure to Reflect Team Organization and Tool Constraints
It may be necessary to further subdivide or restructure the Test Suites that you have identified to accommodate the
Work Breakdown Structure (WBS) the team is working to. This will help to reduce the risk that access conflicts might
arise during Test Suite development. Test automation tools may also place constraints on how individuals can work with
automation assets; restructure the Test Suites as necessary to accommodate this.
|
Identify inter-Test Script Communication Mechanisms
In most cases, Test Suites can simply call Test Scripts in a specific order; this is often sufficient to ensure that
the correct system state is passed from one Test Script to the next.
However, in certain classes of system, dynamic runtime data is generated by the system or derived as a result of the
transactions that take place within it. For example, in an order entry and dispatch system, the system generates a
unique order number each time an order is entered. To enable an automated Test Script to dispatch an order, a preceding
order entry Test Script needs to capture the unique number that the system generates, and then pass it on to the order
dispatch Test Script.
In cases like this, you will need to consider what inter-Test Script communication mechanism is appropriate to use.
Typical alternatives include passed parameters, writing and reading values in a disk file, and using global runtime
variables. Each strategy has pros and cons that make it more or less appropriate in each specific situation.
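To make the alternatives concrete, here is a minimal sketch in Python of two of the mechanisms named above: a passed
parameter and a shared disk file. The order number is simulated; in a real Test Script it would be captured from the
system under test.

```python
# A minimal sketch (illustrative only) of passing a system-generated order
# number from an order-entry Test Script to an order-dispatch Test Script.

import json
import os
import tempfile


def enter_order() -> str:
    """Order-entry Test Script: captures and returns the generated order number."""
    return "ORD-1001"  # in practice, read back from the application under test


def dispatch_order(order_number: str) -> None:
    """Order-dispatch Test Script: consumes the captured order number."""
    print(f"dispatching {order_number}")


if __name__ == "__main__":
    # Mechanism 1: passed parameter (both scripts run in the same process).
    dispatch_order(enter_order())

    # Mechanism 2: disk file (works across separate processes or machines).
    handoff = os.path.join(tempfile.gettempdir(), "order_handoff.json")
    with open(handoff, "w") as f:
        json.dump({"order_number": enter_order()}, f)
    with open(handoff) as f:
        dispatch_order(json.load(f)["order_number"])
```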
|
Define Initial Dependencies Between Test Suite Elements
This is primarily associated with the sequencing of the Test Scripts (and possibly Test Suites) for runtime execution.
Tests that run without the correct dependencies being established run the risk of either failing or reporting anomalous
data.
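One way to record such dependencies explicitly is sketched below, assuming Python 3.9+ for the graphlib module; the
script names are hypothetical. Each Test Script declares its predecessors, and a topological sort derives an execution
order that respects them.

```python
# A minimal sketch of declaring dependencies between Test Suite elements and
# deriving an execution order that respects them (requires Python 3.9+).

from graphlib import TopologicalSorter

# Each Test Script maps to the scripts that must have run before it.
dependencies = {
    "dispatch_order": {"enter_order"},
    "enter_order": {"create_customer"},
    "create_customer": set(),
}

if __name__ == "__main__":
    execution_order = list(TopologicalSorter(dependencies).static_order())
    print(execution_order)  # ['create_customer', 'enter_order', 'dispatch_order']
```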
|
Visually Model the Test Implementation Architecture
If you have access to a UML modeling or drawing tool, you may wish to create a diagram of the Test Implementation Model
that depicts the key elements of the automated test software. You might also diagram some key aspects of the Test
Automation Architecture in a similar way.
Another approach is to draw these diagrams on a whiteboard that is easily visible to the test team.
|
Refine the Test Suite Structure
As the project progresses, Test Suites are likely to change: new Test Scripts will be added and old Test Scripts
updated, reordered, or deleted. These changes are a natural part of Test Suite maintenance, and you need to embrace
them rather than avoid them.
If you do not actively maintain the Test Suites, they will quickly become broken and fall into disuse. Left for a few
builds, a Test Suite may take extensive effort to resurrect, and it may be easier to simply abandon it and create a new
one from scratch.
|
Maintain Traceability Relationships
Using the Traceability requirements for the project, update the traceability relationships as required.
|
Evaluate and Verify Your Results
You should evaluate whether your work is of appropriate quality and whether it is complete enough to be useful to those
team members who will make subsequent use of it as input to their work. Where possible, use checklists to verify that
quality and completeness are good enough.
Have the people who will perform the downstream tasks that rely on your work review your interim work, and do this
while you still have time to act on their concerns. You should also evaluate your work
against the key input work products to make sure that you have represented them accurately and sufficiently. It may be
useful to have the author of the input work product review your work on this basis.
|