Identify Test Infrastructure Elements
Facilitate common test scenarios
Some tests share a common scenario or procedure structure when they are executed, but the same procedure
needs to be conducted many times against different target test items. In the case of test automation, it can be useful
to create common test scripts or utility functions that can be reused in many different contexts, so that these common
test scenarios are undertaken efficiently. This provides a central point of modification if the test scenario
needs to be altered. Examples include conducting standard boundary tests on appropriate classes of interface elements,
and validating UI elements for adherence to UI design standards.
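As an illustration, a reusable boundary-test utility might look like the following minimal Python sketch; the function names and the set_field and submit callables are hypothetical, not part of any particular tool:

    # A minimal sketch of a reusable boundary-test utility (hypothetical names).
    # Every numeric input field can be exercised with the same standard
    # boundary values, giving a single point of change if the scenario evolves.

    def boundary_values(minimum, maximum):
        """Return the standard boundary test values for a numeric range."""
        return [minimum - 1, minimum, minimum + 1,
                maximum - 1, maximum, maximum + 1]

    def run_boundary_test(set_field, submit, minimum, maximum):
        """Apply standard boundary values to one input field.

        set_field and submit are callables supplied by each specific test,
        so the same scenario is reusable against many target test items.
        """
        results = {}
        for value in boundary_values(minimum, maximum):
            set_field(value)
            results[value] = submit()  # True if accepted, False if rejected
        return results

Each concrete test then supplies only the callables that know how to drive its particular target item, keeping the boundary scenario itself in one place.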
Facilitate test data dependencies
When tests are to be conducted in a given test environment configuration, there is the potential for conflicts in the
test data values that are used. This problem is compounded when the environment is shared by multiple test team
members. Consider using a data-driven approach that uncouples test data values from the test scripts that use them, and
provides a central point of collection and modification of the test data. This provides two key benefits: it gives
visibility of the test data to all test team members (allowing them to avoid potential conflicts in test data use), and
it provides a central point of modification for the test data when it needs to be updated.
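For example, a data-driven test might read its values from a shared CSV file rather than embedding them in the script. The minimal sketch below assumes an illustrative file name and column layout:

    # Sketch of a data-driven test: test data lives in a shared CSV file
    # (the central point of collection and modification), not in the script.
    import csv

    def load_test_data(path="shared/login_test_data.csv"):
        """Yield one row of test data per test iteration."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield row["username"], row["password"], row["expected_result"]

    def test_login(perform_login):
        # perform_login is whatever driver the specific test environment uses.
        for username, password, expected in load_test_data():
            assert perform_login(username, password) == expected

Because every team member reads the same file, conflicts in test data use become visible, and updates happen in one place.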
Facilitate test state dependencies
Most tests require the system to be in a specific known state before they are executed, and should return the system to
a known state when they complete. Common dependencies involve security rights (function or data), dynamic or
context-sensitive data (for example, system dates, order numbers, user ID preferences, and so on), and data expiration
cycles (for instance, security passwords, product expiration dates, and so on). Some tests are highly dependent on each
other (for example, one test may create a unique order number, and a subsequent test may need to dispatch the same
order number).
A common solution is to use test suites to sequence dependent tests in the correct system state order. The test suites
can then be coupled with appropriate system recovery and setup utilities. For automated test efforts, some solutions
may involve using centralized storage of dynamic system data and the use of variables within the test scripts that
reference the centralized information.
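A minimal sketch of this approach, using Python's unittest with hypothetical order helpers, shows dependent tests sequenced explicitly and the dynamic data (the order number) held in one central place:

    # Sketch: sequencing dependent tests in a suite, with dynamic system
    # data (an order number) stored centrally and referenced by both tests.
    import unittest

    def create_order():
        # Stand-in for the system under test; returns a unique order number.
        return "ORD-0001"

    def dispatch_order(order_number):
        # Stand-in: would fail if the order did not exist.
        assert order_number is not None

    class OrderLifecycleTests(unittest.TestCase):
        order_number = None  # central storage for the shared dynamic data

        def test_create_order(self):
            type(self).order_number = create_order()
            self.assertIsNotNone(self.order_number)

        def test_dispatch_order(self):
            # Depends on test_create_order having run first.
            dispatch_order(self.order_number)

    def suite():
        # Explicit sequencing guarantees the correct system state order.
        s = unittest.TestSuite()
        s.addTest(OrderLifecycleTests("test_create_order"))
        s.addTest(OrderLifecycleTests("test_dispatch_order"))
        return s

In practice the suite would also be wrapped with the setup and recovery utilities mentioned above, so a failure partway through still leaves the system in a known state.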
Facilitate derived test data values
Tests sometimes need to calculate or derive appropriate data values from one or more aspects of the runtime system
state. This applies to test data values for both input and expected results. Consider developing utilities that
calculate the derived data values, simplifying test execution and eliminating potential inaccuracies introduced through
human error. Where possible, develop these utilities so that they can be utilized by both manual and automated test
efforts.
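For instance, an expected result may depend on the current system date. A small shared utility such as the following sketch (the 30-day validity rule is an assumption for illustration) can be imported by automated scripts and run from the command line by manual testers:

    # Sketch: a utility that derives an expected value from runtime state,
    # usable by both manual and automated test efforts.
    from datetime import date, timedelta

    def expected_expiry_date(issue_date=None, validity_days=30):
        """Derive the expected password expiry date from the issue date.

        validity_days=30 is an assumed business rule for illustration.
        """
        issue_date = issue_date or date.today()
        return issue_date + timedelta(days=validity_days)

    if __name__ == "__main__":
        # Manual testers can run this directly to obtain the expected value.
        print(expected_expiry_date())

Centralizing the calculation removes the arithmetic, and the associated risk of human error, from the person executing the test.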
Facilitate common test navigation paths
For test automation, you should consider isolating common navigation sequences, and implementing them using centralized
utility functions or test scripts. You can then reuse these common navigation sequences in many places, providing a
central point of modification if the navigation subsequently changes. These common navigation aids simply navigate the
application from one point to another; they typically do not perform any tests themselves, other than to verify their
start and end states.
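In code, such a navigation aid might be no more than the following sketch; the app object and its methods are placeholders for whatever your automation tool provides:

    # Sketch: a centralized navigation utility. It performs no tests of
    # its own beyond verifying its start and end states.

    def navigate_to_order_entry(app):
        """Navigate from the main menu to the order entry screen."""
        assert app.current_screen() == "main_menu"     # verify start state
        app.select_menu("Orders")
        app.select_menu("New Order")
        assert app.current_screen() == "order_entry"   # verify end state

If the menu structure later changes, only this one function needs to be updated.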
Identify Test-Specific Design Needs
Identify test interfaces
Consider the interfaces that have been identified: are there additional requirements that the test effort will need
included in the software design, and subsequently exposed in the implementation? In some cases, additional interfaces
will be required specifically to support the test effort, or existing interfaces will require additional operating
modes or modified message signatures (changes to input and return parameters).
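For example, an existing reporting interface might gain an additional test operating mode, as in this hypothetical sketch, so that automated comparisons do not fail on run-specific values such as timestamps:

    # Sketch: an existing interface given an additional test operating mode.
    # The test_mode parameter is an illustrative assumption; it makes the
    # output deterministic for automated result comparison.
    import datetime

    def generate_report(data, test_mode=False):
        """Render a report; in test mode, use a fixed timestamp."""
        timestamp = (datetime.datetime(2000, 1, 1)
                     if test_mode else datetime.datetime.now())
        return {"generated": timestamp.isoformat(), "rows": list(data)}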
In relation to the target deployment environments (as captured in the test environment configurations) and the
development schedule itself, identify the constraints and dependencies placed on the test effort. These dependencies
may necessitate the provision of stubs, either to simulate elements of the environment that will not be available (or
are too resource-prohibitive to establish for testing purposes), or to provide the opportunity for early testing of
components of the partially completed system.
Identify inbuilt test functions
Some tests are potentially valuable, but prohibitively expensive to implement as true black-box tests. Furthermore, in
high-reliability environments, it is important to be able to test for and isolate faults as quickly as possible to
enable fast resolution. In these cases, it can be useful to build tests directly into the executable software itself.
There are different approaches that you can take to achieve this: two of the most common are built-in self-tests
(where the software uses redundant processing cycles to perform self-integrity tests) and diagnostic routines (which
can be performed when the software is sent a diagnostic event message, or when the system is configured to run with
diagnostic routines enabled).
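The following minimal sketch illustrates both styles in one component; the checksum scheme and the RUN_DIAGNOSTICS flag are assumptions for illustration:

    # Sketch: a built-in self-test plus a configurable diagnostic routine.
    import os
    import zlib

    class OrderStore:
        def __init__(self):
            self.orders = {}
            self._checksum = self._compute_checksum()
            # Diagnostic routines run only when explicitly enabled.
            self.diagnostics_enabled = os.environ.get("RUN_DIAGNOSTICS") == "1"

        def _compute_checksum(self):
            return zlib.crc32(repr(sorted(self.orders.items())).encode())

        def self_test(self):
            """Built-in self-test: verify internal data integrity."""
            return self._checksum == self._compute_checksum()

        def add_order(self, order_id, payload):
            # Diagnostic mode checks integrity before every update, so a
            # fault is detected and isolated as close to its source as possible.
            if self.diagnostics_enabled and not self.self_test():
                raise RuntimeError("OrderStore failed its self-integrity check")
            self.orders[order_id] = payload
            self._checksum = self._compute_checksum()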
Identify test design constraints
Some of the design and implementation choices of the development team will either enable or inhibit the test effort.
While some of these choices are unavoidably necessary, there are many smaller decisions (especially in the area of
implementation) that have minimal impact on the development team, but significant impact on the test team.
Areas to consider include:
- Use of standard, recognized communication protocols
- Use of UI implementation components that can be recognized by test automation tools
- Adherence to UI design rules, including the naming of UI elements
- Consistent use of UI navigation conventions
Define Software Testability Requirements
It is important to clearly explain to the development team why test-specific features need to be built into the
software. Key reasons will typically fall into one of the following areas:
- To enable both manual and automated tests to be implemented by providing an interface between the target test item and either the manual or automated test. This is typically most relevant as a test automation concern, to help overcome the limitations of test automation tools in being able to access the software application for both information input and output.
- To enable built-in self-tests to be conducted by the developed software itself.
- To enable target test items to be isolated from the rest of the developed system, and then tested.
Test-specific features built into the software need to strike a balance between the value of a built-in test feature
and the effort necessary to implement and test it. Examples of built-in test features include producing audit logs,
self-diagnostic functions, and interfaces to interrogate the value of internal variables.
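The first and last of these examples can be as lightweight as the following sketch, in which a hypothetical component keeps an audit log and exposes a read-only interface to its internal variables:

    # Sketch: two built-in test features in one component, an audit log
    # and a read-only interface to interrogate internal variables.
    class PricingEngine:
        def __init__(self):
            self._last_discount = None
            self.audit_log = []   # audit log: one record per pricing decision

        def price(self, unit_price, quantity):
            # Illustrative business rule: 10% discount on orders of 10 or more.
            self._last_discount = 0.10 if quantity >= 10 else 0.0
            total = unit_price * quantity * (1 - self._last_discount)
            self.audit_log.append((unit_price, quantity,
                                   self._last_discount, total))
            return total

        def debug_snapshot(self):
            """Test interface: interrogate internal state without changing it."""
            return {"last_discount": self._last_discount,
                    "audit_entries": len(self.audit_log)}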
Another common use of test-specific functionality is during integration work, where there is the need to provide stubs
for components or subsystems that are not yet implemented or incorporated. There are two main implementation styles
used for stubs:
- Stubs and drivers that are simply "dummies" with no functionality other than being able to provide a specific predefined value (or values) as either input or as a return value.
- Stubs and drivers that are more intelligent and can "simulate" or approximate more complex behavior.
This second style of stub also provides a powerful means of isolating components or groups of components from the rest
of the system, thus providing flexibility in the implementation and execution of tests. As with built-in test
features, weigh the value of a complex stub against the effort necessary to implement and test it. Use this second
style prudently for two reasons: first, it takes more resources to implement, and second, it is easy to overlook the
stub's existence and forget to remove it later.
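The difference between the two styles can be made concrete with a short sketch; the inventory service here is a hypothetical example:

    # Sketch: the two stub styles, using a hypothetical inventory service.

    class DummyInventoryStub:
        """Style 1: a dummy that returns a fixed, predefined value."""
        def stock_level(self, item_id):
            return 100

    class SimulatedInventoryStub:
        """Style 2: an intelligent stub that approximates real behavior."""
        def __init__(self, initial_stock):
            self._stock = dict(initial_stock)

        def stock_level(self, item_id):
            return self._stock.get(item_id, 0)

        def reserve(self, item_id, quantity):
            if self._stock.get(item_id, 0) < quantity:
                raise ValueError("insufficient stock")  # simulated failure path
            self._stock[item_id] -= quantity

The simulated stub can exercise failure paths and stateful behavior that the dummy cannot, which is exactly where the extra implementation and removal costs come from.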
Record your findings in terms of test-specific requirements on the design and implementation models directly, or using
one or more test interface specifications.
Define Test Infrastructure
Test automation elements
Key requirements or features of the test automation infrastructure include:
- Navigation model: Common approaches are round-trip, segmented, or hybrid; other alternatives include an Action-Word framework or screen navigation tables
- External Data Access: A method for test scripts to access test data held externally to the test instructions
- Error Reporting and Recovery: Common error-handling routines and Test Suite recovery execution wrappers (a sketch follows this list)
- Security and Access Profiles: Automated Test Execution User IDs
- Self-Test Support: The ability for the software to conduct self-tests
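A recovery execution wrapper, for instance, might be as simple as the following sketch, where restore_environment is an assumed utility that returns the configuration to a known state:

    # Sketch: a Test Suite recovery execution wrapper. If a test script
    # fails, the wrapper logs the error, restores the environment to a
    # known state, and continues with the remaining scripts.

    def run_suite(test_scripts, restore_environment, log):
        results = {}
        for script in test_scripts:
            try:
                script()
                results[script.__name__] = "pass"
            except Exception as exc:          # common error-handling routine
                log(f"{script.__name__} failed: {exc}")
                results[script.__name__] = "fail"
                restore_environment()         # recover to a known state
        return results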
Record your decisions as definitions in the implementation sections of the Test Automation Architecture, and your
process guidance in one or more Test Guidelines (or as Test Scripts, Test Suites, or test library utility routines).
See Artifact: Test Automation Architecture for further suggestions.
Manual test elements
Key requirements or features of the manual test infrastructure include:
- Test Data Repository: A common repository for the definition of test data
- Restoration and Recovery: A method to restore or recover the test environment configuration to a known state (a sketch follows this list)
- Test Item Isolation: A method to isolate target test items from the rest of the developed system so that they can be tested
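Even when the tests themselves are manual, the restoration step is often scripted. A minimal sketch, assuming a PostgreSQL test database and a baseline snapshot file:

    # Sketch: restoring the test environment to a known state before a
    # manual test session. The database name and snapshot path are
    # illustrative assumptions.
    import subprocess

    def restore_known_state(snapshot="baselines/test_env_baseline.sql"):
        subprocess.run(["psql", "testdb", "-f", snapshot], check=True)

Keeping such a utility alongside the Test Guidelines gives every tester the same reproducible starting state.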