Guideline: Implementing Tests
This guideline provides more details on some aspects of implementing tests.
Main Description

Select Appropriate Implementation Technique

Manual Test Scripts

Many tests are best conducted manually, and you should avoid the trap of attempting to inappropriately automate tests. Usability tests are an area where manual testing is in many cases a better solution than an automated one. Also, tests that require validation of the accuracy and quality of the physical outputs from a software system generally require manual validation. As a general heuristic, it is a good idea to begin the first tests of a particular Target Test Item with a manual implementation: this approach allows the tester to learn about the target item, adapt to unexpected behavior from it, and apply human judgment to determine the next appropriate action to be taken.

Sometimes, manually conducted tests will be subsequently automated and reused as part of a regression testing strategy. Note, however, that it is not necessary or desirable (or even possible) to automate every test that you could otherwise conduct manually. Automation brings certain advantages in speed and accuracy of test execution, visibility and collation of detailed test results, and in efficiency of creating and maintaining complex tests. However, like all useful tools, it is not the solution to all of your needs.

Automation comes with certain disadvantages: these basically amount to an absence of human judgment and reasoning during test execution. The automation solutions currently available simply do not have the cognitive abilities that a human does, and it is arguably unlikely that they ever will. During implementation of a manual test, human reasoning can be applied to the observed responses of the system to stimuli. Current automated test techniques and their supporting tools typically have limited ability to notice the implications of certain system behaviors, and have minimal ability to infer possible problems through deductive reasoning.

Programmed Test Scripts

This is arguably the method of choice practiced by most testers who use test automation. In its purest form, this practice is performed in the same manner (and using the same general principles) as software programming. As such, most methods and tools used for software programming are generally applicable and useful to test automation programming.

Using either a standard software development environment (such as Microsoft Visual Studio or IBM Visual Age) or a specialized test automation development environment (such as the IDE provided with IBM Rational Robot), the tester is free to harness the features and power of the development environment to best effect.
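
As a simple illustration (not tied to any particular tool named above), a programmed Test Script can be structured with the same care as ordinary application code; the transfer() function below is an invented stand-in for whatever interface the Target Test Item actually exposes.

```python
# Sketch of a programmed Test Script written like ordinary code.
import unittest


def transfer(balances, source, target, amount):
    """Toy stand-in for the behavior under test."""
    if amount <= 0 or balances[source] < amount:
        raise ValueError("invalid transfer")
    balances[source] -= amount
    balances[target] += amount
    return balances


class TransferFundsTest(unittest.TestCase):
    def setUp(self):
        # Establish a known starting state before each test.
        self.balances = {"A": 100.0, "B": 0.0}

    def test_transfer_moves_funds(self):
        transfer(self.balances, "A", "B", 25.0)
        self.assertEqual(self.balances["A"], 75.0)
        self.assertEqual(self.balances["B"], 25.0)

    def test_overdraft_is_rejected(self):
        with self.assertRaises(ValueError):
            transfer(self.balances, "A", "B", 500.0)


if __name__ == "__main__":
    unittest.main()
```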

The negative aspects of programming automated tests are related to the negative aspects of programming itself as a general technique. For programming to be effective, some consideration should be given to appropriate design: without this the implementation will likely fail. If the developed software will likely be modified by different people over time (the usual situation), then some consideration must be given to adopting a common style and form to be used in program development, and ensuring its correct use. Arguably the two most important concerns relate to the misuse of this technique.

First, there is a risk that a tester will become engrossed in the features of the programming environment and spend too much time crafting elegant, sophisticated solutions to problems that could be solved by simpler means. The result is that the tester wastes precious time on what are essentially programming tasks, to the detriment of time that could be spent actually testing and evaluating the Target Test Items. It requires both discipline and experience to avoid this pitfall.

Second, there is the risk that the program code used to implement the test will itself contain bugs introduced through human error or omission. Some of these bugs will be easy to debug and correct in the natural course of implementing the automated test; others will not. Just as errors can be elusive to detect in the Target Test Item, it can be equally difficult to detect errors in test automation software. Furthermore, errors may be introduced where algorithms used in the automated test implementation are based on the same faulty algorithms used by the software implementation itself. This results in errors going undetected, hidden by the false security of automated tests that apparently execute successfully. Mitigate this risk by using different algorithms in the automated tests wherever possible.
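
For example, a minimal sketch of this mitigation (the compound_interest() function here is an invented stand-in for the production code under test): the test recomputes the expected result with a deliberately different algorithm, so that a mistake shared by both implementations is less likely to hide a defect.

```python
# Sketch: verify the system's result with an independent algorithm.
# compound_interest() stands in for the production implementation under test.

def compound_interest(principal, rate, years):
    """Production-style closed-form calculation (the code being tested)."""
    return principal * (1 + rate) ** years


def compound_interest_oracle(principal, rate, years):
    """Independent oracle: iterate year by year instead of using the formula."""
    balance = principal
    for _ in range(years):
        balance += balance * rate
    return balance


def test_compound_interest_agrees_with_oracle():
    for principal in (0.0, 100.0, 2500.0):
        for years in (0, 1, 10):
            expected = compound_interest_oracle(principal, 0.05, years)
            actual = compound_interest(principal, 0.05, years)
            assert abs(actual - expected) < 1e-6, (principal, years)


if __name__ == "__main__":
    test_compound_interest_agrees_with_oracle()
    print("oracle comparison passed")
```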

Recorded or Captured Test Scripts

A number of test automation tools provide the ability to record or capture human interaction with a software application and produce a basic Test Script. Most of these tools produce a Test Script implemented in some form of high-level, normally editable, programming language. The most common designs work in one of the following ways:

  • Capturing the interaction with the client UI of an application by intercepting the inputs sent from the client hardware peripheral input devices (mouse, keyboard, and so forth) to the client operating system. In some solutions, this is done by intercepting high-level messages exchanged between the operating system and the device driver that describe the interactions in a somewhat meaningful way; in other solutions, it is done by capturing low-level messages, often at the level of time-based movements in mouse coordinates or key-up and key-down events.
  • Intercepting the messages sent and received across the network between the client application and one or more server applications. The successful interpretation of those messages typically relies on the use of standard, recognized messaging protocols, such as HTTP, SQL, and so forth. Some tools also allow the capture of base communications protocols such as TCP/IP; however, Test Scripts of this nature can be more complex to work with.

While these techniques are generally useful to include as part of your approach to automated testing, some practitioners feel that these techniques have limitations. One of the main concerns is that some tools simply capture application interaction and do nothing else. Without the additional inclusion of observation points that capture and compare system state during subsequent script execution, the basic Test Script cannot be considered to be a fully-formed test. Where this is the case, the initial recording will need to be subsequently augmented with additional custom program code to implement observation points within the Test Script.
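
To make this concrete, here is a hedged sketch (the ui object and its click/type/read calls are hypothetical placeholders for whatever playback API a capture tool generates against, not a real library). The first block is the kind of navigation a recorder captures on its own; the observation point at the end is the part that typically has to be added by hand before the script is a fully formed test.

```python
# Hypothetical playback API standing in for a capture/replay tool's output;
# only the structure of the script matters here.
class FakeUI:
    def __init__(self):
        self.fields = {}

    def click(self, control):
        print(f"click {control}")

    def type(self, control, text):
        self.fields[control] = text

    def read(self, control):
        return self.fields.get(control, "")


ui = FakeUI()

# --- navigation as a recorder would capture it --------------------------
ui.click("MainMenu.Customers")
ui.type("CustomerForm.Name", "A. Example")
ui.click("CustomerForm.Save")

# --- observation point added by hand after recording --------------------
# Without checks like this, the recording replays actions but tests nothing.
saved_name = ui.read("CustomerForm.Name")
assert saved_name == "A. Example", f"unexpected value: {saved_name!r}"
print("observation point passed")
```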

Various authors have published books and essays on this and other concerns related to using test procedure record or capture as a test automation technique. To gain a more in-depth understanding of these issues, review the work available on the Internet by the following authors: James Bach, Cem Kaner, Brian Marick and Bret Pettichord, and the relevant content in the book Lessons Learned in Software Testing [KAN01].

Generated Tests

Some of the more sophisticated test automation software enables the actual generation of various aspects of the test (either the procedural aspects or the Test Data aspects of the Test Script) based on generation algorithms. This type of automation can play a useful part in your test effort, but should not be considered a sufficient approach by itself. The IBM® Rational® TestFactory® tool and the IBM® Rational® TestManager datapool generation feature are example implementations of this type of technology.
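
As a rough illustration of the idea only (not a description of how these tools work internally), generated Test Data can be as simple as enumerating combinations of input values and writing them out for a data-driven Test Script to consume; the field names and values below are invented for the sketch.

```python
# Sketch: generate Test Data rows from combinations of input values.
import csv
import itertools

account_types = ["checking", "savings"]
amounts = [0.01, 100.00, 9999.99]
currencies = ["USD", "EUR"]

# One generated row per combination of the input values.
rows = [
    {"account_type": a, "amount": amt, "currency": c}
    for a, amt, c in itertools.product(account_types, amounts, currencies)
]

# Persist the rows so a data-driven Test Script can iterate over them.
with open("transfer_datapool.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["account_type", "amount", "currency"])
    writer.writeheader()
    writer.writerows(rows)

print(f"generated {len(rows)} test data rows")
```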

Set Up Test Environment Preconditions

Manual walk-through of the test

Especially where Test Scripts will be automated, it can be beneficial to walk through the test manually first to confirm that the expected prerequisites are present. During the walk-through, you should verify the integrity of the environment, the software, and the test design. The walk-through is most relevant where you are using an interactive recording technique, and least relevant where you are programming the Test Script. Your objective is to verify that all of the elements required to implement the test successfully are present.

Where the software is known to be sufficiently stable or mature, you may elect to skip this step because you deem the risk of problems occurring in the areas the manual walk-through addresses to be relatively low.

Identify and confirm appropriateness of Test Oracles

Confirm that the test oracles that you plan to use are appropriate. Where they have not already been identified, now is the time for you to do so.

You should try to confirm through alternative means that the chosen Test Oracle(s) will provide accurate and reliable results. For example, if you plan to validate test results using a field displayed via the application's UI that indicates a database update has occurred, consider independently querying the back-end database to verify the state of the corresponding records in the database. Alternatively, you might ignore the results presented in an update confirmation dialog, and instead confirm the update by querying for the record through an alternative front-end function or operation.
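
A minimal sketch of such a cross-check follows; sqlite3 and the customers table are placeholders chosen to keep the example self-contained, and ui_reports_update() stands in for reading the confirmation field shown in the application's UI.

```python
# Sketch: confirm a Test Oracle by querying the database directly rather
# than trusting only what the UI reports.
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
connection.execute("INSERT INTO customers VALUES (1, 'A. Example', 'updated')")
connection.commit()


def ui_reports_update(customer_id):
    """Stand-in for the update confirmation displayed in the application UI."""
    return True


customer_id = 1
ui_says_updated = ui_reports_update(customer_id)

# Independent check: inspect the record behind the UI's back.
row = connection.execute(
    "SELECT status FROM customers WHERE id = ?", (customer_id,)
).fetchone()
db_says_updated = row is not None and row[0] == "updated"

assert ui_says_updated == db_says_updated, "UI and database disagree about the update"
print("oracle cross-check passed")
```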

Reset test environment and tools

Next, you should restore the environment (including the supporting tools) to its original state. As mentioned in previous steps, this will typically involve some form of basic operating environment reset, restoration of underlying databases to a known state, and so on, in addition to tasks such as loading paper into printers. While some reset tasks can be performed automatically, some aspects typically require human attention.
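
The automatable portion of such a reset might look like the following sketch; the baseline snapshot, working database, and results directory paths are hypothetical, and tasks such as loading paper into printers obviously remain manual.

```python
# Sketch of the automatable part of an environment reset.
import shutil
from pathlib import Path

BASELINE_DB = Path("baselines/app_baseline.db")  # hypothetical known-good snapshot
WORKING_DB = Path("work/app.db")                 # database the tests run against
RESULTS_DIR = Path("work/results")               # per-run output area


def reset_environment():
    # Start each run with an empty results area.
    if RESULTS_DIR.exists():
        shutil.rmtree(RESULTS_DIR)
    RESULTS_DIR.mkdir(parents=True)

    # Restore the database to its known baseline state.
    WORKING_DB.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(BASELINE_DB, WORKING_DB)


if __name__ == "__main__":
    reset_environment()
    print("environment reset to baseline")
```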

Set the implementation options of the test-support tools, which will vary depending on the sophistication of the tool. Where possible, you should consider storing the option settings for each tool so that they can be reloaded easily based on one or more predetermined profiles. In the case of manual testing, this will include tasks such as creating a new entry in a support system for logging the test results, or signing into an issue and change request logging system.

In the case of automated test implementation tools, there may be many different settings to be considered. Failing to set these options appropriately may reduce the usefulness and value of the resulting test assets.


Implement the Test

Implement navigation actions

Program, record, or generate the required navigation actions. Start by selecting your preferred navigation method. For most classes of system today, a mouse or other pointing device is the primary means of navigation. For example, the pointing and scribing device used with a Personal Digital Assistant (PDA) is conceptually equivalent to a mouse.

The secondary navigation means is generally that of keyboard interaction. In most cases, navigation will be made up of a combination of mouse-driven and keyboard-driven actions.

In some cases, you will need to consider voice-activated, light, visual, and other forms of recognition. These can be more troublesome to automate tests against, and may require the addition of special test-interface extensions to the application to allow audio and visual elements to be loaded and processed from file (rather than captured dynamically).

In some situations, you may want to (or need to) perform the same test using multiple navigation methods. There are different approaches that you can take to achieve this. For example:

  • Automate all of the tests using one method, and manually perform all or some subset of the tests using others
  • Separate the navigation aspects of the tests from the Test Data that characterize the specific test by building a logical navigation interface that allows either method to be selected to drive the test (see the sketch after this list)
  • Simply mix and match navigation methods
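
A minimal sketch of the logical navigation interface described in the second bullet follows; the Navigator class and its concrete mouse and keyboard variants are invented for illustration, with print() calls standing in for the real automation calls.

```python
# Sketch: the test is written against Navigator, so either navigation
# method can be selected to drive the same test.
from abc import ABC, abstractmethod


class Navigator(ABC):
    @abstractmethod
    def open_menu(self, menu): ...

    @abstractmethod
    def choose_item(self, item): ...


class MouseNavigator(Navigator):
    def open_menu(self, menu):
        print(f"click on menu '{menu}'")

    def choose_item(self, item):
        print(f"click on item '{item}'")


class KeyboardNavigator(Navigator):
    def open_menu(self, menu):
        print(f"press Alt+{menu[0]}")

    def choose_item(self, item):
        print(f"arrow to '{item}' and press Enter")


def run_save_as_test(nav: Navigator):
    # The same logical test, independent of how navigation is performed.
    nav.open_menu("File")
    nav.choose_item("Save As...")


if __name__ == "__main__":
    run_save_as_test(MouseNavigator())
    run_save_as_test(KeyboardNavigator())
```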

Implement observation points

At each point in the Test Script where an observation should be taken, use the appropriate Test Oracle to capture the desired information. In many cases, the information gained from the observation point will need to be recorded and retained to be referenced during subsequent control points.

Where this is an automated test, decide how the observed information should be reported from the Test Script. In most cases, it is appropriate simply to record the observation in a central Test Log along with its delta (the time elapsed since the start of the Test Script). In other cases, specific observations might be output separately to a spreadsheet or data file for more sophisticated uses.
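
The following sketch shows one way such logging might look; the Test Log format (a CSV file) and the observation names are assumptions made for the example.

```python
# Sketch: record observation points in a central Test Log, each entry
# tagged with its delta (time elapsed since the Test Script started).
import csv
import time

SCRIPT_START = time.monotonic()
observations = []


def record_observation(name, value):
    delta = time.monotonic() - SCRIPT_START
    observations.append({"delta_s": round(delta, 3), "name": name, "value": value})


# Example observation points taken during the script.
record_observation("login_status", "success")
record_observation("order_total", 42.50)

# Write the observations out as a simple Test Log for later analysis.
with open("test_log.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["delta_s", "name", "value"])
    writer.writeheader()
    writer.writerows(observations)
```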

Implement control points

At each point in the Test Script where a control decision should be taken, obtain and assess the appropriate information to determine the correct branch for the flow of control to follow. The data retrieved from prior observation points is usually input to control points.

Where a control point occurs and a decision is made about the next action in the flow of control, record in the Test Log both the input values to the control point and the resulting flow that is selected.
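
A brief sketch of a control point follows; the branch names and the login_status value (imagined as coming from an earlier observation point) are assumptions made for the example.

```python
# Sketch: a value captured at an earlier observation point decides which
# branch the Test Script follows, and both the input and the chosen branch
# are recorded in the Test Log.
test_log = []


def control_point(name, observed_value):
    branch = "verify_order_details" if observed_value == "success" else "capture_diagnostics"
    test_log.append({"control_point": name, "input": observed_value, "branch": branch})
    return branch


# Value obtained from a prior observation point.
login_status = "success"

next_step = control_point("after_login", login_status)
if next_step == "verify_order_details":
    print("continue: verify the order details screen")
else:
    print("divert: capture diagnostics and end the script")

print(test_log)
```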

Resolve errors in the test implementation

During test implementation, you will likely introduce errors into the test implementation itself. These errors may be the result of things that you have omitted from the test implementation, or things that you have failed to consider in the test environment. They must be resolved before the test can be considered completely implemented. Identify each error that you encounter and work through resolving it.

In the case of test automation that uses a programming language, this might include compilation errors due to undeclared variables and functions, or invalid use of those functions. Work your way through the error messages displayed by the compiler (or any other sources of error messages) until the Test Script is free of syntactical and other basic implementation errors.

During subsequent execution of the test, other errors in the test implementation might be found. Initially, these may appear to be failures in the Target Test Item: be diligent when analyzing test failures, so that you confirm that the failures are actually in the Target Test Item and not in some aspect of the test implementation.