Design tests
Select a set of Test Cases to develop into a detailed, executable Test Suite. Sketch an outline of the Test Suite as a
logical sequence of steps. Review the data requirements of the Test Case and determine whether existing data sets are
sufficient or whether you need to develop new test data for this Test Suite. Examine the Supporting Requirements that
apply to this Test Case, and note where they affect the expected results of a step. Develop a detailed, procedural Test
Suite based on your design, using a request-response style that declares an exact input and expects an exact output.
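A minimal sketch of one request-response step, assuming JUnit 4; the TemperatureConverter class, its method, and the chosen input and expected output are hypothetical stand-ins for your own test design:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Minimal stand-in for the class under test, so the sketch compiles on its own.
    class TemperatureConverter {
        double toFahrenheit(double celsius) { return celsius * 9.0 / 5.0 + 32.0; }
    }

    public class TemperatureConversionTest {

        // One procedural step of the Test Suite:
        // exact input 0.0 C, exact expected output 32.0 F.
        @Test
        public void convertsFreezingPointExactly() {
            TemperatureConverter converter = new TemperatureConverter();
            double response = converter.toFahrenheit(0.0); // request: exact input
            assertEquals(32.0, response, 0.0);             // response: exact expected output
        }
    }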
Execute tests
Select an implementation technique for this design. At a minimum, determine whether the Test Suite will be manual or
automated. If the Test Case is well understood, it is usually best to implement an automated Test Script without first
writing a manual procedure. However, if the Test Case is novel, writing a manual Test Script first can help validate the
design of the test and aid collaboration with other team members.
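For the automated route, the Test Script is simply an executable version of the same stepwise procedure. A sketch assuming JUnit 4 and a hypothetical LoginService; the steps and their semantics are illustrative, not taken from any real API:

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Minimal stand-in for the class under test, so the sketch compiles on its own
    // (it tracks sessions and ignores the password).
    class LoginService {
        private final java.util.Set<String> sessions = new java.util.HashSet<>();
        boolean login(String user, String password) { return sessions.add(user); }
        void logout(String user) { sessions.remove(user); }
    }

    public class LoginTestScript {

        // Automated Test Script: each step of the manual procedure becomes
        // one request-response pair with an exact expected outcome.
        @Test
        public void loginProcedureHappyPath() {
            LoginService service = new LoginService();
            assertTrue(service.login("alice", "secret"));  // step 1: valid credentials accepted
            assertFalse(service.login("alice", "secret")); // step 2: duplicate session rejected
            service.logout("alice");                       // step 3: end the session
            assertTrue(service.login("alice", "secret"));  // step 4: login possible again
        }
    }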
Validate class in isolation
Validate the class apart from the system, and ensure that the QoS requirements for the class are still met after the
optimization. It is common to use specially created "testing buddy" classes, to "webify" your application so that it
automatically creates web pages that drive the execution, or to use model-level debugging facilities to validate the class.
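A minimal sketch of the "testing buddy" technique, assuming JUnit 4; MessageQueue, Dispatcher, and the 500-microsecond budget are all hypothetical stand-ins for your own collaborator interface, class under test, and QoS requirement:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    interface MessageQueue { void post(String message); }

    // Minimal stand-in for the class under test, so the sketch compiles on its own.
    class Dispatcher {
        private final MessageQueue queue;
        Dispatcher(MessageQueue queue) { this.queue = queue; }
        void dispatch(String message) { queue.post(message); }
    }

    // The "testing buddy": replaces the real collaborator so the class
    // can execute apart from the rest of the system.
    class MessageQueueBuddy implements MessageQueue {
        int accepted = 0;
        public void post(String message) { accepted++; } // record the call instead of forwarding it
    }

    public class DispatcherIsolationTest {

        @Test
        public void meetsLatencyBudgetInIsolation() {
            MessageQueueBuddy buddy = new MessageQueueBuddy();
            Dispatcher dispatcher = new Dispatcher(buddy);

            long start = System.nanoTime();
            dispatcher.dispatch("ping");
            long elapsedMicros = (System.nanoTime() - start) / 1_000;

            assertEquals(1, buddy.accepted); // functional result unchanged
            assertTrue("QoS: dispatch within 500 microseconds",
                       elapsedMicros < 500); // hypothetical QoS budget
        }
    }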
Validate class in collaboration
Validate the system after re-connecting the class to its collaboration. The design collaborations (collaborations
made up of design classes) must still be able to replicate the use case sequences, and the quality of service and
other optimization concerns must still be met.
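Continuing the sketch above (and reusing its MessageQueue and Dispatcher types), the same scenario can be replayed with a real collaborator wired back in place of the buddy; RealMessageQueue and the end-to-end budget are again hypothetical:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    // Hypothetical production collaborator, defined here only so the sketch compiles.
    class RealMessageQueue implements MessageQueue {
        private String last;
        public void post(String message) { last = message; }
        String lastMessage() { return last; }
    }

    public class DispatcherCollaborationTest {

        // Replays the use case step with the real collaborator reconnected.
        @Test
        public void useCaseSequenceHoldsAfterReconnection() {
            RealMessageQueue queue = new RealMessageQueue();
            Dispatcher dispatcher = new Dispatcher(queue);

            long start = System.nanoTime();
            dispatcher.dispatch("ping");
            long elapsedMicros = (System.nanoTime() - start) / 1_000;

            assertEquals("ping", queue.lastMessage());    // same observable result as in isolation
            assertTrue("end-to-end QoS budget",
                       elapsedMicros < 2_000);            // hypothetical budget
        }
    }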
Analyze and communicate test results
Analyze the results of the execution. Always document any defects that a test finds, and decide whether to repair each
deviation from the expected results immediately, defer its repair, or ignore it.
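One lightweight way to capture that decision is a defect record that carries the triage outcome alongside the deviation; all names here are hypothetical:

    // Hypothetical defect record: one entry per deviation from expected results.
    enum Resolution { REPAIR_NOW, DEFER_REPAIR, IGNORE }

    class Defect {
        final String testId;      // which Test Suite step found the deviation
        final String deviation;   // what differed from the expected result
        final Resolution resolution;

        Defect(String testId, String deviation, Resolution resolution) {
            this.testId = testId;
            this.deviation = deviation;
            this.resolution = resolution;
        }
    }

    // Example: new Defect("TS-3.step-4", "got 31.9, expected 32.0", Resolution.DEFER_REPAIR);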