Requirements for an Automation Framework

While some tools available in the market allow test drivers to be created in proprietary scripting languages, such tools are less suitable for the "developer as a tester" role. First, learning a new language represents a barrier and, second, the concurrent use of two languages demands constant mental "switching." For this reason, our first requirement for an automation environment states:

The language used to specify tests is the coding language itself.

In our case, the coding language is Java. This means that we can use the same tools (e.g., version management) to handle both the test code and the app code. One drawback is that the test specification is contained in the code only implicitly: neither input data nor output data is marked as such; both are distributed all over the program code. A frequently used approach for class-based test drivers provides a static method that runs the tests for each class [Hunt98]. Another approach is to move the tests out into a dedicated tester class. We opted for this variant earlier in . The benefits and drawbacks of these two approaches are shown in Table 2.1 [McGregor01, p. 185].

Table 2.1: Benefits and drawbacks of test drivers.

Static method in CUT (class under test)

  Benefits:
  - Allows access to private parts of the class
  - Easy reuse of the test code in subclasses

  Drawbacks:
  - Does not separate app code and test code
  - More code is in the app

Separate tester class

  Benefits:
  - Separates app and test code
  - Test code can be organized independently of the class structure

  Drawbacks:
  - Requires an additional class
  - Private parts of the class cannot be accessed for white-box tests
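The contrast between the two driver variants can be sketched in a few lines of Java. The Dictionary class and all method names below are illustrative inventions, not part of any framework; the sketch only shows where the test code lives in each variant and what it can see.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical class under test.
class Dictionary {
    private final Map<String, String> entries = new HashMap<>();

    void addEntry(String word, String translation) {
        entries.put(word, translation);
    }

    String translate(String word) {
        return entries.get(word);
    }

    // Variant 1: static test method inside the CUT.
    // It could access private parts (entries), but it ships with the app code.
    static void testClass() {
        Dictionary dict = new Dictionary();
        dict.addEntry("Wort", "word");
        if (!"word".equals(dict.translate("Wort"))) {
            System.out.println("Dictionary.testClass failed");
        } else {
            System.out.println("Dictionary.testClass successful");
        }
    }
}

// Variant 2: separate tester class. App and test code are cleanly
// separated, but only the public interface of Dictionary is visible here.
public class DictionaryTester {
    static void testTranslate() {
        Dictionary dict = new Dictionary();
        dict.addEntry("Wort", "word");
        if (!"word".equals(dict.translate("Wort"))) {
            System.out.println("DictionaryTester.testTranslate failed");
        } else {
            System.out.println("DictionaryTester.testTranslate successful");
        }
    }

    public static void main(String[] args) {
        Dictionary.testClass();
        testTranslate();
    }
}
```

Variant 1 trades separation for access; variant 2 trades access for separation, which is exactly the tension summarized in Table 2.1.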

Our main argument is the separation of app code and test code, which is important mainly for software shipping. For this reason, our second requirement states:

We need to separate app code from test code.

The granularity used to specify, run, and verify tests is normally a test case. Pol et al. [00, p. 528] define a test case as follows: "A test case describes a test to be executed, which is oriented to a specific test objective." This description has to include the target object, the input and output parameters, the context, and side effects. In the case of executable tests, all of these points are also reflected in the program code. The decisive aspect is that the execution of a test case must not have any impact on subsequent test cases. Otherwise, dependencies between tests can cause a single error to have a nonlocal impact: if we rely on a specific order of the test run, then the failure of one test will cause false alarms in subsequent tests. For this reason, we establish another requirement:

Test cases have to be executed and verified separately from one another.
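The independence requirement can be illustrated with a short sketch: each test builds the state it needs from scratch instead of relying on what an earlier test left behind. Dictionary and the test names are again hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class IndependentTests {
    // Hypothetical class under test.
    static class Dictionary {
        private final Map<String, String> entries = new HashMap<>();
        void addEntry(String word, String translation) { entries.put(word, translation); }
        String translate(String word) { return entries.get(word); }
    }

    // A shared Dictionary field would couple the tests: a failure in one
    // test could corrupt the state and trigger false alarms in the next.
    // Instead, every test creates its own fixture.
    static boolean testTranslateKnownWord() {
        Dictionary dict = new Dictionary();      // fresh fixture
        dict.addEntry("Wort", "word");
        return "word".equals(dict.translate("Wort"));
    }

    static boolean testTranslateUnknownWord() {
        Dictionary dict = new Dictionary();      // fresh fixture again
        return dict.translate("Auto") == null;   // nothing was added here
    }

    public static void main(String[] args) {
        System.out.println("testTranslateKnownWord: " + testTranslateKnownWord());
        System.out.println("testTranslateUnknownWord: " + testTranslateUnknownWord());
    }
}
```

Because each test owns its fixture, the two methods can run in any order, or alone, with the same result.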

The independence of single test cases conflicts with the fact that we need a means to organize related tests so that we can handle them jointly. Such a group of test cases is called a test suite. Our next requirement states:

We need a way to arbitrarily group test cases into test suites.
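One way to meet this requirement is the composite pattern: a suite is itself a test, so suites can contain both single test cases and other suites, allowing arbitrary grouping. The interface and class names below are an illustrative sketch, not any framework's real API.

```java
import java.util.ArrayList;
import java.util.List;

public class SuiteSketch {
    interface Test {
        void run();   // a single, independently runnable test case
    }

    // A suite is itself a Test, so suites can be nested arbitrarily.
    static class TestSuite implements Test {
        private final List<Test> tests = new ArrayList<>();
        void add(Test test) { tests.add(test); }
        public void run() {
            for (Test test : tests) {
                test.run();
            }
        }
    }

    public static void main(String[] args) {
        TestSuite dictionaryTests = new TestSuite();
        dictionaryTests.add(() -> System.out.println("testTranslate run"));
        dictionaryTests.add(() -> System.out.println("testAddEntry run"));

        TestSuite allTests = new TestSuite();   // a suite of suites
        allTests.add(dictionaryTests);
        allTests.run();
    }
}
```

Running the top-level suite runs every test it transitively contains, while each test case still executes on its own.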

When taking a closer look at our first attempt at a test driver in , we can see that the success or failure of a test was communicated only by text output, for example:

if (!translation.equals("word")) {
 System.out.println("Test case 1 failed...");
} else {
 System.out.println("Test case 1 successful.");
}
Although this may be acceptable for a single test case, with 20, 100, or 5,000 tests at hand we would have to search several pages of text output for "successful" and "failed" to determine the number of successful and failed test cases. For this reason, our last requirement for a test automation environment states:

The success or failure of a test run should be visible at a glance.
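A minimal driver that satisfies this requirement counts results and condenses them into a single summary line, printing detail only for failures. All names here are illustrative; the third check simulates a failing test.

```java
public class SummaryDriver {
    private int run = 0;
    private int failed = 0;

    // Records one test result; prints detail only when the test fails.
    void check(String testName, boolean passed) {
        run++;
        if (!passed) {
            failed++;
            System.out.println("FAILED: " + testName);
        }
    }

    // The one line we actually need to read after a test run.
    void printSummary() {
        System.out.println(run + " tests run, " + failed + " failed");
    }

    public static void main(String[] args) {
        SummaryDriver driver = new SummaryDriver();
        driver.check("testTranslate", "word".equals("word"));
        driver.check("testAddEntry", 1 + 1 == 2);
        driver.check("testRemoveEntry", false);   // simulated failure
        driver.printSummary();
    }
}
```

Instead of scanning pages of "successful" lines, the reader sees "3 tests run, 1 failed" and the names of the failing tests, and nothing else.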

This set of requirements will serve as a basis for evaluating frameworks for unit test automation. Note that no tool is perfect. The tester or developer will have to ensure that the rules and "best practices" are observed wherever the selected framework fails to enforce them.