The Case for Scripted Testing


A while back I wrote about ‘Exploratory Testing’. Coming from a background of structured, scripted testing, I saw examples where some Exploratory testing cycles had problems. People were not shy about pointing out to me that, correctly done, Exploratory testing can be a perfectly valid form of testing. Some people even proposed that it was ‘better’ than scripted testing.

In some cases, perhaps. It seems particularly applicable to mass-market consumer products, where the product will be used by a wide variety of people with no formal training in it. It can reduce (though not eliminate, if done right) the ‘up front’ work of testing, at the cost of additional documentation during or immediately after the testing. Done correctly, Exploratory testing can provide a good, customer-oriented test. But this is not to say that ‘scripted’ testing is obsolete.

What is ‘scripted’ testing? Frankly, I’m not completely sure. It would seem to be a test which is designed and documented before the test begins. During the test, the testing is ‘controlled’ by the documentation. The version I am familiar with consisted of a ‘Test Plan’ and ‘Testcases’.

The Test Plan is an overall document covering all the elements of the test. This would include a description of the thing to be tested, a list of the hardware, software, tools and personnel required, any assumptions or restrictions, schedule (or a pointer to it, if controlled externally), the types of testing to be done, assignments, Entry and Exit criteria and so on. It may contain complete testcases, testcase summaries, or pointers to where the testcases are/will be.
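As a rough sketch of what such a plan covers (the section names and contents below are my own invention for illustration, not any standard), its skeleton might be captured as structured data:

```python
# Hypothetical skeleton of a Test Plan, captured as a Python dict.
# Section names and values are illustrative only.
test_plan = {
    "product": "Payment gateway, release 5",
    "resources": {
        "hardware": ["POS terminal", "test server"],
        "software": ["gateway build 5.0.3", "card simulator"],
        "tools": ["testcase execution tool", "defect tracker"],
        "personnel": ["two testers", "one test lead"],
    },
    "assumptions": ["card simulator supports all rejection codes"],
    "restrictions": ["no live card numbers in the test environment"],
    "schedule": "see the project plan (controlled externally)",
    "test_types": ["function", "regression", "load"],
    "assignments": {"credit card testcases": "tester A"},
    "entry_criteria": ["build passes the smoke test"],
    "exit_criteria": ["all planned testcases run", "no open severity-1 defects"],
    "testcases": "summaries inline, or a pointer to the testcase repository",
}
```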

The Test Plan is generated by the test team, and reviewed and approved by development and management. This process tends to ensure that the testers understand the product to be tested, and are headed in the right direction to test it adequately.

Testcases are the actual tests to be run. The terms Testcase and Test Scenario can mean pretty much the same thing; in our lexicon, we considered something a computer could run a Testcase, and something written for a human to run a Test Scenario. Either should contain a name (for tracking and identification), author(s), date/version, a brief description of what it will test, any requirements, and perhaps other information the group thinks is necessary. This is the ‘header’ or ‘prolog’.
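A minimal sketch of that header as a data structure, assuming field names of my own choosing rather than any particular tool's format:

```python
from dataclasses import dataclass, field

@dataclass
class TestcaseHeader:
    # The 'header'/'prolog' a testcase or scenario carries.
    # Field names here are illustrative, not from any particular tool.
    name: str                  # for tracking and identification, e.g. "ECCREJ02"
    authors: list[str]         # author(s)
    date: str                  # date written or last updated
    version: str
    description: str           # brief statement of what it will test
    requirements: list[str] = field(default_factory=list)

header = TestcaseHeader(
    name="ECCREJ02",
    authors=["J. Tester"],
    date="2011-03-14",
    version="1.0",
    description="Reject a credit card transaction using an invalid card",
)
```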

A note about the name: as mentioned, it must ‘fit’ into whatever testcase execution tool is being used. Practically, 8 characters seems about right, with the leading letters providing a clue about what the testcase is for and the trailing numbers providing the ability to have several related testcases. Usually the Test Plan includes a definition of the meanings and possible values for each character position in the testcase name.

An example of a testcase name might be ECCREJ02, where the testcase is for release 5 (E), tests credit card transactions (CC) where the card is invalid or rejected (REJ), and is the second one in that set (02).
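As a sketch of how such a convention can be checked mechanically (the position layout is modeled on this one example, and the lookup tables are invented for illustration):

```python
import re

# Hypothetical layout for an 8-character testcase name such as "ECCREJ02":
# position 1 = release code, positions 2-3 = functional area,
# positions 4-6 = sub-area, positions 7-8 = sequence number.
# The real layout would be defined in the Test Plan.
RELEASES = {"E": "release 5"}
AREAS = {"CC": "credit card transactions"}
SUBAREAS = {"REJ": "invalid or rejected card"}

NAME_PATTERN = re.compile(r"^([A-Z])([A-Z]{2})([A-Z]{3})([0-9]{2})$")

def parse_testcase_name(name: str) -> dict:
    """Split a testcase name into its fields, or raise ValueError."""
    match = NAME_PATTERN.match(name)
    if not match:
        raise ValueError(f"{name!r} does not follow the naming convention")
    release, area, subarea, seq = match.groups()
    return {
        "release": RELEASES.get(release, release),
        "area": AREAS.get(area, area),
        "sub_area": SUBAREAS.get(subarea, subarea),
        "sequence": int(seq),
    }

print(parse_testcase_name("ECCREJ02"))
# {'release': 'release 5', 'area': 'credit card transactions',
#  'sub_area': 'invalid or rejected card', 'sequence': 2}
```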

The actual testcase or scenario should have instructions detailed enough that a tester can follow them, or be coded correctly so that a computer can execute them. In both cases, there should be ‘success’ criteria adequate to ensure that the tester or the computer can decide whether the testcase was a ‘Success’, a ‘Failure’, or possibly ‘Needs Analysis’.
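As a sketch of what the machine-executable form might look like, with the three verdicts made explicit (the gateway interface, card number, and decline reason below are invented for illustration):

```python
from enum import Enum

class Verdict(Enum):
    SUCCESS = "Success"
    FAILURE = "Failure"
    NEEDS_ANALYSIS = "Needs Analysis"

def run_eccrej02(gateway) -> Verdict:
    # Testcase ECCREJ02: a transaction on an invalid card must be rejected.
    # 'gateway' stands in for whatever interface the product exposes;
    # the card number and expected decline reason are invented.
    response = gateway.charge(card="4000000000000002", amount=10.00)
    if response.status == "DECLINED" and response.reason == "INVALID_CARD":
        return Verdict.SUCCESS        # explicit success criteria met
    if response.status == "APPROVED":
        return Verdict.FAILURE        # an invalid card was accepted
    return Verdict.NEEDS_ANALYSIS     # anything else needs a human to look
```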

Again, the testcases should be reviewed and approved by the developers (to ensure the testcases are valid and provide adequate coverage) and other testers (to ensure the testcases are practical to implement). Then to run the test, the testcases are submitted or the scenarios followed.

This sounds inflexible, and it can indeed be so if followed rigidly. However, good testers, working under good project management, are not only allowed but encouraged to engage in additional testing suggested by the testcases. This can approach being an Exploratory component of the test, and can significantly reduce the chances that a bug lurking outside the planned path is ignored.

Having the plan and testcases reviewed and approved (as in having people put their names to it formally) tends to reduce the chances that serious misinterpretations of the product to be tested are included in the documents.

Of course, if these components of a scripted test are NOT followed, the test can indeed be poor. Just as an Exploratory test with no ‘map’ of what to explore, and no record of what was actually done, can also be poor.

So, Exploratory testing can be good for some products and Scripted testing can be good for other products, and perhaps in some cases a combination of both would be optimum.