The function test

Programs transform data according to defined rules. Whether a program reacts correctly to test data and parameters is a key question for its testing and release. Even with systematic test case determination, the combinations of valid test data and parameters can lead to a very large number of test cases, forcing a risky prioritisation when they are evaluated manually. Test automation can evaluate a great many test cases mechanically at constantly low cost. As a result, expenses decrease and faster release cycles become possible.

This requires three things:

  1. The result of the IT implementation must be observable and up to date (the "actual result")
  2. The test automation must know the expected result or be able to calculate it (the "target result")
  3. Everyone involved must be able to rely on the test automation's results ("trust")

Test automation with RapidRep

Determining the actual results

For each test, RapidRep needs to know the actual result relevant to the respective run. As a rule, data processing programs save their results in tables, so RapidRep can often access the actual result via SQL queries. Since RapidRep supports not only all common databases but also the most important file formats (e.g. XML or CSV), this step can usually be realised very easily. In decentralised data processing, RapidRep can call the SuT (system under test) with the requested parameters. If the SuT handles the call without errors and stores the results synchronously, they can be used directly for the comparison. The situation is different if the call was aborted with an error or the results only become available later (asynchronous processing). RapidRep can therefore check before the target/actual comparison whether the preconditions for a test are met.
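The idea of fetching the actual result from a result table can be sketched as follows. This is an illustrative example only, not RapidRep's API; the table and column names are hypothetical, and an in-memory SQLite database stands in for the SuT's data store.

```python
import sqlite3

# Hypothetical result table, standing in for the SuT's persisted output.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (customer_id TEXT, amount REAL)")
conn.execute("INSERT INTO results VALUES ('C-100', 250.0)")

# Parameters identify the test case; the SQL query returns the actual result.
row = conn.execute(
    "SELECT amount FROM results WHERE customer_id = ?", ("C-100",)
).fetchone()
actual = row[0] if row is not None else None  # None: precondition not met
print(actual)  # 250.0
```

A missing row (`actual is None`) corresponds to the case where the preconditions for the test are not met, which should be detected before the target/actual comparison.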

Determining the target results

The expected result for each test case can depend on many factors, and the way RapidRep determines it differs depending on the use case and test procedure. Every test with RapidRep requires a so-called report definition, which a test analyst creates with the RapidRep Designer. A report definition contains the test evaluation logic, including an embedded Excel template. The "model kit" in a report definition supports several methods of determining the target result:

  • Creation of SQL statements and access to data in over 50 different data sources
  • Use of parameters that specify the test case, e.g. customer ID, order, etc.
  • Definition, integration and evaluation of rules for model-based testing ("test oracle")
  • Use of the built-in scripting language to perform calculations or to generate SQL statements
  • Use of predefined results or partial results available per test case, for example in the test management system
  • Formulas and VBA scripts, which can form part of the test evaluation logic in the embedded Excel template
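The rule-based "test oracle" approach from the list above can be illustrated with a minimal sketch. The business rules and function names here are invented for the example; the point is that the target result is derived from declared rules rather than hard-coded per test case.

```python
# Hypothetical rule-based test oracle: the expected (target) result is
# computed from business rules, so every test case shares the same logic.
def expected_discount(order_total: float, is_premium: bool) -> float:
    """Example rules: premium customers get 10%; orders over 1000 get 5%."""
    if is_premium:
        return round(order_total * 0.10, 2)
    if order_total > 1000:
        return round(order_total * 0.05, 2)
    return 0.0

print(expected_discount(1200.0, False))  # 60.0
print(expected_discount(500.0, True))    # 50.0
```

Because the rules are stated explicitly, they can also be output alongside the results, which supports the traceability discussed later under "Trust".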

Target/actual comparison

At evaluation time, the Excel template embedded in the report definition is filled with the data of the test execution (target/actual). It has proven to be a best practice to present the target/actual comparison on one sheet and to output supporting information, which aids business analysis and comprehensibility, on further sheets. It makes sense, for example, to output the parameters and rules used. In case of a defect, detailed business data additionally facilitates reproducibility.

Embedding the test automation in the test process

Many companies employ test and defect management systems to create test cases, manage defects and track the state of test execution. The RapidRep Test Suite supports numerous test and defect management systems and exchanges the results of the test automation with them. A test performed with the RapidRep Test Suite creates a test proof in the test management system, including the generated Excel workbook, and in case of an error it creates or updates a defect.

Trust

RapidRep can thus save costs and reduce test duration. But are the results reliable? Yes, because every test case result is completely transparent and can be reproduced at any time. The Excel workbooks, which document the whole test completely and in accordance with IEEE 829, are available in the test management system. In the case of rule-based target result determination, the IT department and management are involved early in defining the test logic and can rely on these rules being applied, since they are also output in the workbooks. Results or partial results that exist statically or as formulas for each test case influence the test logic directly and comprehensibly.

Conclusion

It works: RapidRep determines actual and target results automatically, compares them transparently, and stores the proof in the test management system, reducing costs and test duration while preserving trust in the results.