Ways of using the Test Suite in test projects

This page collects possible ways of using the RapidRep Test Suite. The terms used are based on the vocabulary of the ISTQB (International Software Testing Qualifications Board). The English glossary can be downloaded from the Downloads page of the German Testing Board.

Automated execution and evaluation of test cases

For dynamic tests, RapidRep starts the program (the system under test) and calls functions whose effect usually depends on test data. If necessary, the result produced by the system under test is saved so that the actual result is not overwritten by subsequent calls.

In static tests, the test results already exist and RapidRep has direct access to the actual results.
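
To make the idea of preserving actual results concrete, here is a minimal Python sketch; the file handling and naming scheme are assumptions for illustration, not RapidRep internals:

    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    def preserve_actual_result(output_file: str, archive_dir: str = "results") -> Path:
        """Copy the SUT's output to a run-specific file so later runs cannot overwrite it."""
        source = Path(output_file)
        target_dir = Path(archive_dir)
        target_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        target = target_dir / (source.stem + "_" + stamp + source.suffix)
        shutil.copy2(source, target)
        return target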

For the automated actual/target comparison, RapidRep can either use the target result if it is already available or determine it itself if it is unknown (see the sketch after this list):

  • Expected results are known => use as target result
  • Expected results are derived (e.g. migration) => determine target result
  • Expected results are unknown => RapidRep determines the target result ("test oracle"). See model-based testing.
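
A minimal Python sketch of these three cases, with invented names and placeholder logic standing in for the derivation and the oracle:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class TestCase:
        inputs: dict
        expected_result: Optional[Any] = None  # known in advance, if at all
        source_data: Optional[Any] = None      # present when the result can be derived (e.g. migration)

    def derive_from_source(source_data: Any) -> Any:
        """Placeholder: derive the target result, e.g. by transforming legacy data."""
        return source_data

    def oracle(inputs: dict) -> Any:
        """Placeholder for a rule-based reference model (see model-based testing)."""
        return sum(inputs.values())

    def resolve_target(tc: TestCase) -> Any:
        if tc.expected_result is not None:  # case 1: expected result is known
            return tc.expected_result
        if tc.source_data is not None:      # case 2: expected result can be derived
            return derive_from_source(tc.source_data)
        return oracle(tc.inputs)            # case 3: a test oracle determines it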

When performing the target/actual comparison, RapidRep always creates an Excel workbook that describes the test case run or defect in detail. The procedure complies with the relevant ISO/IEC/IEEE testing standards.

RapidRep executes individual test cases or the entire test case portfolio either via the Test Runner GUI or in batch mode by processing Testing CLI commands.

Exchange of test results with a test and defect management system

RapidRep can exchange test results with a test and defect management system. The Test Suite communicates with all supported systems (see Supported systems) in both directions via their respective APIs. The test result, including the Excel workbook, is attached as a new "run" to the corresponding test case. In case of failure, RapidRep can create a new defect or update an existing one; the attached Excel workbook contains all the details needed to analyse the error. This way, progress reporting in the test management system is always based on current data.
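
What such an exchange can look like is sketched below in Python against a purely hypothetical REST API; every URL, endpoint and field name is invented, since each supported system exposes its own API:

    import requests  # any HTTP client works; all endpoints and fields below are invented

    BASE_URL = "https://alm.example.com/api"  # hypothetical test management system

    def report_run(test_case_id: str, passed: bool, workbook_path: str) -> None:
        """Attach a new run (with the Excel workbook) to a test case; open a defect on failure."""
        with open(workbook_path, "rb") as workbook:
            run = requests.post(
                f"{BASE_URL}/testcases/{test_case_id}/runs",
                data={"status": "passed" if passed else "failed"},
                files={"attachment": workbook},
                timeout=30,
            )
        run.raise_for_status()
        if not passed:
            requests.post(
                f"{BASE_URL}/defects",
                json={"testCaseId": test_case_id,
                      "runId": run.json()["id"],
                      "summary": f"Test case {test_case_id} failed"},
                timeout=30,
            ).raise_for_status()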

This step is optional; in agile testing projects it is often used only at the system and acceptance test levels.

Implementation of the test evaluation logic

The integrated development environment of RapidRep allows a transparent and elegant implementation of the logic needed to evaluate test cases. All steps required to perform the test are stored in a versioned "report definition" within a repository database.

With the help of SQL, two scripting languages, Excel formulas and a rule engine, both simple and complex evaluation logic can be implemented in a way that is understandable and maintainable. The use of user-defined Excel workbooks as the output medium provides a clean separation of layout and content. The runtime environment of RapidRep (Test Runner or Testing CLI) calls the test implementation with the desired parameters.
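
As a rough illustration of SQL-driven evaluation logic (not the actual report definition mechanism), the following Python sketch compares actual and target values in a database; table and column names are assumptions:

    import sqlite3

    def evaluate(db_path: str, test_case_id: int) -> str:
        """Minimal actual/target comparison driven by SQL; the schema is invented."""
        con = sqlite3.connect(db_path)
        try:
            mismatches = con.execute(
                """SELECT a.record_id, a.value AS actual, t.value AS target
                     FROM actual_results a
                     JOIN target_results t ON t.record_id = a.record_id
                    WHERE a.test_case_id = ?
                      AND a.value <> t.value""",
                (test_case_id,),
            ).fetchall()
        finally:
            con.close()
        # any mismatching record makes the test case fail
        return "failed" if mismatches else "passed"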

Providing test case data

Often certain preconditions must be fulfilled before a test case can be performed. The related operations, such as deleting a data table or copying a file, are part of the implemented test evaluation logic.

RapidRep can automatically determine from a database the test data needed for a particular test case or for an entire test case portfolio. In general, a rule-based approach is applied which, for example, identifies records matching given criteria in a production-like database and stores them.
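
A minimal sketch of such a rule-based selection, with invented rules, tables and columns:

    import sqlite3

    # Each rule maps a test case to a SQL predicate over a production-like database.
    RULES = {
        "TC-001": "country = 'DE' AND balance < 0",  # overdrawn domestic accounts
        "TC-002": "opened_at >= '2024-01-01'",       # recently opened accounts
    }

    def select_test_data(db_path: str, test_case_id: str, limit: int = 10) -> list:
        """Identify and return records matching the rule for the given test case."""
        con = sqlite3.connect(db_path)
        try:
            # the predicates are fixed, trusted strings, so composing SQL here is safe
            query = f"SELECT * FROM accounts WHERE {RULES[test_case_id]} LIMIT ?"
            return con.execute(query, (limit,)).fetchall()
        finally:
            con.close()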

Using the RapidRep API, test data may also be obfuscated to meet audit requirements.
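
The RapidRep API itself is not shown here; the following sketch only illustrates the kind of deterministic obfuscation such a step can apply so that sensitive values are replaced consistently:

    import hashlib

    def obfuscate(value: str, salt: str = "project-secret") -> str:
        """Replace a sensitive value by a stable, irreversible pseudonym."""
        digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
        return digest[:12]  # same input -> same pseudonym, so joins across tables still work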

Test approach

V-model
The Test Runner assigns a parametrised test logic to each concrete test case in a test management system. This one-time step tells RapidRep how to determine the passed/failed result of each test case. The procedure applies to all test levels (see below).
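
Conceptually, the one-time assignment resembles a small mapping like the following sketch; IDs, logic names and parameters are invented for illustration:

    # One parametrised test logic can serve many concrete test cases.
    ASSIGNMENTS = {
        "TC-1001": {"logic": "compare_balances", "params": {"branch": "DE01"}},
        "TC-1002": {"logic": "compare_balances", "params": {"branch": "DE02"}},
        "TC-2001": {"logic": "check_interface",  "params": {"feed": "payments"}},
    }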

Agile
RapidRep also supports testing in agile software development projects by allowing ad hoc tests to be started directly from within the development environment. The quality achieved during a sprint, or at its end, can be determined by automated tests at the push of a button.

Test object

The RapidRep Test Suite can automatically execute test cases for the back-end. Test objects in the back-end do not have a dialog interface (GUI): no user interface is required to provide the input data (test data), call the program (system under test), or observe the results (actual results).

Test objects in the back-end include:

  • Programs that process structured input data and generate structured (interim) results
  • Structured business data of all types (raw data or results)

RapidRep can access more than 50 different data sources (see Supported test objects).

Test level

  • Module and module integration testing
  • Functional and technical system testing
  • Acceptance testing

Test technique

  • White-box testing
  • Grey-box testing
  • Black-box testing

Test type

  • Function testing
  • Interface testing
  • Regression testing
  • Maintenance testing
  • Migration testing
  • Mutation testing

Test method

Automated testing
The evaluation of a test case is always performed automatically, regardless of whether the test cases are started from the Test Runner GUI or via Testing CLI commands stored in a batch file.

Model-based testing
Especially for complex functional tests in which the desired outcome is difficult to determine, RapidRep is often used as a "test oracle". Rule sets evaluated by the integrated rule engine form a lean reference model from which RapidRep directly determines the expected results.
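
A minimal sketch of such a rule-based oracle, with invented conditions and outcomes:

    # A lean reference model as an ordered rule set: the first matching rule
    # yields the expected result.
    ORACLE_RULES = [
        (lambda tx: tx["amount"] > 10_000, "manual_review"),
        (lambda tx: tx["country"] != "DE", "extra_fee"),
        (lambda tx: True,                  "approved"),  # default rule
    ]

    def expected_result(tx: dict) -> str:
        for condition, outcome in ORACLE_RULES:
            if condition(tx):
                return outcome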