New release: RapidRep 5.8.7

RapidRep® is now available in version 5.8.7. We have compiled the improvements in detail for you.

Big Data Connectors

RapidRep covers a wide range of leading big data solutions. These include:

  • Impala
  • Hive
  • BIG SQL
  • Spark SQL

Impala, Hive and BIG SQL are interfaced through a pre-configured, ready-to-use JDBC driver configuration.
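
Since the drivers come pre-configured, no coding is needed in practice. As a purely hypothetical Java sketch of what such a JDBC configuration boils down to, the scheme, host, port and database below are invented example values, not RapidRep settings:

```java
// Hypothetical illustration only: RapidRep ships these drivers pre-configured,
// so no such code is required in practice. Scheme, host, port and database
// are invented example values.
import java.util.Properties;

public class BigDataJdbcConfig {

    // Assemble a JDBC URL of the form jdbc:<scheme>://<host>:<port>/<database>.
    static String jdbcUrl(String scheme, String host, int port, String database) {
        return "jdbc:" + scheme + "://" + host + ":" + port + "/" + database;
    }

    // Collect user and password into the Properties object a JDBC driver expects.
    static Properties credentials(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("hive2", "analytics-host", 10000, "default"));
    }
}
```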

Customers using Spark SQL can leverage a library and develop programs in Java or Scala.

Data Analysis Grid

Business analysts and data scientists can perform data analysis quickly and flexibly with the newly developed data grid. Numerous functions provide insights into the data with a single click or drag & drop. Features include:

  • grouping (columns and rows)
  • sorting
  • statistical methods (average, median, min, max, ...)
  • identification of duplicates
  • filtering
  • freezing columns and rows
  • coloring cells, manually or by condition
  • column-wide search for values with Ctrl-F
  • formatted exports to Excel

With the new grid you can smoothly analyze very long and very wide tables and quickly get an overview of the data.

Improvements in reading CSV

The speed of reading CSV files has been improved by up to 20% compared to the previous version. Running up to 10 CSV read operations in parallel also leads to significantly faster overall runtimes.

If CSV structures often change or the structure is not known until runtime, as of this version, the CSV structure used for reading can be created dynamically at runtime with the help of a few lines of code.
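
As a rough illustration of the idea (not RapidRep's actual API), a runtime-derived CSV structure can be sketched in a few lines of Java: the column list is taken from the header line, and each record is then mapped against it.

```java
// Hedged sketch, not RapidRep's API: derive the CSV column structure at
// runtime from the header line, then read each record into a map keyed
// by the discovered column names.
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

public class DynamicCsv {

    // The structure is simply the list of header fields, discovered at runtime.
    static List<String> readStructure(String headerLine, String separator) {
        return Arrays.asList(headerLine.split(Pattern.quote(separator), -1));
    }

    // Map one data line against the discovered structure.
    static Map<String, String> readRecord(List<String> columns, String line, String separator) {
        String[] values = line.split(Pattern.quote(separator), -1);
        Map<String, String> record = new LinkedHashMap<>();
        for (int i = 0; i < columns.size(); i++) {
            record.put(columns.get(i), i < values.length ? values[i] : "");
        }
        return record;
    }

    public static void main(String[] args) {
        List<String> columns = readStructure("id;name;amount", ";");
        System.out.println(readRecord(columns, "1;Alice;9.99", ";"));
    }
}
```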

The CSV Wizard now automatically detects headers, separators and text qualifiers. All options are of course still configurable in the dialog.
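
One plausible way such auto-detection can work, shown here as a hypothetical Java sketch rather than the wizard's actual logic, is to pick the candidate separator that occurs most often in the first line:

```java
// Hypothetical separator detection, not the wizard's actual algorithm:
// pick the candidate character that occurs most often in the first line.
public class SeparatorGuess {

    static char guessSeparator(String firstLine) {
        char[] candidates = {';', ',', '\t', '|'};
        char best = ',';
        int bestCount = -1;
        for (char candidate : candidates) {
            int count = 0;
            for (int i = 0; i < firstLine.length(); i++) {
                if (firstLine.charAt(i) == candidate) count++;
            }
            if (count > bestCount) {
                bestCount = count;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(guessSeparator("id;name;amount"));
    }
}
```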

Data Quality Solution (Blue Print)

The new version of the Rule Library and the supplied data quality examples can be used as a blueprint for customer-specific solutions.

The rules support row-based and set-based checks. Check logic in the form of parameterizable blocks increases maintainability.

KPIs can be defined individually, and their measurement can be configured at the level of individual rules.

Measurement series can optionally be stored via ETL, and error trends can be displayed graphically over time.

Improvements in ETL processes

RapidRep offers a new API function for cross-system Extract-Transform-Load (ETL) operations. It is 2 to 3 times faster than the previous approach of writing via the internal database engine. The so-called fetch size and the commit size are configurable at the script level.
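
The effect of a configurable commit size can be sketched as follows; this is an illustrative Java example of the batching idea only, not the RapidRep API:

```java
// Illustrative sketch of the batching idea behind a configurable commit size:
// rows are flushed in groups of commitSize, committing once per group
// instead of once per row. Not RapidRep's actual API.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EtlBatcher {

    // How many commits are needed for rowCount rows at the given commit size.
    static int commitsNeeded(int rowCount, int commitSize) {
        return (rowCount + commitSize - 1) / commitSize;
    }

    // Split the rows into batches; each batch would be written and committed as one unit.
    static List<List<Integer>> partition(List<Integer> rows, int commitSize) {
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += commitSize) {
            batches.add(rows.subList(i, Math.min(i + commitSize, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(commitsNeeded(2500, 1000));
        System.out.println(partition(Arrays.asList(1, 2, 3, 4, 5), 2));
    }
}
```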

The reading and writing of data within a system (e.g., Oracle) is automatically delegated to the appropriate database management system.

Configuration management for data sources

A report definition that uses a JDBC data source is typically set up for a particular environment, e.g. Dev, Test or Prod. The JDBC properties and the user/password often vary depending on the environment. Starting with RapidRep 5.8.7, there can be any number of alternatives for each "logical" JDBC data source. These alternatives can be selected in Report Runner or in batch mode.
In this way, shared report definitions can be distributed to different environments.
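
The idea of several alternatives per logical data source can be sketched like this in Java; the class, environment names and URLs are invented for illustration:

```java
// Illustrative sketch of environment-specific alternatives for one logical
// JDBC data source; names and URLs are invented, not RapidRep's API.
import java.util.HashMap;
import java.util.Map;

public class DataSourceAlternatives {

    // logical data source name -> (environment -> JDBC URL)
    private final Map<String, Map<String, String>> alternatives = new HashMap<>();

    void register(String logicalName, String environment, String jdbcUrl) {
        alternatives.computeIfAbsent(logicalName, k -> new HashMap<>())
                    .put(environment, jdbcUrl);
    }

    // Pick the concrete settings for the environment chosen at run time.
    String resolve(String logicalName, String environment) {
        Map<String, String> byEnvironment = alternatives.get(logicalName);
        if (byEnvironment == null || !byEnvironment.containsKey(environment)) {
            throw new IllegalArgumentException(
                "No alternative for " + logicalName + " in " + environment);
        }
        return byEnvironment.get(environment);
    }

    public static void main(String[] args) {
        DataSourceAlternatives sources = new DataSourceAlternatives();
        sources.register("DWH", "Dev", "jdbc:h2:mem:dev");
        sources.register("DWH", "Prod", "jdbc:h2:mem:prod");
        System.out.println(sources.resolve("DWH", "Prod"));
    }
}
```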

SQL window functions in the internal engine

After intensive testing, the H2 database engine used internally by RapidRep was replaced with a current version of H2. As a result, H2 window functions such as RANK, LEAD, LAG, NTILE and NTH_VALUE can now be used when creating queries.
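
For example, a window-function query of the kind now possible with the updated engine might look like this (the `orders` table and its columns are invented for illustration):

```sql
-- Illustrative only: rank each customer's orders by amount and compare
-- each order's amount with the previous one, using H2 window functions.
SELECT customer_id,
       order_date,
       amount,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank,
       LAG(amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS previous_amount
FROM orders;
```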

This and other information about available H2 features can be found on the official H2 website.

Productivity improvements in the designer

For those who develop solutions, we've made the integrated development environment even more productive through a variety of measures. Here are some examples.

  • With drag & drop, all scripts or functions in a folder can be copied at once, and the new objects are consistently renamed in the process. Naming conventions such as prefixes or suffixes can be specified in the dialog.
  • SQL tabs can be extracted as separate windows. This simplifies parallel analysis of data, e.g. on multiple monitors.
  • For functions with multiple arguments (for example, from the Rule Library), the editor shows which parameter of which data type is expected, both in the completion proposal and when clicking on the call site.
  • Writing SQL statements is now simplified via special SQL templates: e.g. type 'S', then Ctrl + Space and Enter, and 'SELECT * FROM Table' appears.
  • The database browser can generate DML and DDL statements via the context menu, e.g. for SELECT, INSERT, UPDATE, DELETE, DROP.
  • The metadata from SQL can be copied in blocks and, for example, transferred to a set of rules.
  • The Code Editor now also supports the so-called block mode, in which a block spanning several columns and rows can be selected and changed at the same time. It is activated by holding the Alt key and dragging.
  • Improved version management: recognize directly if you are working with the latest version; warning when creating a new version, if not based on the latest
  • Search and replace in SQL code now offers "Advanced (\n, \r, \t)" and "RegExp capture group" options
  • Extension in the File API: archive files automatically as ZIP, 7z, TAR, etc.
  • Faster navigation: Ctrl + click on a function or variable jumps to that function or variable
  • Ctrl + click on a table name in a SQL statement opens a new window containing 'SELECT * FROM table'
  • Performance optimizations when loading the DB browser
  • Performance optimization in rendering (now with Apache POI)
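
Regarding the File API extension for archiving: RapidRep's own function signatures are not shown here, but the underlying ZIP case can be sketched with plain `java.util.zip`:

```java
// Sketch of automatic archiving with plain java.util.zip; RapidRep's File API
// wraps this kind of logic, but the function shown here is invented.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ArchiveSketch {

    // Pack a single named payload into an in-memory ZIP archive.
    static byte[] zipSingleFile(String entryName, byte[] content) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buffer)) {
            zip.putNextEntry(new ZipEntry(entryName));
            zip.write(content);
            zip.closeEntry();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) {
        byte[] archive = zipSingleFile("report.csv",
                "a;b\n1;2\n".getBytes(StandardCharsets.UTF_8));
        // Every ZIP file starts with the local-file-header signature "PK".
        System.out.println("" + (char) archive[0] + (char) archive[1]);
    }
}
```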

Visualization of executions

In addition to the static graph depicting script dependencies, SQL Visualization provides another graphical tool: it creates a graph during script execution that fully represents all steps and dependencies.

The graph contains the following elements:

  • user variables
  • used tables and connections
  • models
  • rule sets
  • scripts
  • rendering targets

The graph can be filtered by various criteria and saved as an image. Perfect for understanding a script, and perfect for embedding in documentation!

The following graphs are preconfigured:

  • data flow
  • design flow
  • control flow
  • performance
  • complete

Further adjustments and feature requests

You can always send us your own suggestions for new features using the contact options on the homepage.

We look forward to your feedback, wishes, and suggestions for additional features!
