Incremental Batch Reader
You can use Incremental Batch Reader to get new data from a supported database or data warehouse. The most common use case for Incremental Batch Reader is to continuously update targets with new source data after the initial load using Database Reader has completed, particularly when a CDC reader is not available for the source or when you choose not to use the available CDC reader.
Incremental Batch Reader supports the following sources:
BigQuery (see BigQuery continuous incremental replication)
Db2 for z/OS (see Db2 for z/OS continuous incremental replication)
HP NonStop SQL/MX (and SQL/MP via aliases in SQL/MX) versions 3.2.1 to 3.8.1
MariaDB (see MariaDB continuous incremental replication)
MySQL (see MySQL continuous incremental replication)
Oracle (see Oracle Database continuous incremental replication)
PostgreSQL (see PostgreSQL continuous incremental replication)
Snowflake (see Snowflake continuous incremental replication)
SQL Server (see SQL Server continuous incremental replication)
Sybase ASE versions 16.0 and later (Linux)
Teradata
YugabyteDB
To read incrementally from Spanner, see Spanner.
Three properties control incremental reading, as illustrated in the sketch after this list:
Check Column specifies the column Incremental Batch Reader will use to identify which data is new.
Start Position specifies the Check Column value from which reading will start.
Polling Interval controls how often Incremental Batch Reader checks for new data. The first batch will begin with the Start Position value. Each successive batch will begin with the table rows immediately after the last row included in the previous batch.
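For example, here is a minimal sketch combining the three properties. The connection values, table, and column names are illustrative placeholders, and the no-space property names follow the TQL convention used in the sample code later in this section. It polls MYSCHEMA.TABLE1 every 60 seconds for rows whose UUID value is 1234 or greater:

```
CREATE SOURCE IncrementalSource USING IncrementalBatchReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: '192.0.2.0:1521:orcl',
  Tables: 'MYSCHEMA.TABLE1',
  -- Check Column: new rows are identified by increasing UUID values
  CheckColumn: 'MYSCHEMA.TABLE1=UUID',
  -- Start Position: the first batch begins at UUID 1234
  StartPosition: 'MYSCHEMA.TABLE1=1234',
  -- Polling Interval: check the source for new rows every 60 seconds
  PollingInterval: '60sec'
)
OUTPUT TO IncrementalSourceOutput;
```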
Warning
The drivers for BigQuery, Oracle, Snowflake, and SQL Server are bundled with the Forwarding Agent. For all other databases, when this adapter is deployed to a Forwarding Agent, the appropriate JDBC driver must be installed as described in Installing third-party drivers in the Forwarding Agent.
Differences between Incremental Batch Reader and CDC readers
SQL CDC readers capture INSERT, UPDATE, and DELETE operations, so if a writer supports Merge mode, all changes in the source can be replicated in the target. Incremental Batch Reader treats both INSERT and UPDATE operations as inserts and does not capture deletes, so a writer in Merge mode will not be able to replicate all source changes in the target, and in Append Only mode deletes in the source will not be captured in the target. For example, if a source row is inserted, updated, and then deleted, an Append Only target will contain two rows for it (one for the insert and one for the update), and the delete will leave both in place.
Incremental Batch Reader properties
This section applies when using Incremental Batch Reader with HP NonStop, Sybase, or Teradata. For other sources, see:
BigQuery (see BigQuery continuous incremental replication)
Db2 for z/OS (see Db2 for z/OS continuous incremental replication)
MariaDB (see MariaDB continuous incremental replication)
MySQL (see MySQL continuous incremental replication)
Oracle (see Oracle Database continuous incremental replication)
PostgreSQL (see PostgreSQL continuous incremental replication)
Snowflake (see Snowflake continuous incremental replication)
SQL Server (see SQL Server continuous incremental replication)
YugabyteDB
property | type | default value | notes |
|---|---|---|---|
Check Column | String | | Specify the name of the column containing the start position value. The column must have an integer or timestamp data type (such as a creation timestamp or an employee ID number) and the values must be unique and continuously increasing. For Db2 for z/OS, specify a BIGINT, INTEGER, or TIMESTAMP column. Use the syntax `<table name>=<column name>`: for example, `MYSCHEMA.TABLE1=UUID`. If you specify multiple tables in the Tables property, you may specify different check columns for the tables, separated by semicolons. In this case, you may specify the check column for the remaining tables using wildcards: for example, `MYSCHEMA.TABLE1=UUID;MYSCHEMA.%=ID` (see the multi-table sketch under Incremental Batch Reader sample code). When rows have the same Check Column value, they will be buffered in memory until a row with a different Check Column value is read. To avoid memory usage spikes, do not set Check Column so that many rows will have the same value. For example, a check column containing the day's date (rather than a timestamp) might result in thousands or millions of rows with the same value. |
Connection Pool Size | Integer | 1 | Specify the maximum number of connections that Striim will use. This is the maximum number of tables Striim will be able to read simultaneously, provided that ThreadPool Size is set to the same or a higher number. |
Connection Profile Name | enum | | Not applicable to HP NonStop, Sybase, or Teradata. |
Connection URL | String | | See Connection URL notes in Database Reader. |
Database Provider Type | String | Default | Not applicable to HP NonStop, Sybase, or Teradata. |
Excluded Tables | String | | Data for any tables specified here will not be returned. For example, if the Tables property uses a wildcard, you can list specific matching tables here to omit them from the results. |
Fetch Size | Integer | 100 | Sets the maximum number of records to be fetched from the database in a single JDBC method execution (see the discussion of fetchsize in the documentation for your JDBC driver). |
Password | encrypted password | | The password for the specified user. See Encrypted passwords. |
Polling Interval | String | 120sec | This property controls how often the adapter reads from the source. By default, it checks the source for new data every two minutes (120 seconds). If there is new data, the adapter reads it and sends it to the adapter's output stream. The value may be specified in seconds (as in the default) or milliseconds (for example, 500ms). |
Return DateTime As | String | | If no value is specified, timestamp values are returned as Joda timestamps. Set to String to return them as strings instead. |
SSL Config | String | | Not applicable to HP NonStop, Sybase, or Teradata. |
Start Position | String | %=0 | The value in the specified check column from which Striim will start reading. Striim will read rows in which the check column's value is equal to or greater (for timestamps, the same or later) than this value and skip the other rows. Since Check Column may specify multiple tables, you must specify the corresponding table name or wildcard for each value. With the default value %=0, reading starts from the beginning of every table (all rows with a check column value of 0 or greater). |
Tables | String | | Specify the table(s) or view(s) to be read using the syntax `<schema name>.<table name>`. You may specify multiple tables and views as a list separated by semicolons or with the % wildcard: for example, MYSCHEMA.% would read all tables in the MYSCHEMA schema. |
ThreadPool Size | Integer | 5 | Specify the maximum number of threads the adapter will use. This is the maximum number of tables Striim will be able to read simultaneously, provided that Connection Pool Size is set to the same or a higher number. |
Use Connection Profile | Boolean | | Not applicable to HP NonStop, Sybase, or Teradata. |
Username | String | | Specify the DBMS user name the adapter will use to log in to the server specified in Connection URL. This user must have SELECT permission or privileges on the tables specified in the Tables property. |
Incremental Batch Reader sample code
The following would read rows from TABLE1 with UUID column values equal to or greater than 1234:
```
CREATE SOURCE OraSource USING IncrementalBatchReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: '192.0.2.0:1521:orcl',
  Tables: 'MYSCHEMA.TABLE1',
  CheckColumn: 'MYSCHEMA.TABLE1=UUID',
  StartPosition: 'MYSCHEMA.TABLE1=1234'
)
OUTPUT TO OraSourceOutput;
```
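The multi-table syntax described in the properties table could be combined as in the following sketch. The table and column names are hypothetical, and the no-space ExcludedTables property name follows the same TQL convention as the sample above. It reads every table in MYSCHEMA except STAGING, using UUID as the check column for TABLE1 and ID for all other tables:

```
CREATE SOURCE MultiTableSource USING IncrementalBatchReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: '192.0.2.0:1521:orcl',
  -- The % wildcard selects all tables in the schema
  Tables: 'MYSCHEMA.%',
  -- Tables matched by the wildcard can be omitted by name
  ExcludedTables: 'MYSCHEMA.STAGING',
  -- Semicolon-separated check columns; the wildcard entry covers
  -- every table not listed explicitly
  CheckColumn: 'MYSCHEMA.TABLE1=UUID;MYSCHEMA.%=ID',
  StartPosition: 'MYSCHEMA.TABLE1=1234;MYSCHEMA.%=0'
)
OUTPUT TO MultiTableSourceOutput;
```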
If IncrementalBatchReader sends duplicate records to a DatabaseWriter target, by default the application will terminate. This can happen, for example, when recovery is enabled (see Recovering applications), there are multiple rows with the same CheckColumn timestamp, and only some of them were written before a system failure. To avoid this, specify the appropriate IgnorableException in the target: for example, for CosmosDBWriter, RESOURCE_ALREADY_EXISTS.
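For example, a sketch of a CosmosDBWriter target configured this way. The endpoint, key, and collection values are placeholders, and the IgnorableExceptionCode property name is an assumption based on other Striim writers; check the Cosmos DB Writer documentation for the exact property name and supported codes:

```
CREATE TARGET CosmosTarget USING CosmosDBWriter (
  -- Placeholder connection values, not real credentials
  ServiceEndpoint: 'https://example.documents.azure.com:443/',
  AccessKey: '********',
  Collections: 'mydb.mycollection',
  -- Assumed property: skip duplicate-insert errors instead of
  -- terminating the application
  IgnorableExceptionCode: 'RESOURCE_ALREADY_EXISTS'
)
INPUT FROM OraSourceOutput;
```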
Incremental Batch Reader data type support and correspondence
Incremental Batch Reader supports the same data types as Database Reader. See Database Reader programmer's reference.