Oracle Database programmer's reference
Oracle Reader properties
Before you can use this adapter, Oracle must be configured as described in the parts of Configuring Oracle to use Oracle Reader that are relevant to your environment.
Note
Before deploying an Oracle Reader application, see Runtime considerations when using Oracle Reader.
Striim provides wizards for creating applications that read from Oracle and write to various targets. See Creating an application using a wizard for details.
The adapter properties are:
property | type | default value | notes |
---|---|---|---|
Bidirectional Marker Table | String | | When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive. |
CDDL Action | enum | Process | 18c and earlier only. Visible in Flow Designer only when CDDL Capture is enabled. See Handling schema evolution. |
CDDL Capture | Boolean | False | 18c and earlier only: enables schema evolution (see Handling schema evolution). Visible in Flow Designer only when Dictionary Mode is Offline Catalog. When set to True, Dictionary Mode must be set to Offline Catalog and Support PDB and CDB must be False. Do not use Find and Replace DDL unless instructed to by Striim support. |
Committed Transactions | Boolean | True | LogMiner only: by default, only committed transactions are read. Set to False to read both committed and uncommitted transactions. |
Compression | Boolean | False | If set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See Oracle Reader example output for examples. Set to True when Oracle Reader's output stream is the input stream of Cassandra Writer. |
Connection Retry Policy | String | timeOut=30, retryInterval=30, maxRetries=3 | With the default setting, connection attempts time out after 30 seconds (timeOut), Striim waits 30 seconds between attempts (retryInterval), and gives up after three unsuccessful retries (maxRetries). Negative values are not supported. |
Connection URL | String | | The connection URL, in the format &lt;host name or IP address&gt;:&lt;port&gt;:&lt;SID&gt; (for example, 203.0.113.49:1521:orcl). When connecting through an SSH tunnel (see Using an SSH tunnel to connect to a source or target), specify the IP address of the tunnel in place of the host name. If using Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.) If using Amazon RDS for Oracle, use the RDS instance endpoint as the host name. To use Oracle native network encryption, append the required encryption options (such as encryption_types_client) to the connection URL. |
Database Role | String | PRIMARY | Leave set to the default value of PRIMARY except when you are reading from a standby (see Reading from a standby). |
Dictionary Mode | String | OnlineCatalog | Leave set to the default of OnlineCatalog except when CDDL Capture is True or you are reading from a standby (see Reading from a standby). |
Excluded Tables | String | | Data for any tables specified here will not be returned. |
Exclude Users | String | | Optionally, specify one or more Oracle user names, separated by semicolons, whose transactions will be omitted from OracleReader output. Possible uses include omitting the changes applied by the writer in a bidirectional replication setup (see Bidirectional replication). |
External Dictionary File | String | | Visible in Flow Designer only when Database Role is PHYSICAL_STANDBY. Leave blank except when reading from a standby (see Reading from a standby). |
Fetch Size | Integer | 1000 | LogMiner only: the number of records the JDBC driver will return at a time. For example, if Oracle Reader queries LogMiner and there are 2300 records available, the JDBC driver will return two batches of 1000 records and one batch of 300. |
Filter Transaction Boundaries | Boolean | True | With the default value of True, BEGIN and COMMIT operations are filtered out. Set to False to include BEGIN and COMMIT operations. |
Ignorable Exception | String | | Do not change unless instructed to by Striim support. |
Partial Record Policy | String | | This property enables fetching column values from the database tables when the values are partially available or not available in the database transaction log. It allows the adapter to query and fetch supported and unsupported columns from the source database as needed. The LookupUsing parameter can be used to fetch the record from flashback or from the latest snapshot of the table. The value should be SCN or PKEY for Oracle, or PKEY for MS SQL Server. Use SCN if Flashback is enabled in the Oracle database. In that case, the Partial Record Policy fetches the record from the flashback database based on the SCN value, using an "AS OF SCN" clause in a SELECT query with the primary key column in the WHERE clause. This is the default behavior of the Partial Record Policy for Oracle. If no SCN is found for the table in the flashback database because the retention period has expired, the Partial Record Policy automatically switches to PKEY and fetches the record using the primary key. Use PKEY if you need to fetch the record based on the primary key. See Using the Partial Record Policy to fetch unsupported columns for more information. |
Password | encrypted password | | The password specified for the username (see Encrypted passwords). |
Queue Size | Integer | | |
Quiesce Marker Table | String | QUIESCEMARKER | See Creating the QUIESCEMARKER table for Oracle Reader. Modify the default value if the quiesce marker table is not in the schema associated with the user specified in the Username. Three-part CDB / PDB names are not supported in this release. |
Send Before Image | Boolean | True | Set to False to omit before images (the before value in output WAEvents will be null). |
Set Conservative Range | Boolean | False | If reading from Oracle 19c, you have long-running transactions, and parallel DML mode is enabled (see Enable Parallel DML Mode), set this to True. |
SSL Config | String | | If using SSL with the Oracle JDBC driver, specify the required properties. Examples: If using SSL for encryption only: oracle.net.ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5, SSL_DH_anon_WITH_DES_CBC_SHA) If using SSL for encryption and server authentication: javax.net.ssl.trustStore=/etc/oracle/wallets/ewallet.p12; javax.net.ssl.trustStoreType=PKCS12; javax.net.ssl.trustStorePassword=******** If using SSL for encryption and both server and client authentication: javax.net.ssl.trustStore=/etc/oracle/wallets/ewallet.p12; javax.net.ssl.trustStoreType=PKCS12; javax.net.ssl.trustStorePassword=********; javax.net.ssl.keyStore=/opt/Striim/certs; javax.net.ssl.keyStoreType=JKS; javax.net.ssl.keyStorePassword=******** |
Start SCN | String | | Optionally specify an SCN from which to start reading (see Replicating Oracle data to another Oracle database for an example). Do not specify a start point prior to when supplemental logging was enabled. If you are using schema evolution (see Handling schema evolution), set a Start SCN only if you are sure that there have been no DDL changes after that point. See also Switching from initial load to continuous replication. |
Start Timestamp | String | | With the default value of null (blank), only new (based on current system time) transactions are read. Specify a timestamp to read transactions that began after that time. The format is DD-MON-YYYY HH:MI:SS. For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00. Do not specify a start point prior to when supplemental logging was enabled. If you are using schema evolution (see Handling schema evolution), set a Start Timestamp only if you are sure that there have been no DDL changes after that point. |
Support PDB and CDB | Boolean | False | Set to True if reading from CDB or PDB. |
Tables | String | | The table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use Oracle Reader) in the format &lt;schema&gt;.&lt;table&gt;. (If using Oracle 12c or later with PDB, use three-part names: &lt;pdb&gt;.&lt;schema&gt;.&lt;table&gt;.) Names are case-sensitive. Do not modify this property when CDDL Capture is True or recovery is enabled for the application. You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard (for example, myschema.%). Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate. Table and column identifiers (names) may not exceed 30 bytes: when using one-byte character sets, the limit is 30 characters; when using two-byte character sets, the limit is 15 characters. Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim. See also Specifying key columns for tables without a primary key. |
Transaction Buffer Disk Location | String | .striim/LargeBuffer | Visible in Flow Designer only when Transaction Buffer Type is Disk. See Transaction Buffer Type. |
Transaction Buffer Spillover Size | String | 100MB | When Transaction Buffer Type is Disk, the amount of memory that Striim will use to hold each in-process transaction before buffering it to disk. You may specify the size in MB or GB. When Transaction Buffer Type is Memory, this setting has no effect. |
Transaction Buffer Type | String | Disk | When Striim runs out of available Java heap space, the application will terminate. With Oracle Reader, typically this will happen when a transaction includes millions of INSERT, UPDATE, or DELETE events with a single COMMIT, at which point the application will terminate with an error message such as "increase the block size of large buffer" or "exceeded heap usage threshold." To avoid this problem, with the default setting of Disk, when a transaction exceeds the Transaction Buffer Spillover Size, Striim will buffer it to disk at the location specified by the Transaction Buffer Disk Location property, then process it when memory is available. When the setting is Disk and recovery is enabled (see Recovering applications), after the application halts, terminates, or is stopped the buffer will be reset, and during recovery any previously buffered transactions will restart from the beginning. To disable transaction buffering, set Transaction Buffer Type to Memory. |
Username | String | | The username created as described in Configuring Oracle to use Oracle Reader. If using Oracle 12c or later with PDB, specify the CDB user (c##striim). |
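The following sketch shows how several of these properties look in TQL. Property names are written without spaces (for example, ExcludedTables, FetchSize), following the pattern of the sample application later in this topic; the connection details, schema name, and property values are placeholders rather than recommendations:

CREATE SOURCE OracleCDCExample USING OracleReader (
  Username: 'striim',
  Password: 'passwd',
  ConnectionURL: '203.0.113.49:1521:orcl',
  Tables: 'myschema.%',
  ExcludedTables: 'myschema.TEMP%',
  Compression: true,
  FetchSize: 1000
)
OUTPUT TO OracleCDCStream;

In this sketch, all tables in myschema are read except those matching myschema.TEMP%, update and delete events carry only key and modified columns (Compression), and LogMiner results are fetched 1000 records at a time.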
Specifying key columns for tables without a primary key
If a primary key is not defined for a table, the values for all columns are included in UPDATE and DELETE records, which can significantly reduce performance. You can work around this by setting the Compression property to True and including the KeyColumns option in the Tables property value. The syntax is:
Tables:'<table name> KeyColumns(<COLUMN 1 NAME>,<COLUMN 2 NAME>,...)'
The column names must be uppercase. Specify as many columns as necessary to define a unique key for each row. The columns must be supported (see Oracle Reader and OJet data type support and correspondence) and specified as NOT NULL.
If the table has a primary key, or the Compression property is set to False, KeyColumns will be ignored.
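For example, here is a sketch of a source that supplies KeyColumns for a table with no primary key (the table and column names, MYSCHEMA.ORDERS with ORDER_ID and ORDER_DATE, are purely illustrative):

CREATE SOURCE OracleNoPKIn USING OracleReader (
  Username: 'striim',
  Password: 'passwd',
  ConnectionURL: '203.0.113.49:1521:orcl',
  Compression: true,
  Tables: 'MYSCHEMA.ORDERS KeyColumns(ORDER_ID,ORDER_DATE)'
)
OUTPUT TO OrdersCDCStream;

Together ORDER_ID and ORDER_DATE must uniquely identify each row. With Compression set to True, UPDATE records then include those key columns plus any modified columns, and DELETE records include only the key columns.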
OJet properties
Before you can use this adapter, Oracle must be configured as described in Configuring Oracle and Striim to use OJet.
Note
Before deploying OJet on a Forwarding Agent, install the Oracle Instant Client as described in Install the Oracle Instant Client in a Forwarding Agent.
Before deploying an OJet application, see Runtime considerations when using OJet.
Striim provides wizards for creating applications that read from Oracle and write to various targets. See Creating an application using a wizard for details.
The adapter properties are:
property | type | default value | notes |
---|---|---|---|
Bidirectional Marker Table | String | | When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive. This property appears only if your Striim cluster has been licensed for bidirectional support. |
CDDL Action | enum | Process | Visible in Flow Designer only when CDDL Capture is enabled. See Handling schema evolution. |
CDDL Capture | Boolean | False | See Handling schema evolution. Do not use Find and Replace DDL unless instructed to by Striim support. |
Compression | Boolean | False | If set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See Oracle Reader example output for examples. Set to True when OJet's output stream is the input stream of Cassandra Writer. |
Connection Retry Policy | String | timeOut=30, retryInterval=30, maxRetries=3 | With the default setting, connection attempts time out after 30 seconds (timeOut), Striim waits 30 seconds between attempts (retryInterval), and gives up after three unsuccessful retries (maxRetries). Negative values are not supported. |
Connection URL | String | | The connection URL, in the format &lt;host name or IP address&gt;:&lt;port&gt;:&lt;SID&gt;. When connecting through an SSH tunnel (see Using an SSH tunnel to connect to a source or target), specify the IP address of the tunnel in place of the host name. If using Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.) If Downstream Capture is enabled, specify the connection URL for the downstream database. Otherwise, specify the connection URL for the primary database. If the specified connection URL is invalid, deployment will fail with an "ORA-12170: TNS:Connect timeout occurred" error. Note that this error will also occur if Striim is unable to connect to Oracle for any other reason, such as a network outage or the database being offline. To use Oracle native network encryption, see Using Oracle native network encryption with OJet. Do not use the encryption_types_client Connection URL option as you would in Oracle Reader. |
Downstream Capture | Boolean | False | If set to True, downstream capture is enabled. |
Downstream Capture Mode | String | None | Visible in Flow Designer only when Downstream Capture is True. |
Excluded Tables | String | | Data for any tables specified here will not be returned. |
Filter Transaction Boundaries | Boolean | True | |
OJet Config | String | | Optionally specify a JSON string that configures the OJet reader components. All configuration values are disabled by default. It uses the following format in Flow Designer: { "&lt;Component name&gt;" : [ "&lt;Parameter name:value&gt;" ], ... } In TQL, the quotes must be escaped with backslashes: \"&lt;Component name&gt;\" : [\"&lt;Parameter name:value&gt;\"] For example: { "OJET":[ "queuesize:20000", "open_txn_delay_time:60000" ], "CAPTURE":[ "fetch_lcr_attributes:true" ] } |
Partial Record Policy | String | | This property enables fetching column values from the database tables when the values are partially available or not available in the database transaction log. It allows the adapter to query and fetch supported and unsupported columns from the source database as needed. The LookupUsing parameter can be used to fetch the record from flashback or from the latest snapshot of the table. The value should be SCN or PKEY for Oracle, or PKEY for MS SQL Server. Use SCN if Flashback is enabled in the Oracle database. In that case, the Partial Record Policy fetches the record from the flashback database based on the SCN value, using an "AS OF SCN" clause in a SELECT query with the primary key column in the WHERE clause. This is the default behavior of the Partial Record Policy for Oracle. If no SCN is found for the table in the flashback database because the retention period has expired, the Partial Record Policy automatically switches to PKEY and fetches the record using the primary key. Use PKEY if you need to fetch the record based on the primary key. See Using the Partial Record Policy to fetch unsupported columns for more information. |
Password | encrypted password | | The password for the Oracle user specified in Username. |
Primary Database Connection URL | String | | Visible in Flow Designer only when Downstream Capture is True. If Downstream Capture is enabled, specify the connection URL for the database that is shipping the redo logs to the downstream database. In an Active Data Guard environment, this may be the primary database or a standby. Do not use a property variable for this property. |
Primary Database Link Address | String | | Visible in Flow Designer only when Downstream Capture is True. If the downstream database connects to the primary database using a different address than the one specified in Primary Database Connection URL, specify it here. |
Primary Database Password | encrypted password | | Visible in Flow Designer only when Downstream Capture is True. If Downstream Capture is enabled, specify the password for the user specified in Primary Database Username. |
Primary Database Username | String | | Visible in Flow Designer only when Downstream Capture is True. If Downstream Capture is enabled, specify the Oracle user you created as described in Configuring OJet with Active Data Guard. Do not use a property variable for this property. |
Send Before Image | Boolean | True | Set to False to omit before images (the before value in output WAEvents will be null). |
SSL Config | String | | If using SSL with the Oracle JDBC driver, specify the required properties. |
Start SCN | String | | Optionally specify an SCN from which to start reading (see Replicating Oracle data to another Oracle database for an example). Do not specify a start point prior to when supplemental logging was enabled. When you set a Start SCN, before running the application trigger a dictionary build by running this command: EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=&gt; DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); SELECT CURRENT_SCN FROM V$DATABASE; See also Switching from initial load to continuous replication. |
Start Timestamp | String | | With the default value of null (blank), only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is DD-MON-YYYY HH:MI:SS. For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00. Do not specify a start point prior to when supplemental logging was enabled. |
Tables | String | | The table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use Oracle Reader) in the format &lt;schema&gt;.&lt;table&gt;. (If using Oracle 12c or later with PDB, use three-part names: &lt;pdb&gt;.&lt;schema&gt;.&lt;table&gt;.) Names are case-sensitive. Do not modify this property when CDDL Capture is True or recovery is enabled for the application. You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard (for example, myschema.%). Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate. When reading from Oracle 11g or 12c Release 1 (12.1), table and column identifiers (names) may not exceed 30 bytes: when using one-byte character sets, the limit is 30 characters; when using two-byte character sets, the limit is 15 characters. When reading from Oracle 12c Release 2 (12.2) or later, table and column identifiers (names) may not exceed 128 bytes: when using one-byte character sets, the limit is 128 characters; when using two-byte character sets, the limit is 64 characters. Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim. See also Specifying key columns for tables without a primary key. |
Transaction Age Spillover Limit | Integer | 1000 | OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the amount of time that any message in the transaction has been in memory exceeds the specified number of seconds. |
Transaction Buffer Spillover Count | Integer | 10000 | OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the number of messages in memory for the transaction exceeds the specified number. |
Username | String | | The name of the OJet user created as described in Running the OJet setup script on Oracle or Configuring OJet with Active Data Guard. If using Oracle 12c or later with PDB, specify the CDB user (c##striim). Do not use a property variable for this property. |
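As a sketch of how the escaped-quote form of OJet Config looks in TQL (the property name OJetConfig follows the usual convention of dropping spaces, and the configuration values are taken from the example in the table above for illustration only):

CREATE SOURCE OJetCDCIn USING OJet (
  Username: 'striim',
  Password: 'passwd',
  ConnectionURL: '203.0.113.49:1521:orcl',
  Tables: 'myschema.%',
  OJetConfig: '{ \"OJET\":[\"queuesize:20000\"], \"CAPTURE\":[\"fetch_lcr_attributes:true\"] }'
)
OUTPUT TO OJetCDCStream;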
Using Oracle native network encryption with OJet
For more information, see Configuring Oracle Database Native Network Encryption and Data Integrity.
1. Edit $TNS_ADMIN\sqlnet.ora.
2. Add, on a line by itself: SQLNET.ENCRYPTION_CLIENT=required (or accepted).
3. Follow that with, on a line by itself: SQLNET.ENCRYPTION_TYPES_CLIENT=(&lt;list of types to support&gt;), replacing &lt;list of types to support&gt; with a comma-separated list of encryption types to support (3DES112, 3DES168, AES128, AES192, AES256, DES, DES40, RC4_128, RC4_256, RC4_40, or RC4_56). For example: SQLNET.ENCRYPTION_TYPES_CLIENT=(AES192,AES256)
4. Save the file.
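Put together, the additions to sqlnet.ora would look something like the following excerpt (AES192 and AES256 are simply the types chosen in the example above; substitute the types your environment allows):

SQLNET.ENCRYPTION_CLIENT=required
SQLNET.ENCRYPTION_TYPES_CLIENT=(AES192,AES256)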
Oracle Reader and OJet WAEvent fields
The output data type for both Oracle Reader and OJet is WAEvent.
metadata: for DML operations, the most commonly used elements are:
DatabaseName (OJet only): the name of the database
OperationName: COMMIT, BEGIN, INSERT, DELETE, UPDATE, or (when using Oracle Reader only) ROLLBACK
TxnID: transaction ID
TimeStamp: timestamp from the CDC log
TableName (returned only for INSERT, DELETE, and UPDATE operations): fully qualified name of the table
ROWID (returned only for INSERT, DELETE, and UPDATE operations): the Oracle ID for the inserted, deleted, or updated row
To retrieve the values for these elements, use the META function (see the example following this list and Parsing the fields of WAEvent for CDC readers).
data: for DML operations, an array of fields, numbered from 0, containing:
for an INSERT or DELETE operation, the values that were inserted or deleted
for an UPDATE, the values after the operation was completed
To retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.
before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operation
dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.
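For instance, a continuous query along the following lines could filter Oracle Reader's output down to INSERT operations and extract a few fields. This is a minimal sketch: the input stream name matches the sample application later in this topic, the CQ and output stream names are invented, the field positions assume the POSAUTHORIZATIONS table used in the output examples, and the exact function syntax is documented in Parsing the fields of WAEvent for CDC readers:

CREATE CQ FilterPosInserts
INSERT INTO PosInsertStream
SELECT META(x,'TableName') AS tableName,
       META(x,'TimeStamp') AS opTime,
       data[0] AS businessName,
       data[1] AS merchantId
FROM OracleCDCStream x
WHERE META(x,'OperationName').toString() = 'INSERT';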
The following is a complete list of fields that may appear in metadata. The actual fields will vary depending on the operation type and other factors.
metadata property | present when using Oracle Reader | present when using OJet | comments |
---|---|---|---|
AuditSessionID | ✓ | | Audit session ID associated with the user session making the change |
BytesProcessed | ✓ | ||
COMMIT_TIMESTAMP | ✓ | ✓ | the UNIX epoch time the transaction was committed, based on the Striim server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long |
COMMITSCN | x | ✓ | system change number (SCN) when the transaction committed |
CURRENTSCN | ✓ | | system change number (SCN) of the operation |
DBCommitTimestamp | ✓ | ✓ | the UNIX epoch time the transaction was committed, based on the Oracle server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long |
DBTimestamp | ✓ | ✓ | the UNIX epoch time of the operation, based on the Oracle server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long |
OperationName | ✓ | ✓ | user-level SQL operation that made the change (INSERT, UPDATE, etc.) |
OperationType | ✓ | ✓ | the Oracle operation type |
ParentTxnID | ✓ | | raw representation of the parent transaction identifier |
PK_UPDATE | ✓ | ✓ | true if an UPDATE operation changed the primary key, otherwise false |
RbaBlk | ✓ | | RBA block number within the log file |
RbaSqn | ✓ | | sequence# associated with the Redo Block Address (RBA) of the redo record associated with the change |
RecordSetID | ✓ | | Uniquely identifies the redo record that generated the row. The tuple (RecordSetID, SSN) together uniquely identifies a logical row change. |
RollBack | ✓ | | 1 if the record was generated because of a partial or a full rollback of the associated transaction, otherwise 0 |
ROWID | ✓ | see comment | Row ID of the row modified by the change (only meaningful if the change pertains to a DML). This will be NULL if the redo record is not associated with a DML. OJet: will be included only if |
SCN | ✓ | | system change number (SCN) when the database change was made |
SegmentName | ✓ | | name of the modified data segment |
SegmentType | ✓ | | type of the modified data segment (INDEX, TABLE, ...) |
Serial | ✓ | | serial number of the session that made the change |
Serial# | | see comment | serial number of the session that made the change; will be included only if |
Session | ✓ | | session number of the session that made the change |
Session# | | see comment | session number of the session that made the change; will be included only if |
SessionInfo | ✓ | | Information about the database session that executed the transaction. Contains process information, machine name from which the user logged in, client info, and so on. |
SQLRedoLength | ✓ | | length of reconstructed SQL statement that is equivalent to the original SQL statement that made the change |
SSN | ✓ | | SQL sequence number. The tuple (RecordSetID, SSN) together uniquely identifies a logical row change. |
TableName | ✓ | ✓ | name of the modified table (in case the redo pertains to a table modification) |
TableSpace | ✓ | | name of the tablespace containing the modified data segment |
ThreadID | ✓ | | ID of the thread that made the change to the database |
Thread# | | see comment | ID of the thread that made the change to the database; will be included only if |
TimeStamp | ✓ | ✓ | the UNIX epoch time of the operation, based on the Striim server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long |
TransactionName | ✓ | ✓ | name of the transaction that made the change (only meaningful if the transaction is a named transaction) |
TxnID | ✓ | ✓ | raw representation of the transaction identifier |
TxnUserID | ✓ | ||
UserName | ✓ | | name of the user associated with the operation |
OracleReader simple application
The following application will write change data for all tables in myschema
to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with LogMiner (see Configuring Oracle LogMiner) and myschema with the name of the schema containing the tables to be read.
CREATE APPLICATION OracleLMTest; CREATE SOURCE OracleCDCIn USING OracleReader ( Username:'striim', Password:'passwd', ConnectionURL:'203.0.113.49:1521:orcl', Tables:'myschema.%', FetchSize:1 ) OUTPUT TO OracleCDCStream; CREATE TARGET OracleCDCOut USING SysOut(name:OracleCDCLM) INPUT FROM OracleCDCStream; END APPLICATION OracleLMTest;
Alternatively, you may specify a single table, such as myschema.mytable
. See the discussion of Tables in Oracle Reader properties for additional examples of using wildcards to select a set of tables.
When troubleshooting problems, you can get the current LogMiner SCN and timestamp by entering mon <namespace>.<OracleReader source name>;
in the Striim console.
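For example, if the OracleLMTest application above was created in the default admin namespace (an assumption for illustration), the command would be mon admin.OracleCDCIn;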
Oracle Reader example output
OracleReader's output type is WAEvent. See WAEvent contents for change data for general information.
The following are examples of WAEvents emitted by OracleReader for various operation types. Note that many of the metadata values (see Oracle Reader and OJet WAEvent fields) are dependent on the Oracle environment and thus will vary from the examples below.
The examples all use the following table:
CREATE TABLE POSAUTHORIZATIONS ( BUSINESS_NAME varchar2(30), MERCHANT_ID varchar2(100), PRIMARY_ACCOUNT NUMBER, POS NUMBER,CODE varchar2(20), EXP char(4), CURRENCY_CODE char(3), AUTH_AMOUNT number(10,3), TERMINAL_ID NUMBER, ZIP number, CITY varchar2(20), PRIMARY KEY (MERCHANT_ID)); COMMIT;
INSERT
If you performed the following INSERT on the table:
INSERT INTO POSAUTHORIZATIONS VALUES( 'COMPANY 1', 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu', 6705362103919221351, 0, '20130309113025', '0916', 'USD', 2.20, 5150279519809946, 41363, 'Quicksand'); COMMIT;
Using LogMiner, the WAEvent for that INSERT would be similar to:
data: ["COMPANY 1","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu","6705362103919221351","0","20130309113025", "0916","USD","2.2","5150279519809946","41363","Quicksand"] metadata: "RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726174", "SQLRedoLength":"325","BytesProcessed":"782","ParentTxnID":"8.16.463","SessionInfo":"UNKNOWN", "RecordSetID":" 0x000015.00000310.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726175, "SEQUENCE":"1","Rollback":"0","STARTSCN":"726174","SegmentName":"POSAUTHORIZATIONS", "OperationName":"INSERT","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"784", "SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"8.16.463","Serial":"201", "ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB", "DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72617400000059109745623040160001", "Session":"105"} before: null
UPDATE
If you performed the following UPDATE on the table:
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0; COMMIT;
Using LogMiner with the default setting Compression: false
, the WAEvent for that UPDATE for the row created by the INSERT above would be similar to:
data: ["COMPANY 5A","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null, null,null,null,null,null,null] metadata: "RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726177"," SQLRedoLength":"164","BytesProcessed":"729","ParentTxnID":"2.5.451","SessionInfo":"UNKNOWN", "RecordSetID":" 0x000015.00000313.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726178, "SEQUENCE":"1","Rollback":"0","STARTSCN":"726177","SegmentName":"POSAUTHORIZATIONS", "OperationName":"UPDATE","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"787", "SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"2.5.451","Serial":"201", "ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB", "DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72617700000059109745625006240000", "Session":"105"} before: ["COMPANY 1","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,null,null,null,null, null,null]
Note that when using LogMiner the before
section contains a value only for the modified column. You may use the IS_PRESENT() function to check whether a particular field value has a value (see Parsing the fields of WAEvent for CDC readers).
With Compression: true
, only the primary key is included in the before
array:
before: [null,"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null, null,null,null,null,null,null]
In all cases, if OracleReader's SendBeforeImage
property is set to False
, the before
value will be null
.
DELETE
If you performed the following DELETE on the table:
DELETE from POSAUTHORIZATIONS where pos=0; COMMIT;
Using LogMiner with the default setting Compression: false
, the WAEvent for a DELETE for the row affected by the UPDATE above would be:
data: ["COMPANY 5A","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu","6705362103919221351","0","20130309113025", "0916","USD","2.2","5150279519809946","41363","Quicksand"] metadata: "RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726180", "SQLRedoLength":"384","BytesProcessed":"803","ParentTxnID":"3.29.501","SessionInfo":"UNKNOWN", "RecordSetID":" 0x000015.00000315.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726181, "SEQUENCE":"1","Rollback":"0","STARTSCN":"726180","SegmentName":"POSAUTHORIZATIONS", "OperationName":"DELETE","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"789", "SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"3.29.501","Serial":"201", "ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB", "DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72618000000059109745626316960000", "Session":"105"} before: null
With Compression: true
, the data
array would be:
data: [null,"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,null,null,null,null,null,null]
Note that the contents of data
and before
are reversed from what you might expect for a DELETE operation. This simplifies programming since you can get data for INSERT, UPDATE, and DELETE operations using only the data
field.
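Because of this, a single continuous query can handle all three operation types the same way. For example, this sketch (reusing the stream name from the sample application and the POSAUTHORIZATIONS field positions; the CQ and output stream names are invented) projects the operation name and merchant ID for every DML event:

CREATE CQ AllOpsMerchants
INSERT INTO MerchantOpsStream
SELECT META(x,'OperationName') AS op,
       data[1] AS merchantId
FROM OracleCDCStream x;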
OJet simple application
Before deploying an OJet application, note the prerequisites discussed in Runtime considerations when using OJet.
The following application will write change data for all tables in myschema
to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with OJet (see Configuring Oracle and Striim to use OJet) and myschema with the name of the schema containing the tables to be read.
CREATE APPLICATION OJetTest; CREATE SOURCE OracleCDCIn USING Ojet ( Username:'striim', Password:'passwd', ConnectionURL:'203.0.113.49:1521:orcl', Tables:'myschema.%' ) OUTPUT TO OracleCDCStream; CREATE TARGET OracleCDCOut USING SysOut(name:OracleCDCLM) INPUT FROM OracleCDCStream; END APPLICATION OJetTest;
Alternatively, you may specify a single table, such as myschema.mytable
. See the discussion of Tables in OJet properties for additional examples of using wildcards to select a set of tables.
Oracle Reader and OJet data type support and correspondence
Oracle type | TQL type when using Oracle Reader | TQL type when using OJet |
---|---|---|
ADT | not supported, values will be set to null | not supported, application will halt if it reads a table containing a column of this type |
ANYDATA, ANYDATASET, ANYTYPE | not supported; values will be set to EMPTY_CLOB | not supported; application will halt if it reads a table containing a column of this type |
BFILE | not supported, values will be set to null | values for a column of this type will contain the file names, not their contents |
BINARY_DOUBLE | Double | Double |
BINARY_FLOAT | Float | Float |
BLOB | String (a primary or unique key must exist on the table). An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value. When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. If you encounter either of these issues, contact Striim support for assistance. | Byte[] |
CHAR | String | String |
CLOB | String (a primary or unique key must exist on the table). An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value. When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. If you encounter either of these issues, contact Striim support for assistance. | String |
DATE | DateTime | java.time.LocalDateTime |
FLOAT | String | String |
INTERVALDAYTOSECOND | String (always has a sign) | String (unsigned) |
INTERVALYEARTOMONTH | String (always has a sign) | String (unsigned) |
JSON | not supported, values will be set to null | not supported, application will halt if it reads a table containing a column of this type |
LONG | Results may be inconsistent. Oracle recommends using CLOB instead. | String |
LONG RAW | Results may be inconsistent. Oracle recommends using CLOB instead. | Byte[] |
NCHAR | String | String |
NCLOB | String (a primary or unique key must exist on the table) | String (a primary or unique key must exist on the table) |
NESTED TABLE | not supported, application will halt if it reads a table containing a column of this type | not supported, application will halt if it reads a table containing a column of this type |
NUMBER | String | String |
NVARCHAR2 | String | String |
RAW | String | Byte[] |
REF | not supported, application will halt if it reads a table containing a column of this type | not supported, application will halt if it reads a table containing a column of this type |
ROWID | String | values for a column of this type will be set to null |
SDO_GEOMETRY | values will be set to null | Known issue DEV-20726: if a table contains a column of this type, the application will terminate |
TIMESTAMP | DateTime | java.time.LocalDateTime |
TIMESTAMP WITH LOCAL TIME ZONE | DateTime | java.time.LocalDateTime |
TIMESTAMP WITH TIME ZONE | DateTime | java.time.ZonedDateTime |
UDT | not supported, values will be set to null | not supported, application will halt if it reads a table containing a column of this type |
URIType, DBURIType, HTTPURIType, XDBURIType | not supported; values will be set to null | not supported; application will halt if it reads a table containing a column of this type |
UROWID | not supported, a table containing a column of this type will not be read | not supported due to Oracle bug 33147962, application will terminate if it reads a table containing a column of this type |
VARCHAR2 | String | String |
VARRAY | Supported by LogMiner only in Oracle 12c and later; specific Oracle Reader settings are required and limitations apply. When the output of an Oracle Reader source is the input of a target using XML Formatter, the formatter's Format Column Value As property must be set to | known issue DEV-29799: if a table contains a column of this type, the application will terminate |
XMLTYPE | Supported only for Oracle 12c and later. When DictionaryMode is OnlineCatalog, values in any XMLType columns will be set to null. When DictionaryMode is OfflineCatalog, reading from tables containing XMLType columns is not supported. | String |
Target data type support & mapping for Oracle sources
The table below details how Striim maps the data types of an Oracle source to the data types of a target such as Azure Synapse, BigQuery, Databricks, and so on when you create an application using a wizard with Auto Schema Creation, perform an initial load using Database Reader with Create Schema enabled, or run the schema conversion utility, or when Striim schema evolution creates or alters target tables.
Oracle source types ADT, ARRAY, BFILE, LONG, LONG RAW, NESTED TABLE, REF, ROWID, SDO_GEOMETRY, UDT, and UROWID are not supported. Oracle suggests using CLOB in place of LONG or LONG RAW.
Oracle Data Type | Azure Synapse | BigQuery | Databricks | Db2 for z/OS | Fabric Mirror | MariaDB | MySQL | Oracle | PostgreSQL | Snowflake | Spanner | Spanner PG Dialect | SQL Server | Yellowbrick |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BFILE | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
BINARY_DOUBLE | float(53) | float64 | double | double | bytes | double | double | binary_double | double precision | double precision | FLOAT64 | float8 | float(53) | double precision |
BINARY_FLOAT | float(53) | float64 | float | float | float | decimal(65) | decimal(65) | binary_float | double precision | float | FLOAT64 | float8 | float(53) | double precision |
BLOB | varbinary(8000) | bytes(65535) | binary | blob(2147483647) | bytes | longblob | longblob | BLOB | bytea | BINARY | BYTES(MAX) | bytea | varbinary(max) | character varying(p) |
CHAR | character(p) | string | string | character(p) | string | character(p) | character(p) | character(p) | character(p) | character(p) | STRING(p) | varchar(p) | character(p) | character(p) |
CHAR(p) | character(p), if (p) <= 8000 varchar(8000), if (p) > 8000* | string | string | character(p), if (p) <= 255 clob(p), if 255 <= (p) <= 2147483647 | string | character(p), if (p) <= 255 longtext, if 255 <= (p) <= 2147483647 | character(p), if (p) <= 255 longtext, if 255 <= (p) <= 2147483647 | clob, if (p) > 2000* character(p), if (p) <= 2000 | character(p), if (p) <= 10485760 text, if (p) > 10485760* | character, if (p) > 16777216* character(p), if (p) <= 16777216 | STRING(p), if (p) <= 2621440 | varchar(p), if (p) <= 1048576 text, if (p) > 1048576* | character(p), if (p) <= 8000 varchar(max), if (p) > 8000* | character(p), if (p) <= 64000 |
CLOB | varchar(8000) | string | string | clob | string | LONGTEXT | LONGTEXT | clob | text | VARCHAR(16777216) | STRING(MAX) | text | varchar(max) | character varying(p) |
DATE | datetime2 | datetime | date | timestamp | date | datetime | datetime | date | timestamp without time zone | timestamp | TIMESTAMP | timestamptz | datetime2 | timestamp |
FLOAT | varchar(8000) | float64 | float | varchar(p) | double | double | double | float(p) | double precision | float | FLOAT64 | float8 | varchar(8000) | double precision |
FLOAT(p) | float(p), if (p) <= 53 varchar(8000), if (p) > 53* | float64, if (p) <= 308, if (s) <= 16 | float | varchar(p), if (p) > 76* real, if (p) <= 76 | float, if (p) <= 10 double, if (p) > 10 | float(p), if (p) <= 23 double, if (p) > 23* | float(p), if (p) <= 23 double, if (p) > 23* | float, if (p) > 126* float(p), if (p) <= 126 | real, if (p) <= 8 double precision, if (p) > 8 | float, if (p) <= 38 | STRING(MAX), if (p) > 308* STRING(MAX), if (s) > 15* FLOAT64, if (p) <= 308, if (s) <= 15 | float8, if (p) > 309 float8, if 38 <= (p) <= 309 float4, if (p) <= 38 | float(p), if (p) <= 53 varchar(8000), if (p) > 53* | double precision, if (p) > 24* real, if (p) <= 24 |
INTERVAL DAY TO SECOND | varchar(150) | string | string | varchar(150) | string | varchar(150) | varchar(150) | interval day(p) to second(s) | character varying | varchar(150) | STRING(150) | varchar(150) | varchar(150) | character varying(150) |
INTERVAL YEAR TO MONTH | varchar(150) | string | string | varchar(150) | string | varchar(150) | varchar(150) | interval year(p) to month | character varying | varchar(150) | STRING(150) | varchar(150) | varchar(150) | character varying(150) |
LONG | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
LONG RAW | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
NCHAR(p) | nchar(p), if (p) <= 4000 nvarchar(4000), if (p) > 4000* | string | string | character(p), if (p) <= 255 clob(p), if 255 <= (p) <= 2147483647 | string | nchar(p), if (p) <= 255 nvarchar(p), if 255 <= (p) <= 65535 | nchar(p), if (p) <= 255 nvarchar(p), if 255 <= (p) <= 65535 | nchar(p), if (p) <= 1000 nclob, if (p) > 1000* | character(p), if (p) <= 10485760 text, if (p) > 10485760* | character, if (p) > 16777216* character(p), if (p) <= 16777216 | STRING(p), if (p) <= 2621440 | varchar(p), if (p) <= 1048576 text, if (p) > 1048576* | nchar(p), if (p) <= 4000 nvarchar(max), if (p) > 4000* | character(p), if (p) <= 64000 |
NCLOB | nvarchar(p) | string | string | clob | string | LONGTEXT | LONGTEXT | nclob | text | VARCHAR | STRING(p) | text | nvarchar(p) | character varying(p) |
NUMBER | float(53) numeric(p,s) | bignumeric | decimal(38) decimal(p,s) | numeric | string decimal | decimal(p,s) float(53) | decimal(p,s) float(53) | number(p,s) | numeric(p,s) | numeric(p,s) | NUMERIC | numeric | float(53) numeric(p,s) | decimal(p,s) |
NUMBER(p,0) | numeric(p,s), if (p) <= 38, if (s) <= 38 | numeric, if (p) <= 29, if (s) <= 9 | decimal(p,s), if (p) <= 38, if (s) <= 37 | numeric(p,s), if (p) <= 31, if (s) <= 30 | decimal, if (p) <= 38 | decimal(p,s), if (p) <= 65, if (s) <= 30 | decimal(p,s), if (p) <= 65, if (s) <= 30 | number(p,s), if (p) <= 38, if (s) <= 127 | numeric(p,s), if (p) <= 1000, if (s) <= 1000 | numeric(p,s), if (p) <= 38, if (s) <= 37 | NUMERIC, if (p) <= 38, if (s) <= 9 | numeric, if (p) <= 131072, if (s) <= 16384 | numeric(p,s), if (p) <= 38, if (s) <= 38 | decimal(p,s), if (p) <= 38, if (s) <= 38 |
NUMBER(p,s) | varchar(8000), if (p,s) > 38, if (s) > 38* numeric(p,s), if (p) <= 38, if (s) <= 38 | string, if (p,s) > 39, if (s) > 38* numeric, if (p) <= 29, if (s) <= 9 | string, if (p,s) > 38, if (s) > 37* decimal(p,s), if (p) <= 38, if (s) <= 37 | numeric(p,s), if (p) <= 31, if (s) <= 30 numeric, if (p,s) > 31, if (s) > 30* | string, if (p,s) > 38* decimal, if (p) <= 38 | decimal(p,s), if (p) <= 65, if (s) <= 30 TEXT, if (p,s) > 65, if (s) > 30* | decimal(p,s), if (p) <= 65, if (s) <= 30 TEXT, if (p,s) > 65, if (s) > 30* | number, if (s) > 127* number, if (p,s) > 38* number(p,s), if (p) <= 38, if (s) <= 127 | double precision, if (s) > 1000 double precision, if (p,s) > 1000 numeric(p,s), if (p) <= 1000, if (s) <= 1000 | numeric(p,s), if (p) <= 38, if (s) <= 37 VARCHAR, if (p,s) > 38, if (s) > 37* | NUMERIC, if (p) <= 38, if (s) <= 9 STRING(MAX), if (p,s) > 308, if (s) > 15* | numeric, if (p) <= 131072, if (s) <= 16384 | varchar(8000), if (p,s) > 38, if (s) > 38* numeric(p,s), if (p) <= 38, if (s) <= 38 | character varying, if (p,s) > 38, if (s) > 38* decimal(p,s), if (p) <= 38, if (s) <= 38 |
NVARCHAR2(p) | nvarchar(p), if (p) <= 4000 nvarchar(4000), if (p) > 4000* | string | string | varchar(p), if (p) <= 4045 clob(p), if 4045 <= (p) <= 2147483647* | string | longtext, if (p) > 65535* nvarchar(p), if (p) <= 65535 | longtext, if (p) > 65535* nvarchar(p), if (p) <= 65535 | nclob, if (p) > 4000* NVARCHAR2(p), if (p) <= 4000 | character varying(p), if (p) <= 10485760 character varying, if (p) > 10485760* | VARCHAR, if (p) > 16777216* VARCHAR(p), if (p) <= 16777216 | STRING(MAX), if (p) > 2621440 STRING(p), if (p) <= 2621440 | varchar(p), if (p) <= 1048576 text, if (p) > 1048576* | nvarchar(p), if (p) <= 4000 nvarchar(max), if (p) > 4000* | character varying, if (p) > 64000* character varying(p), if (p) <= 64000 |
RAW(p) | varbinary(8000), if (p) > 8000* varbinary(p), if (p) <= 8000 | bytes, if (p) > 9223372036854775807 bytes(p), if (p) <= 9223372036854775807 | binary | blob, if (p) > 1011 varbinary(4046), if (p) <= 1011 | bytes | longblob, if (p) <= 2147483647 | longblob, if (p) <= 2147483647 | BLOB, if (p) > 2000 RAW(p), if (p) <= 2000 | bytea, if (p) <= 2147483647 | BINARY, if (p) <= 8388608 BINARY, if (p) > 8388608* | BYTES(MAX), if (p) > 10485760 BYTES(p), if (p) <= 10485760 | bytea, if (p) <= 1048576 | varbinary(p), if (p) <= 8000 varbinary(max), if (p) > 8000* | character varying(p), if (p) <= 64000 |
ROWID | varchar(150) | string | string | varchar(150) | string | varchar(150) | varchar(150) | ROWID | character varying(150) | varchar(150) | STRING(150) | varchar(150) | varchar(150) | character varying(150) |
SDO_GEOMETRY | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
TIMESTAMP | datetime2 | timestamp | timestamp | timestamp | timestamp-millis | datetime | datetime | timestamp(s) | timestamp(s) without time zone | timestamp | TIMESTAMP | timestamptz | datetime2 | timestamp |
TIMESTAMP WITH LOCAL TIME ZONE | datetime2 | timestamp | timestamp | timestamp | local-timestamp-millis | datetime(s) | datetime(s) | timestamp(s) with local time zone | timestamp(s) without time zone | timestamp | TIMESTAMP | timestamptz | datetime2 | timestamp |
TIMESTAMP WITH LOCAL TIME ZONE(p) | datetime2 | timestamp, if (s) <= 6 | timestamp | timestamp | local-timestamp-millis | datetime(s), if (p) <= 19, if (s) <= 6 datetime, if (p) > 19, if (s) > 6* | datetime(s), if (p) <= 19, if (s) <= 6 datetime, if (p) > 19, if (s) > 6* | timestamp with local time zone, if (s) > 9* timestamp(s) with local time zone, if (s) <= 9 | timestamp without time zone, if 29 <= (p) <= 29, if (s) > 6* timestamp(s) without time zone, if (p) <= 29, if (s) <= 6 | timestamp | TIMESTAMP | timestamptz, if 36 <= (p) <= 36* timestamptz, if (p) <= 36, if (s) <= 6 | datetime2 | timestamp |
TIMESTAMP WITH TIME ZONE | datetimeoffset | timestamp | timestamp | timestamp with time zone | string | datetime(s) | datetime(s) | timestamp with time zone | timestamp(s) with time zone | timestamp with time zone | TIMESTAMP | timestamptz | datetimeoffset | timestamptz |
TIMESTAMP WITH TIME ZONE(p) | datetimeoffset | timestamp, if (s) <= 6 | timestamp | timestamp with time zone | string | datetime(s), if (p) <= 19, if (s) <= 6 datetime, if (p) > 19, if (s) > 6* | datetime(s), if (p) <= 19, if (s) <= 6 datetime, if (p) > 19, if (s) > 6* | timestamp with time zone | timestamp(s) with time zone, if (p) <= 35, if (s) <= 6 timestamp with time zone, if 35 <= (p) <= 35, if (s) > 6* | timestamp with time zone | TIMESTAMP | timestamptz, if 36 <= (p) <= 36* timestamptz, if (p) <= 36, if (s) <= 6 | datetimeoffset | timestamptz |
TIMESTAMP(p) | datetime2 | timestamp, if (s) <= 6 | timestamp | timestamp | timestamp-millis | datetime | datetime | timestamp(s), if (s) <= 9 timestamp, if (s) > 9* | timestamp without time zone, if 29 <= (p) <= 29, if (s) > 6* timestamp(s) without time zone, if (p) <= 29, if (s) <= 6 | timestamp | TIMESTAMP | timestamptz, if 36 <= (p) <= 36* timestamptz, if (p) <= 36, if (s) <= 6 | datetime2 | timestamp |
UROWID | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
VARCHAR2(p) | varchar(p), if (p) <= 8000 varchar(8000), if (p) > 8000* | string | string | varchar(p), if (p) <= 4045 clob(p), if 4045 <= (p) <= 2147483647* | string | varchar(p), if (p) <= 65535 longtext, if 65535 <= (p) <= 2147483647* | varchar(p), if (p) <= 65535 longtext, if 65535 <= (p) <= 2147483647* | clob, if (p) > 4000* VARCHAR2(p), if (p) <= 4000 | character varying(p), if (p) <= 10485760 character varying, if (p) > 10485760* | VARCHAR, if (p) > 16777216* VARCHAR(p), if (p) <= 16777216 | STRING(MAX), if (p) > 2621440 STRING(p), if (p) <= 2621440 | varchar(p), if (p) <= 1048576 text, if (p) > 1048576* | varchar(max), if (p) > 8000* varchar(p), if (p) <= 8000 | character varying, if (p) > 64000* character varying(p), if (p) <= 64000 |
XMLTYPE | varchar(8000) | string | string | xml | string | LONGTEXT | LONGTEXT | xmltype | xml | varchar | STRING(MAX) | varchar(p) | xml | character varying |
*When using the schema conversion utility, these mappings appear in converted_tables_with_striim_intelligence.sql.