Connect to Snowflake
If you have already created one or more pipelines, you can select an existing Snowflake connection to write to the same Snowflake database. When you create your first pipeline, or if you want to write to a different Snowflake database, you must enter the following connection details. See Configure Snowflake for details on creating the referenced objects in Snowflake.
Host: your Snowflake account identifier
Username: the Snowflake user ID Striim will use to connect
Password: the password for the specified user ID
Database: the existing Snowflake database that Striim will write to
Role: a role associated with the specified user ID that has the privileges required to use the specified database and warehouse
Warehouse: an existing Snowflake warehouse (leave blank to use the default warehouse for the specified user)
JDBC URL Params (optional): any additional JDBC connection parameters required to connect to your Snowflake instance (see Docs » Connecting to Snowflake » Connectors & Drivers » JDBC Driver » Configuring the JDBC Driver). Separate multiple parameters with &, for example useProxy=true&proxyHost=198.51.100.0&proxyPort=3128&proxyUser=example&proxyPassword=******
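The "separate multiple parameters with &" rule can be sketched as follows. This is an illustrative helper, not part of Striim or the Snowflake driver; the parameter names come from the proxy example above.

```python
# Hypothetical helper: join JDBC connection parameters with "&",
# producing the string expected in the JDBC URL Params field.
def build_jdbc_params(params: dict) -> str:
    return "&".join(f"{key}={value}" for key, value in params.items())

proxy_params = {
    "useProxy": "true",
    "proxyHost": "198.51.100.0",
    "proxyPort": "3128",
    "proxyUser": "example",
    "proxyPassword": "******",
}
print(build_jdbc_params(proxy_params))
# useProxy=true&proxyHost=198.51.100.0&proxyPort=3128&proxyUser=example&proxyPassword=******
```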
How do you want to write continuous changes to Snowflake?
Write continuous changes as audit records (default; also known as APPEND ONLY mode): Snowflake retains a record of every operation in the source. For example, if you insert a row, then update it, then delete it, Snowflake will have three records, one for each operation in the source (INSERT, UPDATE, and DELETE). This is appropriate when you want to be able to see the state of the data at various points in the past, for example, to compare activity for the current month with activity for the same month last year.
With this setting, Striim will add two additional columns to each table: STRIIM_OPTIME, a timestamp for the operation, and STRIIM_OPTYPE, the event type (INSERT, UPDATE, or DELETE). Note: on initial sync with SQL Server, all STRIIM_OPTYPE values are SELECT.
Write continuous changes directly (also known as MERGE mode): Snowflake tables are synchronized with the source tables. For example, if you insert a row, then update it, Snowflake will have only the updated data. If you then delete the row from the source table, Snowflake will no longer have any record of that row.
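The difference between the two modes can be illustrated with a small sketch. This is not Striim code; it only mimics the end state described above for a row that is inserted, updated, and then deleted at the source (the STRIIM_OPTYPE column is the audit column described under APPEND ONLY mode).

```python
# Illustrative sketch: how the same source operations land in Snowflake
# under APPEND ONLY (audit records) vs. MERGE (synchronized) mode.
source_ops = [
    ("INSERT", {"id": 1, "name": "Alice"}),
    ("UPDATE", {"id": 1, "name": "Alicia"}),
    ("DELETE", {"id": 1, "name": "Alicia"}),
]

# APPEND ONLY: every source operation becomes its own audit record,
# tagged with STRIIM_OPTYPE.
audit_table = [dict(row, STRIIM_OPTYPE=op) for op, row in source_ops]

# MERGE: the target mirrors the current state of the source table.
merged_table = {}
for op, row in source_ops:
    if op == "DELETE":
        merged_table.pop(row["id"], None)
    else:
        merged_table[row["id"]] = row

print(len(audit_table))  # 3 audit records, one per source operation
print(merged_table)      # {} because the row was deleted at the source
```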
Which method would you like to use to write continuous changes to Snowflake?
Streaming: Write from staging to Snowflake using Snowpipe (see Docs » Loading Data into Snowflake » Loading Continuously Using Snowpipe » Introduction to Snowpipe). If you choose this option, you must upload the private key created as described in Configure Snowflake.
File upload: Write from staging to Snowflake using bulk loading (see Docs » Loading Data into Snowflake » Bulk Loading Using COPY). With the default setting, Local, the staging area is a Snowflake internal named stage (see Docs » Loading Data into Snowflake » Bulk Loading Using COPY » Bulk Loading from a Local File System » Choosing an Internal Stage for Local Files). Set to Amazon S3 to use that as the staging area instead, in which case specify the following properties:
S3 Access Key (String): an AWS access key ID (created on the AWS Security Credentials page) for a user with read and write permissions on the bucket (leave blank if using an IAM role)
S3 Bucket Name (String): the S3 bucket to be used for staging; if it does not exist, it will be created
S3 IAM Role (String): an AWS IAM role with read and write permissions on the bucket (leave blank if using an access key)
S3 Region (String): the AWS region of the bucket
S3 Secret Access Key (encrypted password): the secret access key for the access key
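The either/or rule in the notes above (an access key pair or an IAM role, never both) can be sketched as a simple check. This validation function is hypothetical and only restates the property descriptions; it is not part of Striim.

```python
# Hypothetical check of the S3 staging properties described above:
# bucket and region are required, and you supply either an access key
# pair or an IAM role, but not both.
def s3_staging_props_ok(props: dict) -> bool:
    has_key = bool(props.get("S3 Access Key")) and bool(props.get("S3 Secret Access Key"))
    has_role = bool(props.get("S3 IAM Role"))
    has_required = bool(props.get("S3 Bucket Name")) and bool(props.get("S3 Region"))
    return has_required and (has_key != has_role)  # != acts as exclusive-or

# Example: IAM-role-based configuration, access key fields left blank.
iam_props = {
    "S3 Bucket Name": "striim-staging",
    "S3 Region": "us-east-1",
    "S3 IAM Role": "arn:aws:iam::123456789012:role/striim-staging",
}
print(s3_staging_props_ok(iam_props))  # True
```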