Striim Platform 4.1: Another big step forward

We are pleased to announce the release of Striim Platform 4.1, the latest version of Striim’s flagship real-time streaming and data integration platform. Our releases incorporate customer feedback in the form of new features, enhancements to existing features, and bug fixes. We have centered Striim 4.1 around the themes of scalability, performance, and automation.

We have introduced three new data adapters and one new parser in Striim 4.1 to support customers’ high-performance applications and workflows that process large volumes of data. With these new adapters and parsers, Striim now supports over 125 types of readers and writers.

  1. OJet reader for Oracle:  OJet is Striim’s next-generation high-performance Oracle adapter that can read more than 150 gigabytes of data per hour from Oracle databases (up to version 21c). OJet is the highest-performing Oracle CDC reader today. In our tests, OJet read 3 billion events per day from Oracle and wrote to Google BigQuery with an average end-to-end latency of 1.9 seconds. With an average event size of 1.3 KB, this means that OJet read approximately 3.8 TB of data per day. We have designed OJet for efficiency: in our tests, OJet used a mere 43% of CPU across 8 cores.
  2. Azure Cosmos DB reader:  Microsoft Azure Cosmos DB is a fully managed NoSQL database service for modern application development. Striim introduces a new adapter to ingest data using change streams from Azure Cosmos DB with the SQL API or the MongoDB API. You can now use Striim to read real-time data from operational applications running on Cosmos DB, and write to your preferred data warehouse, such as Azure Synapse, Snowflake, or Google BigQuery, to gain visibility into your operational data.
  3. Databricks Delta Lake writer:   Striim now supports real-time integration to Databricks Delta Lake, a feature long requested by our customers. Delta Lake can improve the reliability of data lakes by providing additional capabilities such as ACID transactions, scalable metadata handling, and unified stream and batch data processing. You can now use the Databricks Delta Lake writer to build your real-time SQL analytics, real-time monitoring, and real-time machine-learning workflows.
  4. Parquet parser:  Apache Parquet is a columnar storage file format that is popular in the data engineering and AI/ML ecosystems. You can now read data in Parquet format from supported sources such as Amazon S3 or distributed file systems such as the Hadoop Distributed File System (HDFS), thus enabling real-time integration and analytics with your big data applications.

We have also enhanced our existing readers and writers. We have updated our Salesforce reader to support the latest Salesforce API (v51), and to read custom and multi-objects. We now support Kerberos-based authentication when reading from Oracle and PostgreSQL databases, and merge operations with Microsoft Azure Synapse.

Striim 4.1 offers enhanced operational and management capabilities for customers that have deployed Striim on a single node or across multiple nodes. We support smart application rebalancing by monitoring the compute resources consumed by Striim applications and, in the event of a node going down, distributing Striim applications among the remaining nodes. Striim can detect when the node rejoins the cluster, and it can redistribute Striim applications to balance the load among all online nodes. This maximizes operational uptime, reduces manual intervention, and provides improved scalability and cluster performance for our customers.

Data observability and data traceability are emerging patterns among enterprise customers. When dealing with data integration at scale across multiple teams, and hundreds to thousands of users, enterprise customers often ask where a data entry or data field originated. We are the first data streaming platform to natively support data streaming lineage functions. Striim can send your application metadata to your chosen data warehouse or analytical system. You can then use a data governance tool to trace all Striim components that process your data as it moves from source to target.

With Striim 4.1, we support emerging workload patterns and collaboration between developers and database administrators by sending real-time alerts to Slack channels, enabling them to monitor and react to their data pipelines in real time. Additionally, customers can build on Slack’s integrations with enterprise tools such as ServiceNow or PagerDuty to automatically create IT tickets based on the incoming alert message.
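As a rough sketch of the alerting pattern described above, the snippet below formats a pipeline alert and posts it to a Slack incoming-webhook URL using only the Python standard library. This is not Striim's internal implementation; the function names, message fields, and webhook URL are illustrative assumptions.

```python
# Hypothetical sketch: delivering a pipeline alert to Slack via an
# incoming webhook (the standard Slack mechanism for posting messages
# from external systems). Names and fields here are assumptions.
import json
import urllib.request

def build_alert_payload(app_name: str, status: str) -> dict:
    """Format a pipeline alert as a Slack webhook message payload."""
    return {"text": f":warning: Striim application '{app_name}' is {status}"}

def send_alert(webhook_url: str, payload: dict) -> None:
    """POST the JSON payload to the Slack incoming-webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack responds with "ok" on success

payload = build_alert_payload("OrdersPipeline", "halted")
print(payload["text"])
# send_alert("https://hooks.slack.com/services/...", payload)  # real URL needed
```

Once the alert lands in a channel, Slack-side integrations (such as the ServiceNow or PagerDuty apps mentioned above) can pick it up and open tickets automatically.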

These are just a few of the major new features that are part of Striim 4.1. To hear more about Striim 4.1, you can watch a LinkedIn Live recording from the recent launch. You can also visit the Striim User Guide for a full list of new features included in the release, as well as the list of customer-reported issues fixed in this release.

To get started with Striim 4.1, visit https://www.striim.com/.