Confluent Premium Connector Integrates Oracle and Kafka


The Confluent Premium Connector captures changes in Oracle databases to modernize data warehouses and facilitate data synchronization and real-time analytics.

Confluent announced today it has added a Premium Connector for Oracle Change Data Capture (CDC) Source to its portfolio as part of an effort to bridge the divide between the open-source Apache Kafka event streaming platform and the most widely employed relational database in the enterprise.

Apache Kafka software is playing an increasingly central role in digital business transformation initiatives because it enables IT organizations to more easily store data that can be employed to drive a wide variety of real-time applications.

While connectors for integrating Kafka with legacy databases already exist, Confluent, which provides a curated, enterprise-grade distribution of Kafka, decided to build a premium connector to address the need to capture data as it is added, updated, or removed from an Oracle database, says Diby Malakar, senior director of product management at Confluent.


Rather than requiring enterprise IT organizations to build those connectors on their own using low-level application programming interfaces (APIs), Malakar says Confluent has decided to build and maintain them on behalf of customers that would otherwise have to dedicate as much as 24 months of engineering time to design, test, and maintain a connector with similar capabilities. “We pre-built the connectors for them,” says Malakar.

Confluent expects to make additional connectors for data sources available as part of an effort to break down the barriers between Kafka and the various data silos strewn across the enterprise, adds Malakar.

In the case of the Premium Connector for Oracle CDC Source, a development team can now securely capture changes happening in Oracle databases and store them in separate Kafka topics to facilitate data synchronization and real-time analytics and to modernize data warehouses. Other use cases include protecting high-value transactional data stored in Oracle databases by backing up the redo log to a distributed, fault-tolerant system in Kafka, and processing the latest change events in real time in a way that stays synchronized with the current state of each Oracle table.
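As a rough illustration of what that setup looks like, the sketch below registers the connector with a Kafka Connect worker over its standard REST API. The connector class and property names are recalled from Confluent's documentation and should be treated as illustrative rather than authoritative; the host names, credentials, and table names are hypothetical.

```python
# Sketch: register the Oracle CDC Source connector with a Kafka Connect worker
# via its REST API. Connector class and property names are recalled from
# Confluent's documentation -- verify them against the current docs.
import requests

CONNECT_URL = "http://localhost:8083/connectors"  # assumed Connect worker address

connector = {
    "name": "oracle-cdc-source",
    "config": {
        "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
        "oracle.server": "oracle.example.com",   # hypothetical database host
        "oracle.port": "1521",
        "oracle.sid": "ORCLCDB",
        "oracle.username": "cdc_user",           # hypothetical credentials
        "oracle.password": "********",
        # Only capture changes from the tables we care about.
        "table.inclusion.regex": "ORCLCDB\\.SALES\\.(ORDERS|CUSTOMERS)",
        # Route each table's change events to its own topic.
        "table.topic.name.template": "${databaseName}.${schemaName}.${tableName}",
        "start.from": "snapshot",
        "tasks.max": "1",
    },
}

resp = requests.post(CONNECT_URL, json=connector, timeout=30)
resp.raise_for_status()
print(resp.json())
```

Once the connector is running, the Connect worker streams each table's inserts, updates, and deletes into its own topic, from which downstream systems can synchronize or analyze the data.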

IT teams can also more easily identify which Oracle record has changed by its primary key, and they can prevent unauthorized access to Oracle change data by separating each table’s change events into different Kafka topics.
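A minimal sketch of the consuming side, assuming the per-table topic naming shown above and hypothetical database, schema, and table names: each change event's key carries the changed row's primary key, so a downstream application can see exactly which record changed. The actual key and value formats depend on the converters configured on the connector.

```python
# Sketch: consume change events from one table's dedicated topic.
# Topic name follows the per-table template above (hypothetical names);
# the record key identifies the primary key of the changed Oracle row.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "orders-change-reader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["ORCLCDB.SALES.ORDERS"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Key = primary key of the changed row; value = change event payload.
        key = msg.key().decode("utf-8") if msg.key() else None
        print(f"table=ORDERS pk={key} payload_bytes={len(msg.value() or b'')}")
finally:
    consumer.close()
```

Because access to each topic can be controlled independently, separating tables into their own topics also lets administrators grant consumers access only to the change streams they are authorized to read.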

In general, the need to process data in real time is becoming more critical as organizations look to process and analyze data closer to the point where it is created. Most existing business processes are based on batch-oriented applications that were never designed to be updated in near real time. Platforms such as Kafka, along with other event-driven platforms such as serverless computing frameworks, make it possible to add a layer of infrastructure for processing data in real time alongside the legacy relational database platforms on which batch-oriented applications have been built.

Of course, there may come a day when all those batch-oriented applications are replaced. Given the costs involved, however, that might take years; it is simply cost-prohibitive for organizations to replace hundreds of legacy applications at once. In the meantime, the next best thing is to add a layer of infrastructure capable of driving real-time processing of data in a way that enables bi-directional updates with those legacy applications.
