Sponsored by Red Hat

Why Managed Event Capabilities are so Useful for Flexibility and Agility


Managed event services can help companies make faster and more efficient use of events and speed the adoption of event-driven architectures.

Companies today must be more engaging and dynamic. Increasingly, events data is playing a larger role in how they conduct business, run their operations, and interact with their customers. Making use of events data and setting up an event-driven architecture can be challenging.

RTInsights recently sat down with Duncan Doyle, Senior Product Manager at Red Hat. We discussed these challenges, how managed event services can help, and some of the use cases he’s seen when companies can more easily implement and use event capabilities. Here is a summary of our conversation. 

RTInsights: What are the challenges companies face when using or incorporating events data into their operations?

Doyle: Many companies are using or looking at event streaming data and event-driven architectures (EDAs), which have been around for some time. In 2006, for example, many companies implemented EDA with products like MQSeries from IBM. Since then, technology has evolved, and new platforms have emerged.

We’ve seen an increasing amount of data being generated, and handling it has become a problem, pushing companies to look at EDA [event-driven architecture]. To get started, most begin with an event streaming platform. But it needs to be managed, tuned for performance, scaled, and have high availability if it is running important workloads.

So, what companies see is that while the architecture and the concept are promising, the actual journey of implementing it is a lot more difficult. It also has to work with all the processes and tooling that have been standardized within the enterprise. How does it connect to an existing monitoring system? Do developers understand how event-driven systems need to be built, given that the paradigm is quite different from what came before?
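The paradigm shift Doyle mentions is essentially the move from direct calls between components to decoupled publish/subscribe. As a minimal illustrative sketch (plain Python, not any particular event platform), a producer publishes to a topic without knowing who, if anyone, will consume the event:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory event bus illustrating the decoupling in EDA."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The producer knows only the topic, never the consumers.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("orders", lambda e: received.append(e))
bus.publish("orders", {"order_id": 1, "amount": 99.0})
```

In a real system the bus would be a durable, distributed broker such as Kafka, which is exactly the operational burden a managed service takes on.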

The main challenge is that companies have to know where to begin. A company doesn't want to deal with all the startup work, such as low-level infrastructure concerns, high availability, and backup. Addressing these issues requires expertise in event-driven technology and consumes resources.

RTInsights: How can managed events services help?

Doyle: Companies that opt for a managed service don’t have the burden of having to learn such technologies. Removing the need for that low-level knowledge and experience allows organizations to realize the benefits of EDA faster than if they have to manage that all by themselves.

Security is another aspect to consider. Companies have the assurance that a managed service has gone through security reviews and penetration tests. They can be sure that stability testing has been done. So, a company using a managed service knows there is a certain level of security associated with the offering.

Scalability is another way that managed events services help. Companies don’t have to learn how to scale their implementation. The provider handles the scalability. And hopefully, the provider also provides guidance on how to use that managed service in the most scalable way for specific use cases.

Cost is another way managed events services can help. Managed services are not necessarily cheaper but may lower overall costs. For example, companies don’t have to educate their workforce or hire new employees to work on these managed services. They can start with a small Kafka cluster and evaluate it. There is no upfront cost to get to that point. In contrast, going a do-it-yourself route requires educating people, putting a lot of effort into the start of the journey, and then figuring out if it’s worth it.

The main point is that using a managed service, companies remove the operations part from the equation and can focus on the things they want to build on top of event services.

RTInsights: What does Red Hat offer in this area?

Doyle: Red Hat is known for RHEL or Red Hat Enterprise Linux. And nowadays, the core cloud platform of our company is OpenShift, which is our Kubernetes distribution for enterprises. We offer OpenShift as a managed service on various cloud providers.

In the context of event-driven architecture, companies want to run event consumers and producers, apply logic to the events, and do reasoning using analysis. Those are all workloads that need to run somewhere, and a container platform like OpenShift is an ideal environment to do that.

Running EDA workloads on OpenShift gives companies intrinsic scalability at the container level within the cloud. Alternatively, a company can run it in a data center if they want to manage it on their own.

OpenShift is the cornerstone of most of the things we do. This year, we launched a new set of managed cloud services that are natively integrated with OpenShift, offering a streamlined user experience regardless of your choice of cloud vendor.

In 2021, Red Hat introduced three new managed cloud services: Red Hat OpenShift API Management, a fully managed API management platform; Red Hat OpenShift Streams for Apache Kafka, a fully managed Kafka service; and Red Hat OpenShift Data Science, a fully managed platform for containerized machine learning models.

Red Hat OpenShift Streams for Apache Kafka is a managed streaming platform based on Apache Kafka that allows companies to run a Kafka service in the cloud, managed by us. A company just has to go to our console at cloud.redhat.com and request a Kafka instance, and we'll provision it for them. They get a full Kafka cluster, not a stripped-down Kafka-as-a-Service offering, which they can use to run their Kafka workloads.

There are already several cloud vendors that provide a managed Kafka service. One thing we're aiming to do ties back to addressing the challenges that enterprises have with doing EDA. It's not so much about Kafka. Kafka is an important piece of the puzzle, but the real value we want to add on top of that is making the Kafka service integrate in the best way possible with a container platform like OpenShift. So, we can focus on the workloads developed by users that produce and consume events. That's where we want to differentiate from other vendors that offer a Kafka service but pursue more of a pure-play strategy.

RTInsights: How is Red Hat planning to extend the Kafka ecosystem?

Doyle: With our service, we want to focus on the developer ecosystem and the workload ecosystem on containers with OpenShift. OpenShift Streams for Apache Kafka and OpenShift API Management are the first services that we are providing as a managed service. We’re also working on others, starting with the ones in the Kafka ecosystem but planning on expanding that field. Some of the major ones are:

Service Registry is a managed service based on the upstream Apicurio project, which is somewhat like Confluent's schema registry. It gives companies a managed environment to upload, govern, and manage the life cycle of the schemas used by events.

The issue it addresses involves data structures and data types. Event consumers and producers need to adhere to certain data structures to produce and consume events as clients. That's the case even when schemas are evolving, and they evolve all the time. There are always new requirements where a company has to add new fields to its data, accommodate new cases, or add capabilities in other parts of the enterprise. Any of those changes forces updates to data schemas and structures.

With Service Registry, the platform provides governance and life cycle management over schemas. So, a company knows the format of the data being pushed to event consumers and when a consumer cannot consume those events because they don't comply with the schema.

If a dynamic consumer or dynamic client is used, that entity can download the new version of the schema. And that new version provides a proper way to deal with the data structure and lifecycle governance within an architecture.
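The idea of schema-aware consumption can be sketched in a few lines. This is plain Python, not the Apicurio or Service Registry API; the schema versions and field sets below are hypothetical:

```python
# Hypothetical schema versions a registry might hold for an "orders" topic.
# Version 2 evolved from version 1 by adding a currency field.
SCHEMAS = {
    1: {"order_id", "amount"},
    2: {"order_id", "amount", "currency"},
}

def conforms(event, version):
    """True if the event carries exactly the fields of the given schema version."""
    return set(event) == SCHEMAS[version]

def consume(event, known_version):
    """Process an event only if it matches the schema version this consumer knows."""
    if conforms(event, known_version):
        return "processed"
    # A dynamic client could fetch the newer schema from the registry here
    # instead of rejecting the event outright.
    return "rejected: schema mismatch"
```

A consumer still on version 1 would process `{"order_id": 1, "amount": 5.0}` but reject an event produced under version 2, which is exactly the mismatch a registry makes visible and manageable.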

A service based on our connectors technology: We have Red Hat Integration, which is based on the well-known and very popular community project Apache Camel. We're bringing those Camel connectors to the managed environment as well, starting with connectors for Kafka and other important systems. These connectors will get events from third-party systems and put them into Kafka, and will also consume events from Kafka and push them to other systems. In the process, we can apply logic such as transformations and routing.
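Conceptually, a connector pipeline takes a record from a source system, transforms it, and routes it to a sink. The following is an illustrative plain-Python sketch of that flow, not the Camel connector API; the field names and sink names are made up:

```python
def transform(record):
    # Normalize source field names before handing the event downstream.
    return {"id": record["ID"], "status": record["Status"].lower()}

def route(event):
    # Content-based routing: failures go to an alerts sink, the rest to audit.
    return "alerts" if event["status"] == "failed" else "audit"

sinks = {"alerts": [], "audit": []}

def run_pipeline(source_records):
    """Move records from a source through transform and routing into sinks."""
    for record in source_records:
        event = transform(record)
        sinks[route(event)].append(event)

run_pipeline([{"ID": 1, "Status": "OK"}, {"ID": 2, "Status": "FAILED"}])
```

In a managed connectors service, the same transform-and-route logic runs between external systems and Kafka topics, with the provider operating the pipeline itself.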

Change data capture: Still another interesting area, which comes from the perspective of "Hey, you never start with a greenfield," is a project called Debezium for change data capture. It allows companies to capture changes in a database and make those changes available as events pushed to Kafka.
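The essence of change data capture is turning database row changes into a stream of insert/update/delete events. Debezium does this by reading the database's transaction log; the sketch below only simulates the idea by diffing two table snapshots keyed by primary key, and is not the Debezium API:

```python
def capture_changes(before, after):
    """Emit insert/update/delete events by comparing two table snapshots."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append({"op": "insert", "key": key, "after": row})
        elif before[key] != row:
            events.append({"op": "update", "key": key,
                           "before": before[key], "after": row})
    for key, row in before.items():
        if key not in after:
            events.append({"op": "delete", "key": key, "before": row})
    return events

old = {1: {"name": "alice"}, 2: {"name": "bob"}}
new = {1: {"name": "alice-2"}, 3: {"name": "carol"}}
changes = capture_changes(old, new)
```

Each emitted event could then be published to a Kafka topic, letting downstream consumers react to database changes without querying the database itself.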

RTInsights: Could you share some examples of how customers are using the managed event services?

Doyle: There are some in different areas.

For FSI [financial services institutions], we've seen a lot of interest from organizations wanting to develop applications that can predict a "next best action." Anytime a customer does something, whether buying a new product, entering a bank, placing a support request, or doing something in their app or store, these institutions want to capture those events and make sure they hold the customer's attention. They also want to identify what they need to do to serve that customer more personally.

Those event streams are super important when a company wants to take action in real time. Suppose somebody is doing something in an app, and the company has a new offer that could suit them better or knows that their financial status is X. In that case, the user can immediately be given a suggestion. An action might be to show them, "Hey, you're now doing this, but maybe you can do this thing cheaper because we've got this new product over here that might be of interest." Or, "Hey, you opened a bank account, but now I see that your son or daughter just turned 12, and we've got this great offering for a child account. And if you apply for that, we give you a discount on the one you have, and you can have a family package." It's that kind of thing that can be supported with event services. Insurance companies and banks are not there yet, but they're looking at investing in those kinds of technologies.

Manufacturing is another area where there is interest in managed event services. We’ve had discussions and conversations with automotive and car parts manufacturing companies. They were using events and event-driven architectures to recognize changes and anomalies in their manufacturing process.

For example, when a machine creates a part and the time to create that part starts increasing above a certain threshold, they were sending out events into their systems to recognize and signal, “Hey, we need to proactively service that machine now.” So, they are working on improving their manufacturing process and making sure that nothing breaks down. They don’t want to end up with a manufacturing line that doesn’t work anymore and must be taken out of service to repair the problem. Events are used to increase efficiency and throughput by doing proactive maintenance.
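The manufacturing pattern Doyle describes, emitting an event when a machine's cycle time drifts above a threshold, can be sketched in a few lines. This is an illustrative plain-Python sketch of the detection logic only; the threshold, machine name, and event shape are hypothetical:

```python
# Hypothetical threshold: a part normally takes well under 30 seconds to make.
THRESHOLD_SECONDS = 30.0

def check_cycle_times(machine_id, cycle_times):
    """Return a proactive-maintenance event for each cycle that exceeds the threshold."""
    return [
        {"type": "service-needed", "machine": machine_id, "cycle_time": t}
        for t in cycle_times
        if t > THRESHOLD_SECONDS
    ]

events = check_cycle_times("press-1", [25.0, 31.5, 28.0])
```

In production, the cycle-time measurements would stream into Kafka and the resulting events would trigger a maintenance workflow before the line actually breaks down.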

Salvatore Salamone

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
