Organizations that opt to build their own EDA will face numerous challenges related to integration, deployment, performance, scalability, security, and more.
Many modern applications are event-driven. They use real-time data, which is analyzed to gain insights that, in turn, trigger some action. Examples abound in applications ranging from customer engagement to logistics planning to financial transaction processing. Increasingly, such applications need to be developed and run on an event-driven architecture (EDA).
Event-driven applications must accommodate streaming data and deliver insights in near-real time. An EDA helps ensure performance and ties together the different elements that typically make up such applications.
Intermingled with the need for an EDA is the fact that many organizations have adopted a cloud-native approach to support the development and deployment of their applications. In particular, application architectures are evolving to a model of distributed, modular, and portable components that can be easily deployed and run across cloud infrastructure. Companies can run those elements on-premises as well as on public or private clouds.
Such a hybrid cloud approach reduces infrastructure costs and increases the operational efficiency of applications. EDAs are ideal for hybrid cloud applications. In such an application, a microservice asynchronously publishes an event when one of its entities changes. Other microservices subscribe to those events and update their own entities, which in turn leads to subsequent events being published. Because event-driven systems are asynchronous, they are generally more responsive than traditional request-driven architectures: services are activated by incoming events rather than blocking on calls. Such an architecture enables loose coupling among services because the event producer does not need to know about the consumers or how the event will be processed.
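The chain of events described above can be sketched in a few lines. This is a minimal, in-process stand-in for an event broker, with hypothetical service and event names; in a real deployment a platform such as Kafka would carry the events between services.

```python
# Minimal sketch of event-driven microservice coupling (hypothetical
# service/event names; a real system would use a broker such as Kafka).
from collections import defaultdict

class EventBus:
    """In-process stand-in for an event channel/broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The producer does not know who consumes the event.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []

# A hypothetical inventory service reacts to OrderPlaced and, after updating
# its own entity, publishes a follow-on event.
def reserve_stock(order):
    audit_log.append(f"reserved {order['sku']}")
    bus.publish("StockReserved", order)

def notify_customer(order):
    audit_log.append(f"notified customer {order['customer']}")

bus.subscribe("OrderPlaced", reserve_stock)
bus.subscribe("StockReserved", notify_customer)

# The order service publishes when its entity changes; the chain follows.
bus.publish("OrderPlaced", {"sku": "A-100", "customer": "c42"})
```

Note that the publisher of `OrderPlaced` never references the inventory or notification logic directly; either handler could be replaced without touching the producer.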
Additionally, EDAs address the limitations of commonly used synchronous application programming interfaces (APIs) because they provide asynchronous communication and reactive programming approaches for effective fault tolerance in highly distributed microservices architectures.
EDAs are based on asynchronous, non-blocking communication: a service can release resources rather than hold them while waiting for a response. This is especially important for cloud- and container-native development, which demands high agility and flexibility from scalable, distributed cloud microservices environments.
An EDA is made up of event producers and event consumers. An event producer detects or senses an event and represents it as a message. It does not know the consumer of the event or the outcome of the event.
After an event has been detected, it is transmitted from the event producer to the event consumers through event channels, where an event processing platform processes the event asynchronously. Event consumers need to be informed when an event has occurred.
As such, an EDA greatly enhances decoupling from the communication standpoint by separating sender/publisher and receiver/subscriber objects. Because multiple subscribers can receive events simultaneously, the system can achieve lower latency and higher throughput with the right event transport. By subscribing to events, the system can react in real time to conditions as they arise.
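Simultaneous delivery to multiple subscribers can be sketched with standard-library threads and queues (hypothetical names; a broker plays this role in production). Each subscriber drains its own queue independently, so handling proceeds in parallel rather than serially.

```python
# Sketch of fan-out delivery: one event channel pushes each event to every
# subscriber at once, each subscriber draining its own queue on a thread.
import queue
import threading

def fan_out(event, subscriber_queues):
    """Deliver one event to every subscriber queue."""
    for q in subscriber_queues:
        q.put(event)

def run_subscriber(name, q, results, lock):
    while True:
        event = q.get()
        if event is None:          # shutdown sentinel
            break
        with lock:
            results.append((name, event))

subscriber_queues = [queue.Queue() for _ in range(3)]
results, lock = [], threading.Lock()
threads = [
    threading.Thread(target=run_subscriber, args=(f"sub-{i}", q, results, lock))
    for i, q in enumerate(subscriber_queues)
]
for t in threads:
    t.start()

fan_out("price-updated", subscriber_queues)

for q in subscriber_queues:        # signal shutdown
    q.put(None)
for t in threads:
    t.join()
```

All three subscribers receive the same event without the publisher enumerating them anywhere in its own logic.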
Points to consider when building your own EDA
Many event-driven applications use Apache Kafka as a distributed data streaming platform for event processing. Sitting between event producers and consumers, Kafka acts as an event broker. It can handle publishing, subscribing to, storing, and processing event streams in real time, and it offers fault tolerance, high throughput, and low latency. Kafka is often used in conjunction with the container management system Kubernetes, which handles the deployment and operation of containers across hosts.
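Kafka's core abstraction can be modeled in a few lines. The toy class below is not the Kafka API; it is a simplified sketch of the idea that a topic is an append-only log and each consumer tracks its own read offset, which is what decouples publishing, storage, and subscribing.

```python
# Toy model of a Kafka-style topic (illustrative only, not the Kafka API):
# an append-only log plus an independent read offset per consumer.
class TopicLog:
    def __init__(self):
        self._log = []       # append-only record of events
        self._offsets = {}   # consumer name -> next offset to read

    def produce(self, event):
        self._log.append(event)

    def consume(self, consumer):
        """Return events this consumer has not yet seen; advance its offset."""
        offset = self._offsets.get(consumer, 0)
        events = self._log[offset:]
        self._offsets[consumer] = len(self._log)
        return events

topic = TopicLog()
topic.produce({"type": "payment", "amount": 10})
topic.produce({"type": "payment", "amount": 25})

first = topic.consume("fraud-checker")   # sees both events
topic.produce({"type": "payment", "amount": 7})
second = topic.consume("fraud-checker")  # sees only the new event
late = topic.consume("auditor")          # a new consumer replays the full log
```

Because the log is retained rather than consumed destructively, a late-joining consumer can replay history, which is one reason Kafka suits fault-tolerant, loosely coupled systems.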
Organizations that decide to use open-source Kafka and Kubernetes as the foundation of their EDA will need properly trained staff to manage the process and will need to dedicate resources to the undertaking.
To start, the staff must integrate the various components (container orchestrator, event-processing engine, etc.) into a cohesive platform upon which event-driven applications can be deployed. Additionally, the staff must handle many tasks and chores related to installation, configuration, and upgrades. In most cases, companies use a variety of third-party tools to help. But the use of such tools requires training and time for administration.
Another issue to consider is that there may not be the needed internal expertise to carry out the basic operations needed to implement an EDA. Staff must be familiar with techniques and best practices related to integration, fine-tuning performance, security, and more. Even with such skills, developers will need to spend time on processes and tasks that are not directly related to their main jobs.
One additional issue that needs to be addressed is how to scale any deployment. It is one thing to develop and pilot an application, but quite another to ramp it up for use in production. An event-driven application for a global organization needs resilient infrastructure that can be rapidly scaled up as use grows and more users access the application.
Given these issues, many companies are looking for alternatives to a do-it-yourself approach to EDA. As has long been the case with open source, the core offerings can be complemented by turning to a technology partner that adds expertise and enterprise-class features. Typically, a technology partner's solution builds in the integration, performance tuning, scalability, and security work that do-it-yourselfers would otherwise spend significant time on themselves.
In the case of EDA, a leading option for a technology partner is Red Hat, which offers AMQ Streams, with simplified deployment on Red Hat OpenShift, for event-driven companies that want to use Kafka. AMQ Streams is a massively scalable, distributed, high-performance data streaming platform based on Kafka. It offers a distributed backbone that allows microservices and other applications to share data with high throughput and low latency. Additionally, AMQ Streams includes pre-built container images for Apache Kafka and ZooKeeper to streamline deployment, as well as operators for managing and configuring Apache Kafka clusters, topics, and users on top of Kubernetes.