A Perfect Pairing: EDA and ChatGPT


Partnering ChatGPT with EDA could bring a range of business benefits from improved response time and reduced outages to lowering energy consumption. The possibilities are endless – it’s a perfect pairing!

Artificial intelligence and machine learning could transform the way we live, learn, and complete tasks by 2030, and are estimated to boost the global economy by $15.7 trillion. Recently, OpenAI’s revolutionary ChatGPT has quickly become very popular in the technical community, with its large range of inventive applications set to constantly improve. Event-Driven Architecture (EDA) can take this generative chatbot to the next level and unlock unexpected business-based potential for ChatGPT.

Let’s look at how the two work together.

ChatGPT is quickly becoming prominent in our daily lives. Whether it’s helping us draft emails and articles or generating blog ideas, its capabilities are ever-growing. In only its second month after launch, ChatGPT reached 100 million active users in January, according to a study by UBS, smashing previous records to become the fastest-growing consumer application ever.

No more bumps in the road for ChatGPT with EDA

However, ChatGPT and generative AI have some drawbacks to overcome before reaching their full potential. These can be addressed by incorporating Event-Driven Architecture (EDA) to act as the information link between systems that “publish” events and the systems that express interest in that information by subscribing to “topics.” Applications built with EDA are decoupled into components that communicate through events, which makes them more responsive. So, when ChatGPT is invoked, EDA can take in all the requests and filter them, meaning ChatGPT will be more responsive, reduce its energy consumption, and even open new opportunities for B2B and B2C eCommerce businesses.
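The publish/subscribe pattern at the heart of EDA can be illustrated with a minimal in-memory sketch. Everything here is hypothetical (the `EventBroker` class, the topic name, the event payload are illustrative, not any real product's API); a production broker such as Solace PubSub+ or Kafka adds persistence, routing, and delivery guarantees on top of this core idea.

```python
from collections import defaultdict


class EventBroker:
    """Minimal in-memory broker: publishers emit events on topics,
    and subscribers register interest to receive matching events."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A system "shows interest" by subscribing to a topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic
        for callback in self._subscribers[topic]:
            callback(event)


broker = EventBroker()
received = []
broker.subscribe("chatgpt/responses", received.append)
broker.publish("chatgpt/responses", {"answer": "42"})
```

The publisher never knows who the subscribers are; the broker decouples the two sides, which is what lets EDA filter and fan out requests on ChatGPT's behalf.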


5 reasons why EDA can take ChatGPT to the next level

1) Improve responsiveness with automatic answering

Today, ChatGPT operates in what we techies call a “request/reply” way. Ask, and ye shall receive, you might say. So now imagine if ChatGPT could proactively send you something it knows you’d be interested in!

For example, say you use ChatGPT to summarize and note action items from a Zoom meeting with a dozen participants. Instead of each participant raising a query, EDA would allow ChatGPT to send the notes to all attendees at the same time, including those who missed the meeting. Everyone would be automatically and instantly up-to-date on meeting outcomes, requiring significantly less load on ChatGPT since it proactively sends one message to a dozen recipients instead of satisfying a bunch of request/reply interactions over time, thereby improving service levels for users.

Any group activity needing the same suggestions, facilitated by ChatGPT, can benefit from this capability. For instance, teams working jointly on a codebase. Rather than ChatGPT suggesting changes/improvements to every developer in their IDE, users would have the IDE “subscribe” to suggestions, and the underlying EDA technology would push them out to all subscribed developers when they launch the codebase.
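The savings from this one-to-many push can be sketched in a few lines. The `summarize_meeting` function below is a hypothetical stand-in for a single ChatGPT call, and the attendee addresses are invented; the point is simply that one model invocation fans out to a dozen subscribers instead of a dozen separate request/reply calls.

```python
calls_to_model = 0


def summarize_meeting(transcript):
    """Hypothetical stand-in for one ChatGPT summarization request."""
    global calls_to_model
    calls_to_model += 1
    return f"Summary of: {transcript}"


# Twelve attendees subscribed to the meeting-notes topic
subscribers = [f"attendee-{i}@example.com" for i in range(12)]
inboxes = {s: [] for s in subscribers}

# Event-driven fan-out: ONE model call, pushed to every subscriber
summary = summarize_meeting("Q3 planning meeting")
for attendee in subscribers:
    inboxes[attendee].append(summary)
```

With request/reply, `calls_to_model` would climb to 12 as each attendee asked separately; with fan-out it stays at 1 while every inbox still receives the notes.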

2) A green and energy-efficient future for ChatGPT

ChatGPT is very resource-intensive, and therefore expensive, from a processing/CPU perspective, and it requires special chips called graphics processing units (GPUs). And it uses quite a lot of them. The extensive GPU fleet (now estimated to be upwards of 28,936 GPUs) required to train the ChatGPT model and process user queries incurs significant costs, estimated at between $0.11 and $0.36 per query.

And let’s not overlook the environmental costs of the model. The high power consumption of GPUs contributes to energy waste, with reports from data scientists estimating ChatGPT’s daily carbon footprint to be 23.04 kgCO2e, which matches other large language models such as BLOOM.

However, the report explains, “the estimate of ChatGPT’s daily carbon footprint could be too high if OpenAI’s engineers have found some smart ways to handle all the requests more efficiently.” So, there is clearly room for improvement on that carbon output.

By implementing EDA, ChatGPT can make better use of its resources by only processing requests when they are received instead of running continuously.

3) EDA will make ChatGPT outages a thing of the past

ChatGPT needs to handle a high volume of incoming requests from users. The popularity, rapid growth, and unpredictability of ChatGPT mean it is frequently overwhelmed as it struggles to keep up with demand that can be extremely volatile and what we call ‘bursty.’ Today, this leads to “Sorry, can’t help you” error messages for both premium and free ChatGPT users. These recent ChatGPT outages indicate how saturated the system is becoming as it struggles to rapidly scale up to meet its ever-increasing traffic and compete with new rivals such as Google Bard. So, where does EDA come in?

In the event of a ChatGPT overload, implementing EDA can buffer requests and service them asynchronously across multiple event-driven microservices as the ChatGPT service becomes available.

With decoupled services, if one service fails, it does not cause the others to fail.

The event broker, a key component of event-driven architecture, is a stateful intermediary that acts as a buffer, storing events and delivering them when the service comes back online. Because of this, new service instances can be added quickly without causing downtime for the whole system, improving both availability and scalability.
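This buffering behavior can be shown with a minimal sketch, assuming a single consumer per broker for simplicity (the `BufferingBroker` class and its method names are invented for illustration, not any real broker's API). Events published while the downstream service is offline are held in order and drained the moment it reconnects, so no request is lost to an outage.

```python
from collections import deque


class BufferingBroker:
    """Stateful intermediary: stores events while the consumer is
    offline and delivers the backlog when it reconnects."""

    def __init__(self):
        self._queue = deque()
        self._consumer = None

    def publish(self, event):
        if self._consumer is None:
            self._queue.append(event)   # consumer down: buffer, don't fail
        else:
            self._consumer(event)       # consumer up: deliver immediately

    def attach_consumer(self, consumer):
        self._consumer = consumer
        while self._queue:              # drain the backlog in order
            consumer(self._queue.popleft())


broker = BufferingBroker()
broker.publish("request-1")             # service unavailable: buffered
broker.publish("request-2")

handled = []
broker.attach_consumer(handled.append)  # service comes back online
broker.publish("request-3")             # now delivered directly
```

From the publisher's point of view, nothing changed during the outage; the broker absorbed the burst, which is exactly the decoupling that keeps one failing service from taking down the rest.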

With EDA assistance, users of ChatGPT services across the globe can ask for what they need at any time, and ChatGPT can send them the results as soon as they are ready. This will ensure that users don’t have to re-enter their query to get a generative response, improving overall scalability and reducing response time.

4) ChatGPT will take the AI e-commerce marketplace by storm and embed itself in business operations

AI plays a critical role in the e-commerce marketplace – in fact, it is projected that the e-commerce AI market will reach $45.72 billion by 2032. So, it’s no surprise that leading e-commerce players are trying to figure out how to integrate ChatGPT into their business operations. Shopify, for instance, has developed a shopping assistant with ChatGPT that is capable of recommending products to users by analyzing their search engine queries.

EDA has the potential to enhance the shopping experience even further and help B2C and B2B businesses learn more about their customers. By tracking key events at high volume from e-commerce platforms, businesses can understand patterns in customer behavior, such as which items are the most profitable in certain regions and which factors influence purchasing decisions. This information can then be sent to a datastore for the ChatGPT machine learning model to predict customer behavior and make personalized product recommendations. This is only the beginning of the development of these sorts of models based on ChatGPT.
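A minimal sketch of that event-tracking step, with invented purchase events and field names (no real e-commerce platform or schema is implied): purchase events arriving on a stream are aggregated by region and item before being handed off to a datastore for the recommendation model.

```python
from collections import Counter

# Hypothetical purchase events as they might arrive on an event stream
events = [
    {"item": "headphones", "region": "EU", "price": 120},
    {"item": "headphones", "region": "EU", "price": 120},
    {"item": "keyboard", "region": "US", "price": 80},
]

# Aggregate revenue per (region, item) before handing off to a datastore,
# surfacing which items are most profitable in which regions
revenue = Counter()
for event in events:
    revenue[(event["region"], event["item"])] += event["price"]
```

In this toy data, headphones in the EU account for 240 in revenue versus 80 for keyboards in the US; at production volume, the same aggregation (run continuously over the event stream) is what a downstream model would consume to personalize recommendations.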

5) Satisfy rising demand levels with an event mesh and save vital resources

To serve ChatGPT’s global user base, the platform needs a supporting software architecture capable of coping with increasing demand while still distributing data efficiently – this is the perfect job for an event mesh.

An event mesh is an architecture layer made up of a network of event brokers that can link events from one application to any other application, no matter where each is deployed. This removes the step of application logic filtering ChatGPT results; instead, data is sent on demand directly to interested subscribers, making the overall user experience better with improved responsiveness and reducing the strain on network and compute resources.
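The mesh idea can be sketched with two toy broker nodes in different deployments, assuming naive flooding between peers (the `MeshBroker` class, node names, and topic are all invented for illustration; real meshes use smarter routing than flooding). An event published at one node reaches a subscriber attached to another, with neither side knowing where the other is deployed.

```python
from collections import defaultdict


class MeshBroker:
    """One node in a toy event mesh: delivers locally when it has a
    subscriber, and floods the event to its peer brokers."""

    def __init__(self, name):
        self.name = name
        self.peers = []
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event, _seen=None):
        seen = _seen if _seen is not None else set()
        if self.name in seen:
            return                      # already visited: avoid loops
        seen.add(self.name)
        for callback in self.subscribers[topic]:
            callback(event)             # deliver to local subscribers
        for peer in self.peers:
            peer.publish(topic, event, seen)  # forward across the mesh


# Two brokers in different deployments, linked into a mesh
cloud = MeshBroker("cloud")
edge = MeshBroker("edge")
cloud.peers.append(edge)
edge.peers.append(cloud)

got = []
edge.subscribe("answers", got.append)
cloud.publish("answers", "result-ready")  # published in the cloud...
```

...and delivered at the edge: the subscriber receives the event without the publisher ever addressing it, which is what lets a mesh span clouds, data centers, and regions transparently.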

EDA takes ChatGPT to the next level

Despite already smashing records, the journey is only just beginning for ChatGPT. And with its new features, ChatGPT has a bright future. Partnering it with EDA could bring a range of business benefits to both B2C and B2B organizations – from improved response time and reduced outages to lowering energy consumption. The possibilities are endless – it’s a perfect pairing!

About Thomas Kunnumpurath

Thomas Kunnumpurath is the Vice President of Systems Engineering for Americas at Solace where he leads a field team across the Americas to solution the Solace PubSub+ Platform across a wide variety of industry verticals such as Finance, Retail, IoT and Manufacturing. Prior to joining Solace, Thomas spent over a decade of his career leading engineering teams responsible for building out large scale globally distributed systems for real time trading systems and credit card systems at various banks.
