
Marrying OpenAPI and Kubernetes to Prevent Scalability Bottlenecks


Exposing your services as APIs with the OAS and managing spikes in demand with Kubernetes can help ensure that your users never notice a disruption because, after all, “the show must go on.”

Written by Dan Ciruli
Mar 6, 2023

Like most Taylor Swift fans, I was disappointed by the Ticketmaster crash during the presale for her upcoming tour. The presale fiasco reminds me of the bottlenecks that all enterprises and organizations encounter on their digital transformation journey as they strive to balance the delivery of stable products with the rapid application development needed to drive innovation and meet evolving customer needs. At some point, their infrastructure will be calling out, “It’s me! Hey! I’m the problem! It’s me!”

I was at Google when we cofounded the OpenAPI Initiative and rallied support behind the OpenAPI Specification (OAS). I’ve watched how a series of open-source and open-spec projects like OAS have transformed the technology industry. If properly implemented, these projects can help companies avoid debacles like the one experienced by Ticketmaster.

As I’m witnessing the marriage between the openness and integration enabled by OAS and the efficiency and scalability empowered by Kubernetes, I see how this marriage can produce resilient and agile applications, enabling organizations to avoid the type of service crash we saw with Ticketmaster, and perhaps the recent FAA outage that grounded thousands of flights nationwide.

A lingua franca for interoperation

The open standard and the language-agnostic interface offered by OAS greatly improve communication between developers, eliminating the need to read source code or write custom integration code and documentation when connecting to different systems. This lets developers access other platforms’ and enterprises’ APIs more efficiently and quickly provide new services to their customers. The ticketing ecosystem is one example in which the integrations made possible by APIs have enabled the industry to flourish, and consumers have more ability than ever to buy tickets (especially on the resale market!).
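
To make that concrete, here is a minimal sketch of what such a language-agnostic description looks like. The ticket-sales endpoint, path, and schema below are hypothetical, invented purely for illustration; they are not drawn from any real service:

    openapi: 3.0.3
    info:
      title: Ticket Sales API        # hypothetical service
      version: 1.0.0
    paths:
      /events/{eventId}/tickets:
        get:
          summary: List available tickets for an event
          parameters:
            - name: eventId
              in: path
              required: true
              schema:
                type: string
          responses:
            "200":
              description: Tickets currently on sale
              content:
                application/json:
                  schema:
                    type: array
                    items:
                      $ref: "#/components/schemas/Ticket"
    components:
      schemas:
        Ticket:
          type: object
          properties:
            id:
              type: string
            section:
              type: string
            price:
              type: number

A developer in any language, on any platform, can read (or generate a client from) this document without ever seeing the implementation behind it.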

The advantages of a common specification have led to widespread acceptance and use by enterprises. According to OpenAPIs.org, there are thousands of API developers using OAS to document services worldwide, and enterprises are using OAS to share, collaborate, and provide services to other enterprises and customers. This has made OAS the lingua franca for interoperation among enterprises. 

Although we think of the OpenAPI spec as the common language through which developers can speak to one another, there’s another benefit: OpenAPI is also how we speak to our API infrastructure. All modern API management infrastructure uses OAS to describe how an API will be managed. Standardizing the language we use to speak to our infrastructure is fantastic: The open, collaborative, and communicative nature of OAS makes it simpler and more efficient to manage and even replace infrastructure. 

As we standardize how we speak to and configure our infrastructure, enterprises gain the ability to choose the right solution for their problem. That means that if your old platform can’t scale the way you need it to today, it’s easier to move to a solution that can.
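
One reason this works is that OAS has a built-in extension mechanism: any field beginning with “x-” is a specification extension, which API management products use to attach their own configuration to the same document that describes the API. The extension keys in this fragment are hypothetical and not tied to any particular product; they simply sketch the idea:

    # Fragment of the same OpenAPI document shown earlier.
    paths:
      /events/{eventId}/tickets:
        get:
          summary: List available tickets for an event
          # "x-" specification extensions are the standard OpenAPI hook for
          # gateway- or vendor-specific configuration; these keys are
          # illustrative only.
          x-rate-limit:
            requests-per-minute: 600
          x-backend:
            service: ticket-inventory
            timeout-seconds: 5
          responses:
            "200":
              description: Tickets currently on sale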

See also: Are Industry-Specific APIs the New Norm?


Kubernetes scales all businesses

The Ticketmaster presale fiasco also highlights the need for enterprises to be able to scale on the back end. Scalability is critical today because digital transformation is driving higher volumes of traffic than enterprises have ever experienced. Being able to scale quickly and efficiently can mean the difference between a sold-out tour and irate fans. This is where Kubernetes comes in: it lets enterprises quickly deploy and manage cloud-native applications and, critically, scale them.

While enterprises are moving to cloud native for more scalable infrastructure and application deployments, Kubernetes’s wide adoption across the globe has made it the de facto choice for container orchestration in modern architectures. According to our recent Enterprise Kubernetes report, three-quarters (75%) of enterprises are using Kubernetes in production (40%) or in development or pre-production (35%).

Because it’s non-vendor-specific, Kubernetes has become the way we describe how a workload gets to a server, and its universality and portability mean it can be used on any cloud provider and in any data center. This means that companies like Ticketmaster can more easily than ever ensure that they are using a provider that is able to meet all their scalability needs. 


OpenAPI + Kubernetes accelerates business

Enterprises leverage Kubernetes to consistently and efficiently scale applications and services in the back-end cloud-native infrastructure, while OAS provides an open interface language that brings those services to a broader market and customer base. Together, the two provide efficient, integrated management and an operational advantage for enterprises, from infrastructure management on the back end to integration with other services and convenience for customers on the front end.

There is yet another major advantage to adding both of these technologies to a modern cloud-native strategy: scalability on both fronts at once.

I mentioned that the connections OAS brings with other businesses and services lead to more market opportunities. As traffic through APIs grows rapidly, enterprises need to scale API infrastructure and API gateways to ensure that the quality of service is not impacted. Here’s where Kubernetes can make a big difference. The inherent scalability of Kubernetes can keep enterprises agile and efficient in the face of massive traffic spikes, as its automatic scaling lets DevOps teams scale up or down to meet demand faster.

Because of this natural scalability advantage, enterprises that deploy their API gateways on Kubernetes get the same level of scalability without the fuss of figuring out another way to do it. In other words: the same technology that scales the back ends can scale the API infrastructure itself.
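
Here is a minimal sketch of what that looks like in practice: a HorizontalPodAutoscaler pointed at a (hypothetical) API gateway Deployment, so the gateway grows and shrinks with demand just like the back ends behind it:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: api-gateway               # hypothetical gateway Deployment
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: api-gateway
      minReplicas: 3
      maxReplicas: 50                 # headroom for a presale-style traffic spike
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70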

At Google, we took advantage of container-based API management infrastructure to run several of the most-used APIs on the planet (think Google Maps, Google Cloud, and the G Suite APIs). It was the combination of a standard way to describe those APIs to the world, a standard way to describe those APIs to the infrastructure, and a container-based horizontal scalability model that made it all happen.


Now is the right time to combine

Marc Andreessen said in 2011 that software was eating the world, and his prediction became a reality. In fact, Kubernetes and the OpenAPI Specification are playing a big role in enabling software to eat the world. OAS ensures that developer teams across companies and industries can communicate with one another. And Kubernetes has ushered in an era of horizontal scalability. Together, OAS and Kubernetes have standardized how we describe software services to the infrastructure on which they run, giving us the unprecedented ability to choose the right infrastructure providers to meet customer needs.

As more companies turn into software companies, the user experience will become the predictor of success in today’s digital era. Exposing your services as APIs with the OAS and managing spikes in demand with Kubernetes can help ensure that your users never notice a disruption because, after all, “the show must go on.”

Dan Ciruli

Dan Ciruli is VP of Products at D2iQ. He is a product leader who has focused on technical productivity for his entire career. At Zuora, he led the Platform product management team and was the general manager for the Zuora platform business. During his seven years at Google, his team built and managed Google's API-serving infrastructure. He was a founding member of the Open API Initiative and sat on the Istio Steering Committee, and he was also the first product manager on gRPC. His commercial products at Google included Google Cloud Endpoints and Anthos Service Mesh.
