Composing the Containerized Cloud with Kubernetes and Istio

An overview of how cloud containers are enabling a new ‘package’ of application advantages with burgeoning technologies, including Kubernetes and Istio.

Computing has always had an essentially composable nature. The way we assemble technology functions grows out of our ability to assemble software code in different ways: as long as we stay aligned to the structured syntax of the languages we develop in, we can compose, orchestrate, and create in an almost infinite variety of combinations. Kubernetes and Istio can help. Here’s how.

Now that we have evolved into the modern era of cloud computing technologies, that core composable advantage is still present. In fact, it is more pronounced than ever.

We now build our cloud systems with an increasing proportion of containers, a term that describes a defined segment of computing logic packaged with all the components needed for an application to execute inside a given workflow. Those components typically include the application code itself, the runtime environment, core system libraries, and a selection of system tools.
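
As a minimal sketch of what running such a packaged unit looks like, the Docker Compose file below declares a single containerized service; the service name, image, and port are hypothetical placeholders rather than anything from a real deployment:

```yaml
# docker-compose.yml -- minimal sketch; 'web' and the image tag are
# illustrative assumptions, not a real application.
services:
  web:
    image: example/web-app:1.0   # app code, runtime, and system libraries travel inside the image
    ports:
      - "8080:8080"              # map the container's port to the host
    environment:
      APP_ENV: production        # runtime configuration stays outside the packaged image
```

Everything the application needs to execute ships inside the image, which is why the same container behaves identically on a laptop, a test rig, or a production cluster.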

See also: Marrying OpenAPI and Kubernetes to Prevent Scalability Bottlenecks

Container evolution: The path to Kubernetes

Container technologies have evolved in stages over the past two decades. Key moments in the brief history of container time include, of course, Docker’s mainstream coming of age back in 2013. This DNA-level container platform was envisaged to take away the mundane tasks associated with setting up containers and move developers on through the build, share, and run phases that typically follow a container project’s inception.

Docker’s easy-to-use interface, its open source freedom factor, and its ability to package, provision, and run container technology helped ensure its uptake, popularization, and proliferation. According to Docker, “A container is a unit of software that packages code and its dependencies, so the application runs quickly and reliably across computing environments.”

Although Docker saw 10,000 developers sign up in its first month of release, and the project is still very much in existence, it lacked the container management capabilities that later innovations would bring. Perhaps inevitably, Google had been working on its own approach to helping developers manage the collective life cycles of their increasingly containerized workloads, and in June 2014, Google introduced Kubernetes.

See also: Why Observability is Essential for Kubernetes

Orchestrated containers, a concerted effort

Kubernetes has filled the essential void in container orchestration. Why? Because a bunch of containers on their own is nice but not very useful unless we can coalesce a selection of containerized services in the right order and sequence so that they work together as an application.

We know that one container may execute one or multiple services, so if a system runs many container instances, we need a means to monitor and manage that universe of abstracted, virtualized logic. That’s Kubernetes.

We can also use Kubernetes to help automate the way we scale container deployments (upwards, downwards, and horizontally outwards for new use cases) when software stack demands change. It can also automate container changeover and switching functions in the event of one instance failing or needing replacement; this is an adaptable, useful technology.
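
As a hedged illustration of both behaviors (all names and numbers here are hypothetical), a Deployment manifest like the following asks Kubernetes to keep three replicas of a container running and to replace any instance that fails its health check:

```yaml
# Sketch of a Deployment; 'web-app' and the image are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                      # scale up or down by changing this number
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0
          livenessProbe:           # a failing probe triggers automatic replacement
            httpGet:
              path: /healthz
              port: 8080
```

Scaling can also be driven automatically, for example by pairing the Deployment with a HorizontalPodAutoscaler, rather than by editing the replica count by hand.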

A cluster of Kubernetes nodes and pods

A Kubernetes cluster consists of a master node, worker nodes, and pods. The master node, known as the control plane, runs a collection of components that control, schedule, and communicate with the worker nodes to look after the cluster’s entire lifecycle.

The worker nodes, often referred to as the data plane, constantly exchange information with the master node to find out whether there is new work to do.

We also have pods. These act as wrappers around containers and are hosted on the worker nodes. If developers need to scale an application, they add or remove pods. It is widely accepted best practice to run one container per pod, since Kubernetes manages pods rather than managing containers directly. A Kubernetes cluster has at least one master node and one worker node.
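
A minimal Pod manifest (the names are assumed for illustration) makes that wrapper relationship concrete; the pod is the unit Kubernetes schedules and manages, with the container declared inside it:

```yaml
# Sketch of a single-container pod; names and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web-app
spec:
  containers:                  # one container per pod, per the best practice above
    - name: web
      image: example/web-app:1.0
      ports:
        - containerPort: 8080  # the port the application listens on
```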

Istio for observable traffic management

There’s another chapter to write here, and the entire story is probably still not finished. Businesses need not only to scale their IT stacks but also to optimize how traffic flows between microservices and throughout the container universe, and they need to do so with minimal manual intervention, in the most automated way possible.

Enter Istio. This technology works to modernize microservices-based apps and backends by securing, connecting, and monitoring the functions, containers, and other moving parts of the system. Istio brings standard, universal traffic management, telemetry, and security to complex deployments and helps organizations run distributed, microservices-based apps anywhere.
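
To make the traffic-management idea concrete, here is a hedged sketch of an Istio VirtualService (the service name and version subsets are assumptions for illustration) that splits traffic between two versions of the same microservice, a common canary-release pattern:

```yaml
# Sketch of weighted traffic splitting; 'web-app', 'v1', and 'v2' are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app
  http:
    - route:
        - destination:
            host: web-app
            subset: v1
          weight: 90           # keep 90% of traffic on the stable version
        - destination:
            host: web-app
            subset: v2
          weight: 10           # canary 10% onto the new version
```

The v1 and v2 subsets would be defined in a companion DestinationRule, a sketch of which appears a little further on.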

Istio also augments the native Kubernetes container orchestration tool by injecting a sidecar container into each pod to handle security, management, and monitoring. The project was designed to enable container debugging to eradicate problematic code, illustrating errors in a waterfall-type diagram. It also provides core observability metrics to track system latency, enables workload balancing to navigate around constrained resources, and provides circuit breaking to keep individual failures from cascading into system crashes.
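
The circuit-breaking behavior mentioned above can be sketched with a DestinationRule (again, names and thresholds are illustrative assumptions): outlier detection ejects instances that keep returning errors, so failures do not cascade through the system:

```yaml
# Sketch of circuit breaking via outlier detection; values are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: web-app
spec:
  host: web-app
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5    # eject an instance after five consecutive 5xx errors
      interval: 30s              # how often instances are evaluated
      baseEjectionTime: 1m       # how long an ejected instance stays out of rotation
  subsets:                       # the version subsets referenced by the VirtualService above
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```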

Although Kubernetes takes criticism for its complexity and steep learning curve, the technology itself is now arguably on a path to standardization. There is still work to do to ensure application robustness and redundancy, achieve finer-grained traffic division, and manage certain elements of security, but overall, the outlook is positive.

Containers still setting sail

Today, we know that containers work in mostly harmonious unity alongside other new architectural styles of software development, such as microservices, often managed via DevOps practices and methodologies. These small, single-purpose application services integrate and communicate with each other through Application Programming Interfaces (APIs), meaning that each microservice can be updated or scaled independently.
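
In Kubernetes terms, that independent, API-driven communication is typically mediated by a Service, which gives a set of pods a stable network identity regardless of how many replicas sit behind it; a minimal sketch with assumed names:

```yaml
# Sketch of a Service fronting a microservice; names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app         # route to whichever pods carry this label
  ports:
    - port: 80           # the stable port other microservices call
      targetPort: 8080   # the port the container actually listens on
```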

Kubernetes helps eliminate infrastructure lock-in by providing core capabilities for containers without imposing restrictions, letting us combine features within a Kubernetes platform, such as pods and services, while Istio adds observability, security, and reliability to the distributed applications built on top.

Edison revolutionized our lives by decoupling early iterations of the lightbulb from their previously hard-wired bases, making the bulb removable, more efficient, longer-lasting, and economically viable. He really was onto something, don’t you think?

About Alessandro Chimera

Alessandro Chimera is the Director of Digitalization Strategy and an Industry Consultant at TIBCO, where he develops and communicates next-generation digitalization strategies and points of view. He provides guidance, empowering customers to digitally transform their businesses to innovate and grow. Alessandro collaborates with partners, analysts, and various internal teams, in addition to publishing white papers, articles, and blogs as part of TIBCO's global thought leadership team.
