This post was produced in partnership with Mesosphere

Mastering the Art of Container Orchestration


This post is the second in the series “Containers Power Agility and Scalability for Enterprise Apps.”

When it comes to deploying microservices based on containers, there really is strength in numbers. Containers make deploying microservices simpler than any other approach, and the more microservices an application is decomposed into, the more resilient the environment becomes: should one microservice fail, it doesn’t necessarily take down the entire application.

The challenge, though, is that the more containers there are, the more difficult they become to orchestrate and manage. To address that challenge, IT organizations today are implementing container orchestration platforms.

At a base level, container orchestration provides the mechanism through which IT organizations provision hosts for containers, instantiate sets of containers, reschedule containers that fail to run, link containers together via application programming interfaces (APIs), scale container clusters up and down by adding or subtracting containers, and expose services to machines outside the cluster. Each container encapsulates all the libraries, configuration files, and application binaries an image needs to run. Multiple containers are then employed to create a microservice that can either run on a single cluster or be distributed across multiple clusters. A container cluster is made up of at least one cluster master and multiple worker machines, called nodes.
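
To make one of those orchestration tasks concrete, here is a minimal sketch of scaling a set of containers up or down, written with the official Kubernetes Python client. It is an illustration under stated assumptions, not a definitive recipe: the cluster reachable through a local kubeconfig, the “default” namespace, and a Deployment named “web” are all hypothetical.

```python
# Minimal sketch: scaling a set of containers up or down via the
# orchestrator. Assumes the official Kubernetes Python client, a cluster
# reachable through a local kubeconfig, and an existing (hypothetical)
# Deployment named "web" in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Declare the desired replica count; the orchestrator adds or removes
# containers across worker nodes until the cluster converges on it.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```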

Where things get especially complex from an IT management perspective is when new functionality gets added to a microservice. Developers no longer patch applications in place to add new functions. Instead, they replace containers within a microservice with new containers that include the new functionality, which may be anything from a bug fix addressing a security issue to a completely new feature. Thanks to container orchestration and advances in agile development methodologies, the rate at which containers are updated or replaced is far higher than with any other approach to developing and maintaining software.
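
Here is a minimal sketch of that replace-rather-than-patch model, again assuming the official Kubernetes Python client, a local kubeconfig, and a hypothetical Deployment named “web”; the registry path and version tag are likewise illustrative.

```python
# Minimal sketch: shipping new functionality by replacing containers with
# new ones rather than patching them in place. Pointing the (hypothetical)
# "web" Deployment at a new image triggers a rolling update in which the
# orchestrator swaps out running containers for fresh ones.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        # The container is matched by name; only its image changes.
        {"name": "web", "image": "registry.example.com/web:1.4.2"},
    ]}}}},
)
```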

Orchestration Platforms Explained

Today, there are two leading platforms when it comes to container orchestration. One is Docker Swarm, which is developed and controlled primarily by Docker, Inc. The other is Kubernetes, an open source orchestrator developed by a community of vendors and end users and available as a commercially supported platform from a number of different companies.

In Kubernetes, groups of containers and volumes that are scheduled together on the same node are known as pods. Containers in the same pod share a network namespace and can communicate with each other directly. A replication controller ensures that a specified number of pod replicas is running at any given time, using the labels each pod carries for identification purposes.
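
A minimal sketch of those pod and label concepts follows, using the official Kubernetes Python client; the pod name, the “app=web” label, and the container images are illustrative assumptions.

```python
# Minimal sketch: a two-container pod carrying a label, created via the
# official Kubernetes Python client. The pod name, label, and images are
# illustrative assumptions, not a prescribed configuration.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Both containers share the pod's network namespace, so the sidecar can
# reach the app container on localhost.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-0", labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="app", image="nginx:1.25"),
        client.V1Container(name="log-sidecar", image="busybox:1.36",
                           command=["sh", "-c", "tail -f /dev/null"]),
    ]),
)
core.create_namespaced_pod(namespace="default", body=pod)

# Controllers and ad hoc queries identify groups of pods by label.
for p in core.list_namespaced_pod("default", label_selector="app=web").items:
    print(p.metadata.name, p.status.phase)
```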

While Kubernetes has emerged as the de facto standard, installing and operating it at scale is not a simple endeavor. It is also worth noting that not all Kubernetes distributions are created equal: some companies ship a forked version that is usually out of sync with the latest upstream release, while others simply provide basic support around the upstream distribution.

Kubernetes is also available as a managed cloud service. But at scale, the costs of running Kubernetes in the cloud can make that an expensive proposition. Nor is that approach an option for enterprise IT organizations that either value control over their IT environments or are required to own their IT infrastructure to meet regulatory requirements.

Vendors are now stepping up to make Kubernetes more viable for these IT organizations. One example is Mesosphere, which created DC/OS, a distributed computing platform based on open source Apache Mesos software. DC/OS provides the attributes of a cloud platform needed to make cloud-native tools such as Kubernetes easy to deploy and operate anywhere, exposing a single control plane on top of compute and storage resources that can be managed as one logical cluster. That makes it a more powerful way to manage containers and data services than simply running Kubernetes on its own, which provides a declarative model for provisioning workloads onto a cluster but not for the infrastructure beneath it.

In general, containers are being used to develop greenfield applications, both stateful and stateless, in addition to making it possible to “containerize” a legacy application so that developers can lift and shift its code base onto a public cloud. The legacy application itself is still a monolithic entity. But once a legacy application becomes containerized, many IT organizations gradually carve it up into a series of more manageable microservices.

Stateful applications, however, still don’t run all that well in container environments, which presents yet another reason to consider deploying Kubernetes on top of a management platform like DC/OS.

Best Practices

There are several significant management issues IT organizations need to keep in mind when it comes to deploying containerized applications.

The truth is, microservices based on containers require a lot more than just a container orchestration system. Everything from monitoring tools to continuous integration/continuous deployment (CI/CD) platforms is a critical element of the container ecosystem, and all of these tools need to be managed and supported over their entire lifecycle. As a result, the container orchestration platform can wind up being more difficult to maintain than the underlying IT infrastructure it abstracts, because changes to those tools and platforms tend to be frequent.

IT operations teams should look for systems that are not cobbled together (and thus require piecemeal upgrading); they need to be able to consistently manage the entire ecosystem required around container orchestration. IT teams would also be well advised to automate whenever and wherever possible: installing a tool is often relatively simple compared to manually updating it on every node where it runs, and most container tools and platforms are themselves being rapidly updated.

Container orchestration spanning multiple environments is also a major challenge. Creating a single fabric over many data centers, cloud environments, and edge computing regions is extremely difficult. Unless a platform enables orchestration across multiple environments, IT operations will only have high availability for a single zone.

IT teams should also be wary of letting costs in container environments spiral out of control. Some vendors charge steep premiums to support a container orchestration platform deployed on-premises, and every time an organization needs an additional service such as brokering or messaging, there’s an additional cost. In a distributed computing environment, all those “extras” can add up quickly. IT organizations should look for a platform where most of those capabilities are already baked in.

Organizations should also be careful not to get locked into a single vendor. Even when a tool is based on open source code, there are many instances where only one vendor sells and supports it. Extensible platforms are critical, because no one can say for sure what great new technology might be coming down the pike; the only certainty is that the next great thing is already on its way.

Finally, just because it’s possible to acquire a platform based on technologies developed by Google doesn’t mean an organization has the skills or the appropriate culture to run it. Kubernetes was designed by engineers for engineers, while most IT organizations rely on administrators. IT personnel not only need to learn new skills; they must also adjust to working more closely with application developers within the context of a well-defined set of DevOps processes.

Summary

The most important thing to remember, however, is that an architecture built around microservices running in containers is complex. Not only does it require everyone in the organization to master new skill sets, but all the interactions between microservices need to be monitored and maintained. Containers are in many ways ideal for distributed applications that need to dynamically scale resources up and down. But if an IT organization already has a monolithic application deployed that runs perfectly well, rewriting its entire code base just because microservices are in fashion isn’t worth the time and effort.

In fact, as a practical matter, most IT organizations will find themselves managing monolithic applications alongside ones built with microservices for years to come. Given that reality, most IT organizations are going to need a management and orchestration framework that extends across both monolithic and microservices applications, stateful and stateless, rather than focusing all their efforts on a platform optimized for only a subset of their application portfolio.

For more information on containerization and how containers work in distributed, scalable systems, please visit https://mesosphere.com/blog/containers-distributed-systems/
