This post was produced in partnership with Mesosphere

Maximizing the Value of Containers with Freedom of Choice


This post is the sixth in the series “Containers Power Agility and Scalability for Enterprise Apps.” To view the series from the beginning, click here.

The great portability promise of containers often turns out to be something of a mirage. Arguably, the best thing about containers is that they package application code with the libraries, configuration files, and binaries needed to enable an image to run anywhere. But once those containers arrive at their intended platform destination, it’s not long before various proprietary extensions, configurations, and APIs in that environment make it next to impossible to port a containerized application to another platform.
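To make the packaging idea concrete, here is a minimal sketch using the Docker SDK for Python. The image tag and build context path are placeholders, and it assumes the docker package is installed and a local Docker daemon is running: building an image bundles the application with its dependencies, and the resulting image starts the same way on any compatible host.

```python
# Sketch: build an image that bundles application code, libraries, and
# configuration, then run it. Assumes the Docker SDK for Python ("docker"
# package) is installed and a local Docker daemon is available; the tag
# and build context path are placeholders.
import docker

client = docker.from_env()

# Build from a directory containing a Dockerfile plus the app's code,
# dependency manifests, and configuration files.
image, build_logs = client.images.build(path=".", tag="example/myapp:1.0")

# The image carries everything needed to run, so it can be started the
# same way on any host with a compatible container runtime.
output = client.containers.run("example/myapp:1.0", remove=True)
print(output.decode())
```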

What little container portability does exist is confined to Linux platforms. A containerized application built for Linux will not run on Windows – or vice versa – unless LinuxKit is installed on Windows. (LinuxKit is a set of tools created by Docker, Inc., that allows developers to create a lean “Linux subsystem” in which Linux containers can run, enabling them to be deployed on Windows.)

Similarly, a containerized application built for Linux will not run on macOS or Android. Nor can developers take a containerized application built for x86 systems and simply move it to systems based on ARM processors. They can, however, build containerized applications that will run on those ARM systems.
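For example, rather than trying to move an x86-built image, the build itself can target ARM. The sketch below uses the Docker SDK for Python and assumes a daemon capable of cross-platform builds (for instance, with emulation configured); the tag is a placeholder.

```python
# Sketch: target an ARM platform at build time instead of trying to move
# an image built for x86. Assumes the Docker SDK for Python and a daemon
# that supports cross-platform builds (e.g., with emulation configured);
# the tag is a placeholder.
import docker

client = docker.from_env()

image, _ = client.images.build(
    path=".",
    tag="example/myapp:1.0-arm64",
    platform="linux/arm64",  # build for ARM rather than the host's default
)
print(image.id)
```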

It’s worth noting that not every containerized application is backward and forward compatible across versions of Docker. Also, an application built using Docker won’t run on top of Linux Containers (LXC), the original container technology on which Docker containers are based.

Every platform is unique…maybe too unique

Further complicating portability, each platform tends to have its own unique storage and networking services that conspire to lock any application into that platform. Persistent storage configurations for containerized applications, for example, need to be altered when moving containers from one platform to another. There are also plenty of interoperability nuances when it comes to container registries, which are only now being addressed by an effort led by the Open Container Initiative to standardize the protocols that registries implement.
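As an illustration of the storage point, the sketch below (Docker SDK for Python; the volume name, image, mount path, and credential are placeholders) declares a persistent volume mapping that is meaningful only to this particular Docker host, so the equivalent mapping typically has to be re-declared in a different form on another platform or orchestrator.

```python
# Sketch: a persistent storage mapping is tied to a specific Docker host.
# The volume name, image, mount path, and password are placeholders; other
# platforms and orchestrators express the same intent in their own way.
import docker

client = docker.from_env()

client.containers.run(
    "postgres",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder credential
    volumes={
        # Named volume managed by this particular Docker host.
        "appdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}
    },
)
```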

The sad truth of the matter is that virtual machines based on hypervisors are a lot more portable than containers, which is one reason so many containers are still deployed on virtual machines rather than bare-metal servers. Of course, applications still can’t easily be ported between different hypervisors. An application running on VMware, for example, still needs to be refactored when deployed on virtual machines from Amazon Web Services (AWS), which are now based on the open source Kernel-based Virtual Machine (KVM); previously, AWS relied on the open source Xen hypervisor.

Put it all together and it becomes easy to see how many ways there are to get locked into one platform or another. What’s required is another level of abstraction that keeps containerized applications from being tied to any single platform. A case in point is the Pods capability that Mesosphere made available starting with version 1.9 of the DC/OS platform.

The DC/OS platform is based on open source Mesos software, which abstracts CPU, memory, storage, and other compute resources away from both virtual and physical machines in a way that allows distributed applications to be deployed anywhere. Pods take that concept a step further by making it possible for Docker or Mesos containers to share the same storage and networking namespace. The containers in each Pod instance are guaranteed to be deployed, scaled, and terminated together on the same host. If a container inside a Pod instance fails or does not respond to a health check, the entire Pod instance gets restarted or relocated to another node. Mesos even provides the functionality to run non-Dockerized applications.
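As a rough, hypothetical sketch of what a pod looks like in practice, the snippet below posts a pod definition to Marathon, the scheduler DC/OS uses. The field names follow the general shape of the Marathon pods API, but the endpoint URL, images, and resource sizes are placeholders and should be checked against the DC/OS documentation for the version in use.

```python
# Rough, hypothetical sketch: submit a pod definition to Marathon, the
# scheduler DC/OS uses. Field names follow the general shape of the
# Marathon pods API; the URL, images, and resource sizes are placeholders
# to check against the DC/OS docs for the version in use.
import requests

pod = {
    "id": "/example-pod",
    "containers": [
        {
            # Containers in the pod are scheduled together on the same
            # host and share the pod's networking namespace.
            "name": "web",
            "resources": {"cpus": 0.5, "mem": 128},
            "image": {"kind": "DOCKER", "id": "nginx"},
        },
        {
            "name": "sidecar",
            "resources": {"cpus": 0.1, "mem": 64},
            "exec": {"command": {"shell": "while true; do sleep 60; done"}},
        },
    ],
    "networks": [{"mode": "host"}],
    "scaling": {"kind": "fixed", "instances": 1},
}

# Placeholder endpoint; on DC/OS, Marathon is typically reached through
# the cluster's admin router.
response = requests.post("http://marathon.example.com:8080/v2/pods", json=pod)
response.raise_for_status()
print(response.status_code)
```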

Without that capability, application developers lose flexibility. Organizations can’t easily leverage on-premises and cloud resources on their own terms, and IT organizations have no leverage when it comes to future costs. No matter how much less expensive a rival platform may become, the underlying application remains locked into its current platform unless there’s a layer of abstraction that ensures its portability. That’s especially problematic when organizations decide they want to move workloads from public clouds back to private clouds running on-premises.

Developers and IT leaders need to be very clear about the benefits and limitations of containers. There’s no doubt that containers enable resilient applications based on microservices to be constructed faster. But when it comes to portability, there is no such thing as magic. Flexibility and control over multiple cloud computing platforms are never simply given. Rather, they are something IT organizations need to seize for themselves by ensuring that every layer of the stack supporting any given application is not just interoperable, but truly open in every possible way.
