
Bringing Instant Scale Using Containerization


This post is the third in the series “Containers Power Agility and Scalability for Enterprise Apps.” To view the series from the beginning, click here.

Written By
Michael Vizard
May 8, 2018

Developers tend to love containers because they provide a higher level of abstraction that isolates applications from the vagaries of the underlying IT infrastructure, regardless of whether they’re running on a physical or virtual machine. That capability makes it possible for developers to build applications faster that can run almost anywhere. But while containers make building applications simpler, they often make managing IT operations more complex, especially when many thousands of containers that can come and go in an instant are deployed at scale in a production environment.

IT operations teams need to solve this problem quickly, because organizations of all sizes are embracing microservices based on containers to inject more agility and elasticity into their IT environments. Not only are more applications being assembled by combining disparate microservices, but new functionality is also added to an application by swapping out one set of containers for another.

Requirements for Managing Containers

There are two fundamental requirements for deploying and managing containers at scale. The first is addressed by container orchestration software employed to provision hosts for containers, instantiate a set of containers, reschedule containers that fail to run, link containers together via application programming interfaces (APIs), scale container clusters up and down by adding or subtracting containers, and expose services to machines outside the cluster.
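At the core of that orchestration behavior is a reconciliation loop: compare the desired state of each service against what is actually running, then start or stop containers to close the gap. The sketch below is purely illustrative (the service names and counts are hypothetical), and a real orchestrator also handles host provisioning, rescheduling, and service discovery.

```python
# Minimal sketch of the reconciliation pattern at the heart of container
# orchestration: compare desired replica counts against running containers
# and decide which containers to start or stop. All names are illustrative.

def reconcile(desired: dict, running: dict) -> list:
    """Return the actions an orchestrator would take for each service."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            actions.append(f"start {want - have} x {service}")
        elif have > want:
            actions.append(f"stop {have - want} x {service}")
    return actions

# One service is under-replicated, the other is already at its target.
print(reconcile({"web": 3, "api": 2}, {"web": 1, "api": 2}))
```

Running this loop continuously is what lets clusters scale up and down and recover from failed containers without human intervention.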

The second requirement is the need for a resource manager to granularly allocate memory, compute, and storage resources to containers that dynamically appear without warning. Containers are arguably the most ephemeral atomic units of computing ever invented, which makes allocating IT infrastructure resources a complex challenge.

This second issue is critical because containers that exceed memory limits will fail, which can create a cascading series of failures, first across the microservice and then possibly across the entire application. Given all the dependencies between microservices based on containers, the overall resiliency of the environment depends heavily on the hand-in-glove relationship established between container orchestration software such as Kubernetes and a resource manager such as Apache Mesos.
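The cascade described above can be sketched in a few lines: a container whose memory usage exceeds its limit is killed, and every microservice that depends on it becomes unhealthy in turn. The service names, limits, and dependency graph below are all hypothetical.

```python
# Illustrative sketch of cascading failure: a container exceeding its
# memory limit is killed, and services that depend on it fail in turn.
# All names and numbers are hypothetical.

LIMIT_MB = {"auth": 256, "cart": 512}
USAGE_MB = {"auth": 300, "cart": 480}           # auth is over its limit
DEPENDS_ON = {"checkout": ["cart", "auth"], "cart": ["auth"]}

# Containers whose usage exceeds their limit get killed (OOM-style).
killed = {c for c, used in USAGE_MB.items() if used > LIMIT_MB[c]}

def is_healthy(service: str) -> bool:
    """A service is unhealthy if it was killed or any dependency is unhealthy."""
    if service in killed:
        return False
    return all(is_healthy(dep) for dep in DEPENDS_ON.get(service, []))

# auth was killed, so cart and checkout both become unhealthy.
print(is_healthy("checkout"))
```

Even though only one container exceeded its limit, the whole dependency chain degrades, which is why granular resource allocation matters so much at scale.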

Of course, to a limited degree, orchestration software such as Kubernetes can employ a scheduler built into the cluster to manage infrastructure resources for a pod, or group, of containers. But a more sophisticated approach to allocating IT infrastructure resources is required as more pods get deployed on the same cluster. More challenging still, IT operations teams require a consistent approach to managing both clusters and the data pipelines that are now rapidly proliferating across the enterprise. Those approaches need to span everything from consistently making the right data available for any given container or microservice to making sure there is enough fault tolerance built into the environment should certain IT infrastructure resources suddenly become unavailable.
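The scheduling problem itself is a form of bin packing: place each pod on a node with enough free CPU and memory, and flag pods that fit nowhere. The toy first-fit scheduler below is a loose sketch of that decision, not how any particular scheduler is implemented; pod and node names are invented.

```python
# A toy first-fit scheduler: place each pod on the first node with enough
# free CPU and memory. A very loose sketch of how an orchestrator's
# scheduler and a resource manager cooperate; entirely illustrative.

def schedule(pods, nodes):
    """pods: list of (name, cpu, mem); nodes: dict name -> [free_cpu, free_mem]."""
    placement = {}
    for name, cpu, mem in pods:
        for node, free in nodes.items():
            if free[0] >= cpu and free[1] >= mem:
                free[0] -= cpu          # reserve the resources on that node
                free[1] -= mem
                placement[name] = node
                break
        else:
            placement[name] = None      # unschedulable: no node has capacity
    return placement

pods = [("web-1", 2, 4), ("web-2", 2, 4), ("db-1", 4, 8)]
nodes = {"node-a": [4, 8], "node-b": [4, 8]}
print(schedule(pods, nodes))
```

As more pods land on the same cluster, naive first-fit placement leaves fragmentation and hot spots, which is exactly why the article argues for a more sophisticated resource manager underneath the orchestrator.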

Operational Challenges

One of the biggest operational challenges IT operations teams will face is managing container clusters wherever they appear. Containers are designed to be deployed on virtual and physical machines that can be deployed either on premises or in a public cloud. Wherever they are deployed, IT organizations will require a single pane of glass through which the overall environment can be managed.

Managing IT at scale is not a new problem, but containers add to an already complex situation. Creating a single fabric over many data centers, cloud environments, and edge computing regions is extremely difficult. IT teams also need to be wary of letting costs in container environments spiral out of control. Some vendors charge steep fees to support a container orchestration platform deployed on-premises, and every time an organization needs a broker or messaging service there’s an additional cost.

The container orchestration platform can also become more difficult to maintain than the underlying IT infrastructure it was meant to abstract. IT operations teams need systems that are not cobbled together from parts that each require piecemeal upgrading. In fact, given the dynamic nature of container environments, IT teams need to be able to automate whenever and wherever possible using platforms such as Mesosphere DC/OS, based on the previously mentioned Apache Mesos, that are designed from the ground up to address these very issues.

Container Management Best Practices

Best practices that can be readily implemented via DC/OS include everything from managing the lifecycle of containerized applications at scale; to embedding health-check monitoring capabilities into the microservice; to continuously gathering telemetry data to determine whether applications are working as intended. Armed with this data, IT operations teams can create dashboards to track service-level indicators capable of highlighting scalability issues long before they become an actual operational problem.
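One concrete way telemetry feeds those dashboards is by rolling request outcomes up into a service-level indicator (SLI) and flagging services that drop below a target. The sketch below uses invented services, sample data, and a hypothetical 99% availability target purely to illustrate the calculation.

```python
# Sketch of turning telemetry into a service-level indicator (SLI):
# compute the fraction of successful requests and flag services below
# a target, so dashboards surface trouble early. Data is hypothetical.

SLO_TARGET = 0.99  # assume 99% of requests should succeed

def availability_sli(samples: list) -> float:
    """samples: True for a successful request, False for a failure."""
    return sum(samples) / len(samples) if samples else 1.0

telemetry = {
    "cart":     [True] * 995 + [False] * 5,    # 99.5% success
    "checkout": [True] * 970 + [False] * 30,   # 97.0% success
}

for service, samples in telemetry.items():
    sli = availability_sli(samples)
    status = "ok" if sli >= SLO_TARGET else "BELOW TARGET"
    print(f"{service}: {sli:.3f} {status}")
```

Tracking the trend of an SLI like this over time is what lets a team spot a scalability problem while it is still a dip on a chart rather than an outage.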

In fact, microservices should be designed for failure. In the event of an IT infrastructure issue, the DC/OS platform, for example, makes it possible to degrade service gracefully using an application-aware scheduler rather than allowing the entire application to crash. The platform providing that management capability needs to be able not only to manage resources, but also to dynamically identify and track individual microservices using a declarative management framework that an IT administrator can easily master without needing programming skills.
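At the application level, "designing for failure" often comes down to wrapping calls to dependent microservices so that an outage produces a degraded response instead of a crash. The sketch below shows that fallback pattern in its simplest form; the recommendations service and its failure are, of course, hypothetical.

```python
# Sketch of graceful degradation: wrap a call to a dependent microservice
# so that, when it is unavailable, the application serves a fallback
# instead of crashing. The service and fallback are hypothetical.

def call_with_fallback(primary, fallback_value):
    """Run primary(); on failure, degrade to a fallback instead of crashing."""
    try:
        return primary()
    except Exception:
        return fallback_value

def recommendations_service():
    # Simulate a dependency outage, e.g. its containers being rescheduled.
    raise ConnectionError("recommendation containers are unavailable")

# The page still renders, just without personalized recommendations.
items = call_with_fallback(recommendations_service, fallback_value=[])
print(items)
```

Production systems typically layer timeouts and circuit breakers on top of this basic pattern, but the principle is the same: one failing microservice should cost a feature, not the application.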

As the immortal Oscar Wilde once observed, “To expect the unexpected shows a thoroughly modern intellect.” As IT organizations continue to move toward modernizing their environments using containers, that advice has never been more applicable or relevant.

For more information on containerization and how containers work in distributed, scalable systems, please visit https://mesosphere.com/blog/containers-distributed-systems/
