This post was produced in partnership with Mesosphere

Fulfilling the True Promise of Cloud Computing

This post is the fifth in the series “Containers Power Agility and Scalability for Enterprise Apps.” To view the series from the beginning, click here.

One of the great promises of cloud computing that has never been met is making it easy for organizations to move workloads between multiple clouds. Each cloud computing environment is built on top of its own implementation of a hypervisor. Unless each cloud is built using the same hypervisor, moving workloads from, for example, a local private cloud based on VMware to a public cloud running hypervisors built by Amazon Web Services (AWS) requires IT organizations to refactor their applications. The only way around that issue is to package application workloads in Docker containers, which can be deployed on top of multiple instances of different hypervisors running on multiple types of Linux operating systems.

Docker containers enable that portability by providing a lightweight mechanism to encapsulate all the libraries, configuration files, and application binaries an application image needs to run. That inherent portability means a Docker image can run anywhere: on multiple hypervisors, within a platform-as-a-service (PaaS) environment, or on bare-metal servers running container orchestration software such as Kubernetes as an alternative to traditional hypervisors.
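
To make that portability concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). The image tag myapp:1.0 and the build context are hypothetical; the snippet assumes a Dockerfile in the current directory and a running Docker daemon:

```python
# Minimal sketch: build an application image once, run it on any Docker host.
# Assumes the Docker SDK for Python (pip install docker) and a local daemon.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image that bundles the app's binaries, libraries, and
# configuration files ("myapp:1.0" is a hypothetical tag).
image, _build_logs = client.images.build(path=".", tag="myapp:1.0")

# The same image now runs unchanged on any host with a container runtime:
# a VMware VM, an AWS instance, or a bare-metal Kubernetes node.
output = client.containers.run("myapp:1.0", remove=True)
print(output.decode())
```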

See also: Mastering the art of container orchestration

But containers are only a means to a much larger cloud computing end. Docker containers essentially provide the foundation for an emerging Cloud 2.0 platform, enabling IT organizations for the first time to mix and match public and private cloud computing resources, as they see fit, within the context of a highly distributed hybrid cloud computing environment.

Defining Cloud

The real goal when it comes to cloud computing has never been simply to replace on-premises IT infrastructure with an external service provider. In fact, cloud computing as defined by the National Institute of Standards and Technology (NIST) makes it clear that cloud is a model for delivering IT resources rather than a specific platform.

Specifically, NIST says, “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources … that can be rapidly provisioned and released with minimal management effort or service provider interaction. The cloud model is composed of five essential characteristics, three service models, and four deployment models.”

The five essential characteristics of a cloud are:

  1. On-demand self-service
  2. Broad network access
  3. Resource pooling
  4. Rapid elasticity
  5. Measured service

The three service models are software, platform, and infrastructure as a service; the four deployment models are private, community, public, and hybrid.

Talking SMACK

But like all things in IT, cloud deployment models are fluid. Thanks to the rise of edge computing, we’re starting to see how a new generation of distributed applications in the Cloud 2.0 era will blur the lines between deployment models. Internet of Things (IoT) applications, for example, require data to be processed in real time on a local IoT gateway, close to where data is being generated.

In fact, a whole new stack of software will be required to support these applications. That stack of software includes:

  • Spark: A general engine for large-scale data processing, enabling analytics from SQL queries to machine learning, graph analytics, and stream processing.
  • Mesos: A distributed systems kernel that provides resource management and isolation across all the other SMACK stack components; it is the foundation on which the rest of the stack runs.
  • Akka: A toolkit and runtime for building concurrent, distributed applications that respond to messages.
  • Cassandra: A distributed database management system that can handle large amounts of data across servers with high availability.
  • Kafka: A high-throughput, low-latency platform for handling real-time data feeds without data loss.

Collectively known as the SMACK stack, these platforms are unified by Docker containers and the associated orchestration and application-aware scheduling software required to move data between the microservices that span them. As orchestration and application-aware scheduling technologies continue to mature, the boundary between private and public clouds is blurring. Most organizations will wind up building a series of distributed private clouds, augmented by any number of public cloud services running, for example, analytics applications.
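
To illustrate how these pieces fit together, here is a rough sketch of a SMACK-style pipeline in PySpark: Structured Streaming reads a real-time Kafka feed and persists it to Cassandra. The broker address, the sensor-readings topic, and the iot.readings table are all hypothetical, and the snippet assumes the spark-sql-kafka and DataStax spark-cassandra-connector packages are on the Spark classpath:

```python
# Rough sketch of a Kafka -> Spark -> Cassandra pipeline (SMACK-style).
# All names (topic, keyspace, table) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("smack-sketch").getOrCreate()

# Schema of the hypothetical sensor events arriving on the Kafka topic.
schema = (StructType()
          .add("sensor_id", StringType())
          .add("temperature", DoubleType()))

# Read the real-time feed from Kafka using Spark's built-in Kafka source.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "sensor-readings")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("r"))
          .select("r.*"))

# Persist each micro-batch to Cassandra via the spark-cassandra-connector.
def write_batch(batch_df, epoch_id):
    (batch_df.write
     .format("org.apache.spark.sql.cassandra")
     .options(keyspace="iot", table="readings")
     .mode("append")
     .save())

query = events.writeStream.foreachBatch(write_batch).start()
query.awaitTermination()
```

In a Mesos-managed environment, Spark, Kafka, and Cassandra would each run as services on the same cluster, with Mesos handling resource allocation beneath them.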

Achieving the Promise of Cloud

Public clouds are not the right answer to every IT question. Because of issues pertaining to performance, cost, and security, not every application workload lends itself to being deployed on a public cloud. Long-running applications, for example, tend to be more expensive over time to run on a public cloud than in a local data center.
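
As a back-of-the-envelope illustration, every figure below is hypothetical and exists only to show the break-even arithmetic for a steady, always-on workload:

```python
# Illustrative cost comparison for a long-running, always-on workload.
# All figures are hypothetical; substitute real quotes before deciding.
HOURS_PER_MONTH = 730

cloud_rate = 0.20        # $/hour for a comparable cloud instance (assumed)
onprem_capex = 6000.0    # up-front server hardware cost (assumed)
onprem_opex = 40.0       # $/month power, cooling, admin share (assumed)

cloud_monthly = cloud_rate * HOURS_PER_MONTH  # $146/month at these rates

# Months until the up-front hardware spend is recovered by rental savings.
break_even_months = onprem_capex / (cloud_monthly - onprem_opex)
print(f"Cloud: ${cloud_monthly:.0f}/month; on-prem pays off after "
      f"{break_even_months:.0f} months")
```

At these assumed rates the hardware pays for itself in under five years of continuous operation; for shorter-lived or bursty workloads, the comparison tips the other way.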

See also: A brief history of containers

In other scenarios, trying to access compute resources across a wide area network simply introduces too much latency. Finally, there will always be classes of applications that organizations are not going to be comfortable deploying on shared infrastructure they don’t directly control. The management of applications spanning public and private clouds, however, will ultimately need to be unified.

It’s worth noting that most of the original interest in cloud computing was driven by a desire to save costs by moving some classes of application workloads to the cloud. As part of that shift, many IT organizations also discovered that public clouds provided a level of flexibility that is hard to replicate in a legacy on-premises IT environment. As IT becomes more automated, it is now feasible to deploy highly distributed applications spanning multiple classes of cloud computing platforms.

The challenge, and the opportunity, going forward is not simply to replicate an internal IT environment on a public cloud. Rather, enterprise IT organizations – thanks to the rise of containers, automation, and the SMACK stack – can now access unprecedented amounts of resources that scale dynamically and are far simpler to manage. Until the twin goals of scalability without platform constraint and simplified management are achieved, the true promise of cloud computing arguably remains unfulfilled.
