
Are Containers the Best Cloud Native Tool?

Written by Anne Currie | May 22, 2017 12:42:14 PM

Earlier in this blog series we described how every strategy comprises a goal and the actions we take or tools we use to accomplish it. We’re now going to consider some of the tools that Cloud Native uses, including container packaging, dynamic management, and a microservices-oriented architecture.

In this post we’ll consider container packaging - what it is and the effect it has. But first, let’s take a big step back. What the heck are we running on?

IaaS, PaaS or Own Data Centre?

Before we start talking about software and tools, a good question is: where is all this stuff running? Does Cloud Native have to be in the cloud? That is, does a Cloud Native strategy have to use infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS), with the physical machines owned and managed by a supplier like Microsoft, Google or AWS? Or could we build our own servers and infrastructure?

We’d argue that Cloud Native strategies generally exploit the risk-reduction advantages of IaaS or PaaS:

  • Very fast access to flexible, virtual resources (expand or contract your estate at will). This changes infrastructure planning from high to low risk.
  • Lower cost of entry and exit for projects. The transition from CAPEX (buying a lot of machines up front) to OPEX (hiring them short term as needed) de-risks project strategy by minimizing sunk costs and making course corrections or full strategy shifts easier.
  • Access to commoditized, cloud-hosted, managed services like database-as-a-service, load balancers and firewalls, as well as specialist services like data analytics or machine learning, makes it faster and easier to develop more sophisticated new products. This helps identify opportunities more quickly and reduces the risk of experimentation.

These advantages can potentially be duplicated by a large organisation in its own data centres - Google, Facebook and others have done so. However, doing it yourself is difficult, distracting, time-consuming and costly, and therefore risky. For many enterprises it is more efficient to buy these IaaS/PaaS advantages off the shelf from a cloud provider. Fundamentally, if you have a tech team smart enough to build a private cloud as well as Google or AWS can, is that really the best way for your business to use them?

Cloud Native systems don’t have to run in the cloud, but Cloud Native does have tough prerequisites - prerequisites that are already met by many cloud providers, increasingly commoditized, and difficult to build internally. To be honest, I’d probably use the cloud unless I was Facebook.

Containers! They’re so Hot!

In the current Cloud Native vision, applications are supplied, deployed and run in something called a “container”. “Container” is just the word we use for wrapping up all the processes and libraries we need to run a particular application into a single package, with an interface on it to help move it around. The original and still most popular tool for creating these containerised applications is Docker.

Containers are so hot because containerisation accomplished three incredibly sensible things:

  • A Standard Packaging Format - Docker invented a simple and popular packaging format that wraps an application and all its dependencies into a single blob and is consistent across all operating systems. This common format encouraged other companies and tonnes of startups to develop new tools for creating, scanning and manipulating containerised applications. Docker’s format is now the de facto standard for containerised application packaging. Docker packages or “images” are used on most operating systems with a wide set of build, deployment and operational tools from a variety of vendors. The image format and its implementation are both open source. These container images are “immutable” - once an image is built you don’t change or patch it, you build a new one instead. That also turns out to be very handy from a security perspective.
  • Lightweight Application Isolation Without a VM - A “container engine” like Docker’s Engine or CoreOS’s rkt is required to run a containerised application package (aka an “image”) on a machine. However, an engine does more than just unpack and execute the packaged processes. When a container engine runs an application image it limits what the app can see and do on the machine, ensuring that applications don’t interfere with one another by overwriting vital libraries or by competing for resources. A running containerised application behaves a bit like an app running in a very simple virtual machine, but it is not one - the isolation is set up by the container engine process and enforced directly by the host kernel. A container image, once running, is referred to as just a “container”, and it is transient - unlike a VM, a container only exists while it is executing. After all, it’s just a process with some limitations enforced by the kernel! Also unlike a heavyweight VM, a container can start and stop very quickly - in seconds. We call this potential for quick creation and destruction of containers “fast instantiation”, and it is fundamental to dynamic management.
  • A Standard Application Control Interface - Just as importantly, a container engine also provides a standard interface for controlling running containers. This means third-party tools can start and stop containerised applications or change the resources assigned to them. The concept of a common control interface for any application running on any operating system is utterly radical and is, again, vital to dynamic management.

Together, these three revolutionary innovations have changed our assumptions about how data centres can be operated and how rapidly new applications can be deployed. Hurray!
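To make those three properties a bit more concrete, here is a minimal sketch using the Docker SDK for Python (the docker package), one of many third-party tools built on top of the engine’s standard control interface. It assumes a local Docker Engine is running and that the public alpine image can be pulled; the image, command and names are purely illustrative.

```python
# Minimal sketch: driving a container engine through its standard control API.
# Assumes a local Docker Engine and the Docker SDK for Python ("pip install docker").
import docker

client = docker.from_env()  # connect to the local engine's API

# Run a packaged application image - the same call works for any image.
container = client.containers.run("alpine", "sleep 300", detach=True)

container.reload()                            # refresh state from the engine
print(container.short_id, container.status)   # prints the short id and current state

# The same interface lets any tool stop, remove or inspect the container.
container.stop()
container.remove()
```

The point is not the specific library: because the packaging format and control API are standard, the same few calls work for any containerised application, and that is what makes automated, dynamic management practical.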

Alternatives to Containers

Now that these concepts of standardised application packaging, isolation and control are out there, we’re already seeing alternative approaches being developed that provide some of the same functionality. For example:

  • Serverless or function-as-a-service products like AWS Lambda (cloud services that execute user-defined code snippets on request).
  • Unikernels and their ilk (potentially self-sufficient application packages that also include the minimum required operating system).
  • Applications inside new lighter-weight VMs.

In addition, container technologies other than Docker exist, and even more ways to achieve the benefits of containers will undoubtedly be developed. However, what’s important is understanding the advantages of common packaging, control interfaces and application isolation - even if in five years we end up using something other than containers to provide these features.

ASIDE - to avoid confusion: although the interface for managing Docker images is consistent across all operating systems, the contents of an image are not necessarily portable. The contents of a container image are a set of executables. A Linux image will only include executables compiled to run on Linux; a Windows image will only include exes and DLLs compiled to run on Windows. You therefore cannot run a Linux container image on Windows, or a Windows container image on Linux, any more than you can run an executable compiled for one on the other. However, once the containers are running on the right host OS, you can control them all with the same API calls. Remember - a container engine is not a runtime environment like the Java Virtual Machine. A container runs natively on the host, so its executables must be compiled for that OS.

Is a Container As Good As a VM?

Before we get too carried away, there are still ways in which a VM is better than a container. First, though, the VM’s downsides:

  • a VM is more massive than a container
  • a VM consumes more host machine resources to run than a container
  • a VM takes much longer to start and stop than a container (minutes vs seconds).

In the VM’s favour, however, it is a much more mature technology with years of tooling behind it. Also, containers isolate processes, but they don’t yet do it perfectly - especially against antagonistic applications. The VM’s heavyweight approach is currently more secure.

Why is Everyone Mad About Containers Anyway?

The reason everyone’s going crazy about containers is not just that they are a nice packaging format that plays well with automated deployments. Containers also give us lightweight application isolation and an application control API. Paired with dynamic management, these can give us automation, resilience and much better resource utilisation - which is both greener and cheaper.
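As one small illustration of what that control API plus dynamic management makes possible, here is a rough sketch of the kind of call a management tool could make to change the resources assigned to a running container, again using the Docker SDK for Python; the container name “web” is made up for the example.

```python
# Rough sketch: a management tool adjusting a running container's resources in place.
# Assumes the Docker SDK for Python and an existing container named "web" (illustrative name).
import docker

client = docker.from_env()
web = client.containers.get("web")  # look the container up through the engine's API

# Shrink its memory allowance and lower its relative CPU weight without restarting it.
web.update(mem_limit="256m", memswap_limit="256m", cpu_shares=512)
```

A scheduler or autoscaler making calls like this across a whole estate is what turns those nice properties into better resource utilisation.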

But more on that next time...

Read more about our work in The Cloud Native Attitude.

Photo: Banksy in New Orleans  https://www.flickr.com/photos/29350288@N06/2818267461/