When is the WRONG time to use Kubernetes?

The main reason most enterprises want to move to the cloud can be explained in two words: product velocity.

There are myriad benefits to Cloud Native computing, from lower hosting costs to reduced infrastructure complexity and increased scalability. All of these are good things, of course -- but by far the most motivating is the ability to move from idea to actual product in front of customers in the shortest amount of time.

Coupled with the lure of product velocity is the deceptive simplicity of container architecture. In this era of Everything-as-a-Service, dividing infrastructure into related containerised microservices simply makes intuitive sense. And, indeed, it all seems easy enough: if you want to run a single container, you can go online, click, and in ten seconds have a fully provisioned container running. Many organisations do just that, experimentally: they start with one container, it goes well, and they marvel at how easy it is. Then they add another small app; it also goes well, and they get approval from management to move forward with a full-fledged cloud migration -- trusting that Kubernetes will handle everything.

But complexity grows almost exponentially as the process moves forward: monitoring, storage, how the different components behave together, defining communications, networking, security... By the time a company realises there are problems inherent in their migration -- serious ones -- it’s too late.

Often, this “too late” pileup of problems gets blamed on Kubernetes. More precisely, it is the result of implementing Kubernetes -- the right solution -- at the wrong time.

Over the past three years, Container Solutions has built experience by successfully guiding a range of enterprises into the cloud. Through experimentation and observation, we have distilled the practical process of successful cloud migration into six iterative steps. Some of these steps can take place at different points, but -- as we have observed, and as some unfortunate enterprises have learned first-hand during their migrations -- step number six must always be the last one.

What are these steps?


Step 1: Welcome to the Cloud.

The first step is self-evident: putting a few machines on AWS or another public cloud hosting platform. There are many promises about the dramatic savings to be realised when only paying for the resources you use, with no long-term commitment. Ultimately, though, many companies end up confused. They expected to save money by migrating their infrastructure to the cloud, only to find operating expenses have actually gone up. This, however, is short-term thinking.

The true savings lie in risk reduction and velocity, not cost. Operating in the cloud is much more flexible -- it de-risks a big part of what you do. When sourcing physical machines for a data center, you risk buying the wrong size, or even the wrong machine entirely. Then you are locked into that decision -- you own those machines. And that is not even the main problem! The engineering required to adjust applications to an unsuitable environment is time-consuming and expensive, even if those costs are hidden. Human time is always by far the most expensive component; unfortunately, it is not always taken into account, because the enterprise is already paying for its engineers’ time. Salaries may even come from a completely different budget.

For many companies, then, the opportunity cost is rendered totally invisible (while the engineering departments are praised highly for unneeded automation efforts).

With cloud machines, however, you buy on spec. If they’re not right, you give them right back. It is far less risky to treat provisioning as a service you can simply try. The other option is taking two years to build it all yourself. Over those two years, it may well be somewhat cheaper to build everything using in-house resources, but the financial savings are nominal. The true cost is lost product velocity.

By provisioning in the cloud, those same engineers get to spend those two years doing valuable work instead of fighting with the wrong environment or reinventing the wheel.

So the lesson of Step One is: risk reduction, not cost reduction. Again, looking only at hardware costs, it is true the bill will typically go up -- but this is short-term thinking. It is also why, when our clients ask to view the new cost structure, we include the gains from productivity increases and a shortened time to market. Most important of all, we emphasise the opportunity cost related to these elements.

It's always better to be in a value creation business rather than a cost-saving one.


Step 2: Automation.

There are a lot of good services, cloud and non-cloud, for automating the deployment process: CI/CD pipelines, plus tools such as Terraform for defining a bespoke cloud configuration as code. Automation is easier in the cloud because the VMs all expose good APIs, versus whatever must be built in your own data center.
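As an illustrative sketch only -- the provider, region, image ID, and instance size below are all placeholder values, not a recommendation -- a Terraform definition for a single cloud VM can be this short:

```hcl
# Minimal sketch: provision a VM through the cloud API instead of a ticket queue.
provider "aws" {
  region = "eu-west-1"            # placeholder region
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"  # placeholder machine image ID
  instance_type = "t3.micro"               # if it's the wrong size, change one line

  tags = {
    Name = "app-server"
  }
}
```

The point is that the whole "sourcing a machine" decision becomes a reviewable, reversible text file: `terraform apply` creates the VM, and `terraform destroy` gives it right back.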

In rare cases, we do find a very good level of automation in an enterprise's own private cloud implementation. Typically, however, the level of automation is significantly lower -- and comes with a much higher maintenance cost.


Step 3: Culture.

Another thing companies quickly come to realise when embracing Cloud Native techniques: it requires a completely different culture to take advantage of the cloud and automation. The traditional model is for organisations to be massively risk averse, to minimise uncertainty at all costs. Ultimately, wholesale risk aversion becomes embedded in the company culture, so that it becomes impossible to even recognise the value of reducing risk -- much less try experimental approaches that take advantage of it.

Experimental culture is the new low risk. It used to be that “low risk” meant “don’t experiment.” Cloud Native can’t just be bolted on top of existing company culture. An enterprise needs to shift its collective culture and mindset to match: to be ready to constantly iterate, step back, try again and try differently. New ideas, new processes, new products -- the goal is to shift from a culture of finding the “right” answer to an open approach to exploring and testing many possible answers. Because the bar has never been lower, and the risks never more mitigated, than right now.

Cloud Native architectures are, by nature, conceptually different from traditional approaches. Successful complex Cloud Native builds merge careful up-front planning with a flexible and mutable implementation. On the surface this may seem an oxymoron: purposeful design applied to changeable, perhaps even fluctuating, deployment? Isn’t that a recipe for chaos?

Actually, no. This is a recipe for carefully analysed and intentional planning, iteratively implemented, using constant assessment and testing to adjust and adapt as needed. Cloud Native combines the positive methodological elements of Waterfall (upfront design and architecture) and Agile (iterative experimentation). The resulting architecture is rapidly and beautifully adaptive -- rewarding experimentation when it works, and rolling it back with no harm done when it does not.


Step 4:  Microservices.

The heart of Cloud Native architecture is breaking the monolith down into smaller chunks, parallelising team efforts so that teams can write and deploy code independently.

A microservice is an application with a single function, such as routing network traffic, making an online payment or analysing a medical result.
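To make the idea concrete, here is a minimal sketch of such a single-function service, using only the Python standard library. The fee endpoint, the 2.9% + 30-cent rate, and all names are illustrative assumptions, not any real payment API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def transaction_fee(amount_cents: int) -> int:
    # The service's single job: an illustrative 2.9% + 30-cent card fee.
    return round(amount_cents * 0.029) + 30

class FeeHandler(BaseHTTPRequestHandler):
    # One endpoint, one responsibility: GET /fee?amount=<cents>
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        amount = int(query.get("amount", ["0"])[0])
        body = json.dumps({"fee_cents": transaction_fee(amount)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the service: HTTPServer(("", 8080), FeeHandler).serve_forever()
```

Because the service does exactly one thing, a team can rewrite, rescale, or redeploy it without touching the rest of the system -- which is the whole appeal.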

The concept of microservices is not new. Knitting microservices together into functional applications is an evolution of service-oriented architecture (SOA), which has been around for a while.

Conway’s law stipulates that software architecture comes to resemble organisational structure: essentially, a hierarchical company produces a hierarchical system... which just happens to be the most effective way to create a monolith.

In that scenario, even if a company moves to microservices, eventually they will still build a monolith anyway -- a monolith composed of microservices! -- even when that was far from the intended outcome. This is why an open and honest assessment of organisational culture (see Step 3) is an essential component of Cloud Native design. Large-scale cultural change can only come along slowly, while careful design counters any tendency to fall into a default Conway’s Law architecture.


Step 5: Containerization.

Containers are simply a means of running a process -- quite often, but not always, a microservice -- inside an isolated scope, providing some or all of the required dependencies. Docker is the most prominent example of container technology.

Microservices and containers complement one another quite nicely. You can build a container that holds all the required dependencies for a microservice, and then deploy this container anywhere you want without needing to install anything other than the Docker runtime.
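For example, a container image for a small Python microservice might be defined like this -- a sketch that assumes the service's code lives in a hypothetical `service.py`, with its dependencies listed in `requirements.txt`:

```dockerfile
# Illustrative Dockerfile: the image carries everything the service needs.
FROM python:3.12-slim

WORKDIR /app

# Install the service's dependencies into the image itself.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the service code and declare how to run it.
COPY . .
EXPOSE 8080
CMD ["python", "service.py"]
```

Once built, the resulting image runs identically on a laptop, a data-center host, or a cloud VM -- the host needs nothing installed beyond the container runtime.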

Containers are a very innovative technology, and one whose problems are not yet fully solved. Working with them requires flexibility and the ability to tolerate some ambiguity. It’s a hell of a lot of work to containerize a monolith -- for not much benefit.


Step 6: Orchestrating your automated, containerized microservices!

This is where all the previously defined pieces fit together -- where all the steps, in whatever order taken, ultimately lead. Here, and only here, is also where Kubernetes comes in.

A true enterprise-level application will span multiple containers, which must be deployed across multiple server hosts. The containers function within a comprehensive container infrastructure that includes security, networking, storage and other services.

An orchestrator -- i.e., Kubernetes -- deploys containers at scale for the required workloads: scheduling them across a cluster, scaling and maintaining them, and integrating everything with the container infrastructure.
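A minimal Kubernetes Deployment manifest illustrates what "deploying at scale" means in practice; the service name, image reference, and replica count here are hypothetical placeholders:

```yaml
# Illustrative Deployment: names, image, and replica count are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                      # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: registry.example.com/example-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f` declares a desired state; Kubernetes then schedules the three containers across the cluster's hosts and reschedules them if a host fails. That declarative power is exactly what makes it so tempting -- and so costly when the earlier steps are missing.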


Order in the Court?

Each one of these six steps is ridiculously hard, even when coming in with the right knowledge and experience. One reason it’s tricky is that the intuitive path leads to the wrong results: doing Cloud Native with the old ideology will produce significant problems -- and essentially eliminate all the benefits of the transition.

Doing all six steps is a multiyear project, and a revolutionary one for your business. You need to be sure your business will actually benefit. The point of the entire process is to be able to get ideas from someone’s head to production in 15 minutes -- fantastic for software development, but not so much for, say, a niche provider of specialised services or equipment.

Some of these steps can be done in different orders, with a few considerations to bear in mind. (Also: not every company will actually need to do every step.)

First, it’s important to automate early on -- ideally, as soon as you have established a cloud provider platform. Even monoliths can benefit from going to the cloud, taking advantage of the automation and containerization available there.

It is even possible to utilise container architecture without using microservices, or even an orchestrator. If you want to get really unorthodox, you could even skip the cloud altogether and put an orchestrator in your own data center. But if you do any of these things before automation, it’s disastrous.

Second, and even more important: it is vital to do orchestration last.

Quite often, when Container Solutions goes to work with a company whose cloud migration has gone wrong, what we find is that they have put in an orchestrator before things were otherwise in place.

If you implement an orchestrator first, you are fighting a battle on simultaneous fronts. Using an orchestrator effectively is a highly complex endeavour; getting that right often depends on the flexibility, speed, and ability to iterate you have put in place first. The foundation of Cloud Native is cloud infrastructure (Step 1), automation (Step 2) and an experimental culture (Step 3). Those should be done first. Use your platform to build your platform -- before you start worrying about orchestrating all the pieces.

If you do Kubernetes before you’re ready, it is worse than not doing it at all.

And this happens more often than you would think. You go to these conferences, and all these smart people tell you how great Kubernetes is, and how easy. As a result, you see companies that don’t need Kubernetes at all -- where it is actually a terrible solution for their specific case -- but they really, really want it anyway. And they move forward, not understanding the complexity of containers, and this great new technology fails -- badly. This is distressingly common.

Wanting to modernise your infrastructure is reasonable. But containerization is a new technology, not yet widely understood. Which is how it can so easily result in improper -- even disastrous -- implementation.

When implemented properly, microservices, containerization and orchestration can fundamentally change how development gets done.

Done wrong, it is an expensive waste of time.


