According to the Cloud Native Computing Foundation (CNCF), a Cloud Native strategy is about scale and resilience: “distributed systems capable of scaling to tens of thousands of self healing multi-tenant nodes”. This is incredibly useful for folk like Uber or Netflix who want to hyperscale an existing product, reduce their operating costs, and improve their margins. So is Cloud Native about power and scale?
But what about folk who want the Cloud to deliver something quite different: speed? For them, Cloud Native is about getting new products and services to market faster. Whether they’re a startup or an enterprise trying to evolve more quickly, they want to use a Cloud Native architecture to innovate more rapidly. So is Cloud Native about speed?
Yet another group wants to create and test new business ideas without large capital expenditure. They want to start small and grow as needed, or bootstrap with minimal start-up costs. Is Cloud Native all about costs?
So what the heck is Cloud Native? A way to move faster? A powerful way to scale? A way to reduce operational costs or capital expenditure? Or something else entirely? How can these different aims be achieved with one paradigm?
In this introduction to Cloud Native we’re going to explore its multiple meanings and how we can cut through the waffle to identify the right Cloud Native strategy for our specific needs. We’ll argue that all of these goals (moving fast, being scalable, and reducing costs) are attainable, but they need careful thought. Cloud Native has huge potential, but it also has dangers.
What Is the Purpose of Cloud Native?
“Cloud Native” is the name of a particular approach to designing, building and running applications based on infrastructure-as-a-service combined with new operational tools and services like continuous integration, container engines and orchestrators. The overall objective is to improve speed, scalability and, ultimately, margin.
- Speed: companies of all sizes now see strategic advantage in being able to move quickly and get ideas to market fast. By this we mean cutting the time to get an idea into production from months to days or even hours. Part of achieving this is a cultural shift within a business, transitioning from big-bang projects to more incremental improvements. Part of it is about managing risk. At its best, a Cloud Native approach is about de-risking as well as accelerating change, allowing companies to delegate more aggressively and so become more responsive.
- Scale: as businesses grow, it becomes strategically necessary to support more users, in more locations, with a broader range of devices, while maintaining responsiveness, managing costs, and not falling over.
- Margin: in the new world of infrastructure-as-a-service, a strategic goal may be to pay for additional resources only as they’re needed, as new customers come online. Spending moves from up-front CAPEX (buying new machines in anticipation of success) to OPEX (paying for additional servers on demand). But this is not all. Just because machines can be bought just in time does not mean they’re being used efficiently. A later stage in a Cloud Native transition is usually learning to spend less on hosting by using those on-demand resources more efficiently.
At its heart, a Cloud Native strategy is about reducing technical risk. In the past, our standard approach to avoiding danger was to move slowly and carefully. The Cloud Native approach is about moving quickly but taking small, reversible and low-risk steps. This can be extremely powerful but it isn’t free and it isn’t easy. It’s a huge philosophical and cultural shift as well as a technical challenge.
How Does Cloud Native Work?
The fundamentals of Cloud Native have been described as container packaging, dynamic management and a microservices-oriented architecture, which sounds like a lot of work. What does it actually mean and is it worth the effort? We believe Cloud Native promotes five architectural principles:
- Use infrastructure-as-a-service: run on servers that can be flexibly provisioned on demand.
- Design systems using, or evolve them towards, a microservices architecture: individual components are small and decoupled (there’s a short sketch of one such component after this list).
- Automate and encode: replace manual tasks with scripts or code.
- Containerize: package processes with their dependencies, making them easy to test, move and deploy.
- Orchestrate: abstract away individual servers in production using off-the-shelf management and orchestration tools.
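To make these principles a little more concrete, below is a minimal sketch, in Go, of the kind of small, decoupled, containerizable service they favour: one narrowly scoped process that exposes a business endpoint plus a health-check endpoint an orchestrator can probe. It is purely illustrative; the port handling, the /greeting and /healthz paths and the greeting itself are assumptions for this example, not part of any particular platform.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// One narrowly scoped business endpoint: the service does a single job.
	http.HandleFunc("/greeting", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a small, decoupled service")
	})

	// A health-check endpoint an orchestrator can probe to decide whether
	// to route traffic to this instance or restart it.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Configuration comes from the environment, so the same container image
	// can move unchanged between test and production.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // illustrative default, an assumption of this sketch
	}

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Packaged with its dependencies into a container image, a service like this can be tested, moved and deployed as a unit, and an orchestration tool can restart or scale it without anyone needing to care which individual server it lands on.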
These steps have many benefits, but ultimately they are about the reduction of risk. Ten years ago, in a small enterprise, I lay awake at night wondering what was actually running on the production servers, whether we could reproduce them, and how reliant we were on individuals and their ability to cross a busy street. Then I’d worry about whether we’d bought enough hardware for the current big project. I saw these as our most unrecoverable risks. Finally, I worried about new deployments breaking the existing services, which were tied together like a tin of spaghetti. That didn’t leave much time for imaginative ideas about the future (or sleep).
In the world before IaaS, infrastructure-as-code (scripted environment creation), automated testing, containerisation and microservices (which rely on modern, fast machines and networks), we had no option but to spend lots of time on planning, testing and documentation. That was absolutely the right thing to do. But now that we have these new tools, the question is: is moving slowly our only option? In fact, is it even the safest option any more?
We’re not considering the Cloud Native approach because it’s fashionable (although it is). We have pragmatic motivations: the approach works well with continuous delivery (faster time to value), it scales well and it can be very efficient to operate. Most importantly, though, it can help reduce risk in a new way: by going fast but small. It’s that practical reasoning we’ll be talking about later in this blog series. But first we are going to ask: what is strategy, anyway?
If you'd like to learn more about Cloud Native, grab a copy of our e-book.