Deep Dive: Deployment Automation for Applications on Kubernetes (Part 1)

This post is the first part of a series. Read the second part here.

Kubectl, if only by being the official CLI, is certainly the most popular way of interacting with Kubernetes clusters. And running kubectl apply is a convenient way to work on the desired state in local files and sync it with a cluster.

Similarly, a quick kubectl apply is also easy enough to run as part of a CI/CD pipeline. This works reasonably well for a lot of common cases and is probably what quite a few teams out there are doing just now.
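To make that concrete, such a pipeline step often boils down to little more than the following sketch; the manifest path is a placeholder and the kubeconfig is assumed to be provided by the CI environment:

```bash
# Hypothetical CI step: sync the local manifests with the cluster.
# KUBECONFIG is assumed to be set up by the CI environment.
kubectl apply -f ./manifests/
```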

But although it mostly works, there are a few edge cases to be aware of, and they are the topic of this deep dive. For teams using continuous deployment, this means every team member needs to understand these edge cases to know when manual action is required before or after the automation runs.

Kubectl Apply Edge Cases

Kubectl apply works behind the scenes by, and I’m simplifying, first checking if a resource exists. If it does not exist yet, it will create the resource. But if it already exists, it sends a patch to update the desired state of the resource.

It will do so on a per-resource basis. The order of the resources is based on the order of files in the filesystem, the order of the resources in a `kind: List`, or the order of the documents in a multi-document YAML. And this is the first thing everyone on your team needs to know.

If configuring any resource fails, kubectl errors out. All changes to resources before the one that failed have been applied, while all changes after it have not, leaving the desired state in an undesired state (pun intended).

Resource dependencies

Resources like a namespace or a Custom Resource Definition (CRD) have to be created before other resources can be placed into that namespace or a custom object of the CRD’s kind can be created. Namespace creation is generally quick and rarely causes problems, but you have to make sure to create the namespace first by ordering the resources accordingly.
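A minimal sketch of this ordering, as a single multi-document YAML applied via stdin, with the namespace first so it exists before the deployment that goes into it (all names are made up):

```bash
kubectl apply -f - <<'EOF'
# The namespace comes first, so it exists before the deployment below is created.
apiVersion: v1
kind: Namespace
metadata:
  name: example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: nginx:1.19
EOF
```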

CRDs, on the other hand, even if ordered correctly, tend to cause this edge case more frequently, because the API server takes time to set up the CRD’s REST endpoints. So if the CRD and one of its custom objects are sent too quickly after each other, kubectl frequently errors out.
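One common workaround is to apply the CRD on its own and wait for the API server to report it as established before applying any custom objects. A sketch, with hypothetical file and CRD names:

```bash
# Apply the CRD first (file name is a placeholder).
kubectl apply -f crd.yaml

# Block until the API server has set up the REST endpoints for the new kind.
kubectl wait --for condition=established --timeout=60s crd/widgets.example.com

# Only then apply the custom objects of that kind.
kubectl apply -f widgets/
```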

Incomplete validation

Anything from simple validation errors to changes to immutable fields, like changing a labelSelector, can cause applying the configuration to fail and leave you with the partially applied state described above.
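As an example of such an immutable change, the selector of an existing Deployment cannot be modified in place. A sketch with a hypothetical deployment that was originally created with the selector app=example-app:

```bash
# Re-applying the deployment with a different selector fails, because
# spec.selector on a Deployment is immutable; kubectl errors out at this resource.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: renamed-app   # changed from app: example-app, rejected as immutable
  template:
    metadata:
      labels:
        app: renamed-app
    spec:
      containers:
      - name: app
        image: nginx:1.19
EOF
```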

To validate the configuration before sending it, kubectl has a --dry-run flag. This dry run, however, is entirely client side. So while it can validate the configuration against the schema, at least for built-in resource kinds, it won’t be able to prevent errors from changes to immutable fields.

From Kubernetes 1.13, the server-side dry run beta, available using the --server-dry-run kubectl flag, goes through all the steps server side except actually changing the desired state. This means it also includes schema validation for CRDs and webhooks for admission controllers, and it can detect errors from changes to immutable fields beforehand.
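In practice, that means running something like the following before the real apply (flag names as of the Kubernetes version discussed here; more recent kubectl releases expose these as --dry-run=client and --dry-run=server):

```bash
# Client-side dry run: schema validation only, nothing is sent to change cluster state.
kubectl apply --dry-run -f ./manifests/

# Server-side dry run (beta in Kubernetes 1.13): runs validation, defaulting and
# admission webhooks on the API server, but stops short of persisting any changes.
kubectl apply --server-dry-run -f ./manifests/
```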

Still, determining if the new configuration includes an immutable change is just the first step.

Changes to immutable fields

The next step is to make sure the resource is deleted and recreated. This is possible with apply’s --force flag, which falls back to deleting and recreating the resource after patching it has failed.

Deleting and recreating will often cause applications to become unavailable, so always using --force is not a viable option.
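For completeness, this is what the forced variant looks like; use it deliberately, because the delete and recreate can take the affected resources down:

```bash
# Falls back to deleting and recreating resources whose patch was rejected,
# e.g. because of an immutable field change. Expect downtime for those resources.
kubectl apply --force -f ./manifests/
```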

Teams could try building a pipeline that mirrors a Terraform plan/apply workflow using kubectl, with a server-side dry run as the plan step and a forced kubectl apply as the apply step.

Disciplined teams can add a manual review and approval step to carefully check the planned changes for anything that could induce downtime. While possible, getting this logic right in a pipeline, or potentially in bash, can become quite complex and undesirable to maintain long term.
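A very rough sketch of what such a plan/approve/apply step could look like in bash; the approval mechanism and paths are placeholders, and this glosses over all the error handling a real pipeline would need:

```bash
#!/usr/bin/env bash
set -euo pipefail

MANIFESTS=./manifests   # placeholder path

# "Plan": the server-side dry run surfaces validation, webhook and immutability errors.
echo "Planned changes (server-side dry run):"
kubectl apply --server-dry-run -f "$MANIFESTS"

# Manual gate: in a real pipeline this would be an approval step in the CI system.
read -r -p "Apply these changes (possibly deleting/recreating resources)? [y/N] " answer
if [[ "$answer" == "y" ]]; then
  # "Apply": --force falls back to delete and recreate for immutable changes.
  kubectl apply --force -f "$MANIFESTS"
else
  echo "Aborted."
  exit 1
fi
```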

Pruning of previous resources

Last but not least, let's talk about pruning resources. Assume we have a deployment and a configmap that is referenced by that deployment. If we now want to remove the configmap, we can delete it locally and remove the reference from the deployment spec. When we run kubectl apply with this new configuration, kubectl updates the deployment so it no longer references the configmap, but it does not delete the configmap from the cluster.
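To make the scenario concrete, a sketch of the sequence with hypothetical file names:

```bash
# Initial state: deployment and configmap are both applied.
kubectl apply -f deployment.yaml -f configmap.yaml

# Later: configmap.yaml is deleted locally and the reference is removed
# from deployment.yaml. Re-applying only updates the deployment ...
kubectl apply -f deployment.yaml

# ... the configmap is now orphaned but still exists in the cluster.
kubectl get configmap
```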

Apply does have a --prune flag to handle this case. But be careful, because it is explicitly marked alpha and will possibly not do what you expect it to.

In the example above, kubectl has no way of knowing that the previous configuration included a configmap and the current one does not anymore. What it can do is query resource kinds, defined in a whitelist, by label and namespace, and then purge all resources in the result of that query that are not part of the configuration being applied. You can override the whitelist with the --prune-whitelist parameter.

Getting the label and namespace parameters right can be a bit tricky, to say the least. Neither label nor namespace are prune-specific parameters; they also affect apply itself. That means resources from the configuration without the label would not be updated. Meanwhile, the namespace parameter, if not aligned with whatever is defined in the manifest, would override the value from the manifest.
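A sketch of a prune invocation, with a hypothetical label and namespace; keep in mind that, as described above, the label selector and namespace affect the apply itself, not just the pruning:

```bash
# Only resources labelled app=example-app are applied AND considered for pruning.
# Kinds not in the whitelist are never pruned; here it is limited to ConfigMaps.
kubectl apply -f ./manifests/ \
  --namespace example \
  -l app=example-app \
  --prune \
  --prune-whitelist=core/v1/ConfigMap
```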

So carefully controlling what --prune can delete is really important and in its current state I don’t think it can be done reliably in an automated way.

Conclusion

While kubectl is omnipresent and just works in many cases, there are certain edge cases to be aware of.

A continuous deployment pipeline that frequently fails midway and requires manual attention loses most of its value. For small, experienced teams it may be a viable approach to have everyone be aware of the edge cases and handle them manually when necessary. But building robust deployment automation around kubectl does require prohibitively complex wrapping logic. At this point the logical question is: isn’t there a purpose-built tool for the job?

In the second part of this series, we’ll be looking at how to build robust application deployment automation with Terraform and Kustomize, using a new Terraform provider for Kustomize. The provider was originally built for the open source GitOps framework Kubestack, but we will be using it standalone for our application deployment use case.
