
Deep Dive: Deployment Automation for Applications on Kubernetes (Part 2)

This post is the second part of a series. Read the first part here.

In the first part of this deep dive, we looked at kubectl: it is a quick and easy way to do deployments, but certain edge cases make it difficult to build robust automation on top of it. In this second part, we take a look at using Kustomize and Terraform for application deployment automation, and at why we need both.


Why Kustomize and Terraform?

Being able to have multiple environments is an almost ubiquitous requirement for applications. The exact number of environments and their names tend to differ between teams and applications. But every team usually has more than one environment. Kustomize and its inheritance model are purpose-built to support this use case.

Application configuration can be tracked in a base that all environment overlays inherit from, with each overlay overriding only its environment-specific parts. This gives us a nicely declarative and beautifully simple way to maintain our Kubernetes manifests in a repository.
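As an illustration, here is a minimal base and overlay pair (all file names and values in this sketch are made up, not taken from a specific project). The overlay inherits everything from the base and overrides only the replica count:

```yaml
# bases/app/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

---
# overlays/prod/kustomization.yaml
namespace: prod
resources:
  - ../../bases/app
patchesStrategicMerge:
  - patch_replicas.yaml

---
# overlays/prod/patch_replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
```

Running `kustomize build overlays/prod` would emit the base's manifests with the prod namespace and replica count applied.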

So that’s why we use Kustomize.

But Kustomize only customises the configuration and leaves it to kubectl to apply the configuration to a cluster. This would leave us with the same edge cases.

And that’s why we need Terraform: it is very good at solving exactly those edge cases. Terraform’s core feature is determining a plan for how to bring the resources in line with the new configuration, based on the configuration you give it, the information it tracks in its state, and the current resources in the cluster. For each resource, the plan determines whether a create, a delete, or an update is required. And in the update case, it can either update the resource in place or, if necessary, delete and recreate it.

To support different cloud providers or platforms like Kubernetes, Terraform requires a provider. The official Kubernetes provider defines each resource’s schema in Terraform. Maintaining these schemas is a lot of effort, which leaves some Kubernetes resources unsupported, and the approach does not work at all for others, such as custom resources (CRDs).

Another, arguably debatable, issue with the Terraform Kubernetes provider is that it requires users to specify their resources in Terraform’s HCL format instead of Kubernetes’ YAML.
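To illustrate the point, even a simple resource looks quite different in the Kubernetes provider's HCL schema than in its familiar YAML form. Here is a ConfigMap as a sketch (the resource name and data are made up):

```hcl
# A ConfigMap expressed in the Terraform Kubernetes provider's
# HCL schema instead of Kubernetes YAML (example values only).
resource "kubernetes_config_map" "app" {
  metadata {
    name      = "app"
    namespace = "test"
  }

  data = {
    APP_URL = "https://app.example.com"
  }
}
```

Every Kubernetes manifest in the repository would have to be translated into this format by hand.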

Without a schema for each resource, we cannot use Terraform to modify our Kubernetes manifests. But we already have Kustomize for that. All we need is a provider that allows us to replace kubectl after Kustomize has done its part.

Terraform Provider Kustomize

To get the best of both worlds, we combine Kustomize for customising our Kubernetes manifests for different environments and Terraform for a robust way to automatically apply changes to the cluster by using the Terraform provider for Kustomize.

Given a path to a Kustomize base or overlay, this provider runs ‘kustomize build’ and can, using the dynamic client-go, handle any resource kind from the Kustomize output.

Under the hood, the provider uses a server-side dry run to determine whether a resource needs to be created, deleted, updated in place, or deleted and recreated. This prevents both the incomplete-validation and the immutable-fields edge cases. Pruning of previous resources is also handled, by tracking the previously sent configuration in the Terraform state.

The resulting plan allows reviewing the changes per resource before applying them. While applying, the provider hooks into Terraform’s ability to handle eventual consistency to prevent the resource-dependency edge case, both when creating and when deleting resources.

Because Terraform was built to handle edge cases like these, it is comparatively less complex to build robust application deployment automation with it.

Putting Everything Together

Assume we have a repository with our application base and our environment overlays, similar to the one below.

├── bases
│   └── app
│       ├── configmap.yaml
│       ├── deployment.yaml
│       ├── kustomization.yaml
│       └── service.yaml
└── overlays
    ├── prod
    │   ├── ingress.yaml
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    └── test
        ├── basic-auth-secret.yaml
        ├── ingress.yaml
        ├── kustomization.yaml
        ├── namespace.yaml
        └── patch_app_url.yaml

To deploy this using Terraform we have to:

1. Put the following HCL into a file (e.g. main.tf) in the repository root:

terraform {
  # use any remote state storage you want
  backend "gcs" {
    bucket = "UNIQUE_BUCKET_NAME"
  }
}

data "kustomization" "current" {
  # using the workspace name to select the correct overlay
  path = "manifests/overlays/${terraform.workspace}"
}

resource "kustomization_resource" "current" {
  # use the new for_each to handle each resource individually
  for_each = data.kustomization.current.ids

  manifest = data.kustomization.current.manifests[each.value]
}

2. Get the provider binary and make it executable:

$ curl -LO
$ mkdir -p terraform.d/plugins/linux_amd64
$ mv terraform-provider-kustomization-v0.1.0-beta.3-linux-amd64 terraform.d/plugins/linux_amd64/terraform-provider-kustomization
$ chmod +x terraform.d/plugins/linux_amd64/terraform-provider-kustomization

3. Create two Terraform workspaces, one for the test and one for the prod environment:

$ terraform init
$ terraform workspace new test
$ terraform workspace new prod

With that in place, you can select the environment to deploy to by changing the Terraform workspace. For example, run terraform workspace select test and then deploy using terraform apply.

If you’re migrating an application that has previously been deployed using kubectl apply, you need to import the existing Kubernetes resources into Terraform’s state once, before running terraform apply.

You can do so by running terraform import for each resource, as shown below. Please note the single quotes around the resource address and the ID:

$ terraform import 'kustomization_resource.current["apps_v1_Deployment|test|app"]' 'apps_v1_Deployment|test|app'
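The ID used here appears to follow a group_version_Kind|namespace|name pattern, which you need to reproduce for each resource you import. A small shell sketch splitting the example ID into its parts:

```shell
# Split a provider resource ID of the form group_version_Kind|namespace|name
# (the ID value is taken from the import example above)
id='apps_v1_Deployment|test|app'
IFS='|' read -r gvk namespace name <<< "$id"
echo "kind:      $gvk"        # apps_v1_Deployment
echo "namespace: $namespace"  # test
echo "name:      $name"       # app
```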


Combining Kustomize and Terraform gives us a fully declarative, Kubernetes-native way to maintain our desired configuration in a repository, while being able to rely on Terraform’s robust plan/apply workflow to avoid the edge cases we identified in part one.

Of course, there are alternative tools. But if you’re already familiar with the Terraform ecosystem, not needing to master yet another tool sounds like a great idea to me. For an example of how the Kustomize provider can be used in a bigger context, take a look at the open source GitOps framework Kubestack and how it uses the provider to maintain cluster services.

