
Using Helm and Kustomize to Build More Declarative Kubernetes Workloads

With declarative infrastructure, what you define is what gets set up on your system. When software engineers use Helm, the package manager for Kubernetes, to build a Cloud Native system, they tend to believe that specifying a values.yaml file is being ‘declarative.’

Spoiler alert: It isn’t.

The main problem is that, with Helm, these values get injected into templates at runtime, meaning the system can diverge from what you expect if the templates change. Also, the templates aren’t generally kept in the same repository as the values.yaml, so when trying to figure out what is being deployed, you have to go chart hunting.
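To make that injection concrete, here is a minimal, hypothetical pairing of a values file and the template fragment it feeds; the real chart’s keys and files will differ:


# values.yaml (illustrative keys only)
controller:
  replicaCount: 2

# templates/controller-deployment.yaml (fragment, rendered at install time)
spec:
  replicas: {{ .Values.controller.replicaCount }}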

Let’s face it, Helm templates are complex; it can be very hard to figure out what is going on. ‘Look at that beautiful Helm template,’ said nobody, ever.

The Inarguable Upside to Helm

So why use Helm, then? Well, believe it or not, putting together all the resources needed for an application (deployments, services, ingresses, validation hooks, etc.) is a lot of work. My default Nginx-Ingress install has 11 resources. Remembering all that for each application is difficult. And when you start including all the configurable properties (env, args, commands, and so on), it’s almost impossible to do this by hand every time.

This is where Helm shines. It allows a chart to set ‘sensible’ defaults that you can override via the values if needed, making most applications very simple to install.
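For example, on Helm 2 you can install the chart with a single override and let the chart defaults cover everything else (the exact value key here is an assumption about the chart):


helm install stable/nginx-ingress \
  --name ingress-controller \
  --set controller.replicaCount=2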

However, this feature does come with a downside: upfront visibility and transparency are lost. You generally can’t see what a Helm chart has installed until it is up and running on your cluster, which creates serious security problems (the same kind you get when installing pip or npm packages).


Adding Kustomize

So what do we do about this? How do we keep the awesome magic of Helm while finding a more declarative way of working, so that we can use methodologies like GitOps?

This is where I suggest using both Helm and Kustomize, a Kubernetes-native configuration management tool, in conjunction with each other. Helm has a handy templating feature that lets you render all of a chart’s resources to plain YAML, which you can then list in a Kustomize base. The steps are straightforward, and this GitHub repo can be used as a reference.

Step 1: Helm Fetch

This is where we fetch the chart that holds all the templates we will be using and store it locally; the next templating command needs the chart on disk.


mkdir -p charts
helm fetch \
  --untar \
  --untardir charts \
  stable/nginx-ingress

Step 2: Helm Template

Template out the YAML into files. This is the step where you add the values to the chart and also set the namespace (more on this later). During a normal helm install this rendering is handled by the Tiller component (or, in Helm 3, by the Helm client); helm template does it locally instead.


mkdir -p base
mkdir -p base
helm template \
  --name ingress-controller \
  --output-dir base \
  --namespace ingress \
  --values values.yaml \
  charts/nginx-ingress
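
If you are on Helm 3, the flags are slightly different; the equivalent command would look something like this (check helm template --help for your version):


helm template ingress-controller charts/nginx-ingress \
  --output-dir base \
  --namespace ingress \
  --values values.yaml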

 

This should give you a folder with a whole bunch of Kubernetes resources:


tree base/nginx-ingress/templates/
base/nginx-ingress/templates/
├── clusterrole.yaml
├── clusterrolebinding.yaml
├── controller-deployment.yaml
├── controller-hpa.yaml
├── controller-role.yaml
├── controller-rolebinding.yaml
├── controller-service.yaml
├── controller-serviceaccount.yaml
├── default-backend-deployment.yaml
├── default-backend-service.yaml
└── default-backend-serviceaccount.yaml
0 directories, 11 files

 

Just to neaten things up, let’s move these up a directory and delete the template directory:


mv base/nginx-ingress/templates/* base/nginx-ingress && rm -rf base/nginx-ingress/templates

Step 3: Create the Kustomization Config

One thing that is not very well known is that Helm does not handle namespaces very well. When you pass ‘--namespace’ to ‘helm install’, Tiller does all the namespace work at runtime: it does not set the namespace on any of the resources. To be more declarative, you will need to create your namespace config manually:


cat <<EOF > base/nginx-ingress/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
EOF

 

You will also need to create a kustomization.yaml that lists all your resources. This is probably the most time-consuming part of the whole effort, as it requires you to go through each resource. But that is the point: it forces you to review exactly what is going to be added to your cluster. You can also add some common labels, secret generators, and loads more.


cat <<EOF > base/nginx-ingress/kustomization.yaml
namespace: "ingress"
commonLabels:
  roles: routing
resources:
  - namespace.yaml
  - clusterrole.yaml
  - clusterrolebinding.yaml
  - controller-deployment.yaml
  - controller-hpa.yaml
  - controller-role.yaml
  - controller-rolebinding.yaml
  - controller-service.yaml
  - controller-serviceaccount.yaml
  - default-backend-deployment.yaml
  - default-backend-service.yaml
  - default-backend-serviceaccount.yaml
EOF
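
Because this is a Kustomize base, you can later layer environment-specific overlays on top of it. As a rough sketch (the overlay path and label are purely illustrative):


mkdir -p overlays/production
cat <<EOF > overlays/production/kustomization.yaml
bases:
  - ../../base/nginx-ingress
commonLabels:
  environment: production
EOF

Once you have overlays, you point kubectl at the overlay directory rather than the base; for this post we will keep applying the base directly.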

Step 4: Apply Your New Base to a Cluster

As of kubectl 1.14, Kustomize is integrated into kubectl, so you can simply run:


kubectl apply -k base/nginx-ingress
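
If you want one last look at exactly what will be applied, kubectl can also render the kustomization without applying it:


kubectl kustomize base/nginx-ingress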

Conclusion

Yes, this is a lot more work than just running helm install; however, the transparency you gain is worth it. As in any system, you don’t want unknowns lurking in the dark.

Once you have grasped this concept, I would suggest having a look at GitOps. It will change the way you handle operations.
