Cloud Native Blog - Container Solutions

Some Admission Webhook Basics

Written by Jason Smith | Jul 10, 2018 4:31:16 PM

Admission Webhooks have been available in Kubernetes since 1.9 and allow you to intercept manifests before they are deployed.  This gives you a lot of control to do things like inject sidecars, attach volumes, or validate image repositories before the object gets deployed.  I took some time over the last two days to explore this feature and how to implement it. Let me share what I have learned…

Requirements

As of this writing you need a cluster running Kubernetes 1.11 or later.  You may say, “but it has been supported since… 1.9”. You would be correct… BUT prior to 1.11, a malformed request could potentially crash your kube-apiserver.  This is not the most ideal situation to be in, and it could really hinder your development efforts.

So, you need a running cluster. Do you intend to run minikube? As of this writing you will need to compile it from source, because the latest release is not compatible with 1.11.  To get minikube running with Kubernetes 1.11, clone the minikube repo and run make; it should produce ./out/minikube.

This entire tutorial is based on this demo repository, so you should clone it and work from inside it.

Finally, I highly recommend you download the json-patch CLI utility.  This is the same package Kubernetes uses to apply its patches, and the CLI will help you test your patches before writing your webhook.

Setup

Assuming you are using minikube you can run:

  
minikube start --kubernetes-version v1.11.0

Let’s start by just running the pause application.  From the demo repository, run:

  
kubectl apply -f test.yaml

Patches

Right now Kubernetes supports only jsonpatch for mutating objects.  We can play around with our patches before actually writing our webhook, to see whether what we want to do is going to work and that our patch is correct… so we don’t crash our kube-apiserver.

Currently, we have the pause container running in the mwc-test namespace.  In the demo repository, you will find a folder titled “jsonpatchtests”.  Inside it, we have two patches: one that adds a label to the existing labels object, and another that creates the labels object itself.

When Kubernetes returns a pod object, if the pod has no labels, the labels object is not passed back in the json of the pod definition.  With my limited understanding of jsonpatch: if I add the whole labels object and one already exists, it will overwrite the existing labels. If I add a single label under the “/metadata/labels” path and the pod has no labels, it will complain that the path “/metadata/labels” does not exist.
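For reference, a jsonpatch that adds a single label under an existing labels object looks like this (the files in jsonpatchtests/ presumably look similar; check the repo for the exact contents):

```json
[
  { "op": "add", "path": "/metadata/labels/thisisanewlabel", "value": "hello" }
]
```

while one that creates the labels object itself, overwriting any labels that already exist, looks like this:

```json
[
  { "op": "add", "path": "/metadata/labels", "value": { "thisisanewlabel": "hello" } }
]
```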

Feel free to play around, by removing the labels from test.yaml and applying the patches or make your own.  Below is an explanation of how to do this.

I am new to jsonpatch and still learning, so I welcome commenters suggesting better methods.

Test A Patch

We can test a patch straight from the command line by piping an object definition into json-patch.

I will be using jq throughout these commands because it offers pretty output.

So if we run a patch like this:

  
kubectl get pod -n mwc-test pause -o json | json-patch -p jsonpatchtests/patch3.json | jq .

We can see the patch was applied to the json:

  
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "labels": {
      "test": "label",
      "thisisanewlabel": "hello" // <-- This was added… Yay!
    }
  }
}

Now that we have a working patch, we can copy it directly into our webhook.

The Parts of a Webhook

The Webhook is a pretty simple setup:

  1. An HTTP server to actually admit or mutate the object
  2. A Deployment for the http server
  3. A Service for the http server
  4. And a MutatingWebhookConfiguration

2, 3 and 4 can be found in the demo repository in manifest.yaml.  2 and 3 are pretty self-explanatory, so we will focus on 1 and 4.

Anatomy of the Webhook Server

I based my code on the e2e tests that Kubernetes suggests as a launching point.  I found that some cruft and weird naming conventions in that Go code were causing me more confusion than help.  So I wrote a simpler example in Go with long names and better comments.

We can find this in main.go.

I will not go through line by line but I will explain the basic concepts here.

  1. Kubernetes submits an AdmissionReview to your webhook, containing
    1. an AdmissionRequest, which has
      1. a UID
      2. a Raw Extension carrying full json payload for an object, such as a Pod
      3. And other stuff that you may or may not use
  2. Based on this information you apply your logic and you return a new AdmissionReview
  3. The AdmissionReview contains an
    1. AdmissionResponse which has
      1. the original UID from the AdmissionRequest
      2. A Patch (if applicable)
      3. The Allowed field which is either true or false

That is pretty much it.  The above objects carry a lot more data, but I am focusing only on the fields I use in the main.go example.

The MutatingWebhookConfiguration

The configuration is pretty straightforward:


apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
 name: mwc-example
webhooks:
 - name: mwc-example.jasonrichardsmith.com # this needs to be unique
   clientConfig:
     service: # This targets the service we deployed
       name: mwc-example
       namespace: mwc-example
       path: "/mutating-pods"
     caBundle: "${CA_BUNDLE}" # This will be inserted from ca files that we generate
   rules:
     - operations: ["CREATE","UPDATE"] # The operations that trigger the webhook
       apiGroups: [""]
       apiVersions: ["v1"]
       resources: ["pods"] # These are the objects to trigger for
   failurePolicy: Fail # This is what happens if something goes wrong
   namespaceSelector:
     matchLabels:
       mwc-example: enabled  # The label a namespace must have to invoke the webhook

There and back again

So let’s try it out.

I am going to assume we are working on minikube.  To deploy to a real cluster, you will want to change the REPO variable in the Makefile to a repository you own, and update the image referenced in the deployment in manifest.yaml.

So to run this we will do the following.

First, tear down the pause pod you deployed.

  
kubectl delete -f test.yaml

Build the webhook image

  
make minikube

This only requires make, minikube, and Docker.  It builds the image inside minikube, so we do not have to push it to a repo.

If you are using your own repo you can run the below command, after editing the Makefile REPO variable.

  
make && make push

Secrets and Certs

This whole process requires Secrets and certs.  I stole, and slightly altered, a bash script from the Istio sidecar injector to demonstrate this.  I am not going to get into what it is doing, because that is out of scope.

First, create the namespace:

  
kubectl apply -f ns.yaml

Then generate certs and secrets

  
./gen-certs.sh

We will take the cert we created and stick it into the manifest for the webhook.

  
./ca-bundle.sh

This will produce a new manifest, manifest-ca.yaml, which we can deploy.  If everything went well, we should see this:

  
kubectl apply -f manifest-ca.yaml
(out) service/mwc-example created
(out) deployment.apps/mwc-example created
(out) mutatingwebhookconfiguration.admissionregistration.k8s.io/mwc-example created

Now we can deploy test.yaml.  If you inspect it, you will see the namespace has the label

mwc-example: enabled

which is required for our MutatingWebhookConfiguration to apply the webhook to a namespace.
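The namespace definition in test.yaml presumably looks something like this (check the repo for the exact file):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mwc-test
  labels:
    mwc-example: enabled
```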

  
kubectl apply -f test.yaml
(out) namespace/mwc-test created
(out) pod/pause created

That is the response you should get.

The Big question is … Did it Work?

  
kubectl get pods -n mwc-test -o json | jq .items[0].metadata.labels
(out) {
(out)   "test": "label",
(out)   "thisisanewlabel": "hello"
(out) }

Credit where credit is due

This blog post stole from, and was inspired by, the following content:

IBM’s article on Mutating Admission Webhooks

The Istio repo

Kubernetes e2e test

The Kubernetes Docs

Thanks for reading! 

Read more about where we think Kubernetes is in its lifecycle in our whitepaper.