The Kubernetes v1.9.0 release includes webhook admission controllers.
In this post I'll give a quick overview of how to create, test, and deploy a validating webhook admission controller in Kubernetes.
This blog post is meant to complement the Mutating webhook controllers blog post.
Firstly, why would anyone bother with writing an admission controller?
Consider large multi-team, possibly multi-tenant clusters. These might be subject to various company policies.
For example:
- Don’t run services with more than N replicas (team limit)
- Don’t use the “latest” tag for deployments
- Enforce an annotation or a label on a resource to be admitted
- etc.
All of the aforementioned cases are achievable with validation admission controllers.
The possibilities are virtually unlimited, because you get the full definition of the object to be admitted and have full control over the admission decision.
With webhooks available, all you need to do is deploy a web server, and you can do that with Kubernetes itself. Easier than ever!
While the validation process itself is very simple, the configuration is the tricky part: since Kubernetes communicates with webhooks only over HTTPS, you'll have to manage CA and server certificates.
Let's start with the basics.
The validation admission controller receives the resource request after it has passed authentication and authorization, but before the object is admitted into the cluster.
You can base your decision on annotations, labels, or any other aspect of the object being admitted to the cluster.
Let's look at some Go code to see how that would work.
func (*NamespaceAdmission) HandleAdmission(review *v1beta1.AdmissionReview) error {
	review.Response = &v1beta1.AdmissionResponse{
		Allowed: true,
		Result: &v1.Status{
			Message: "Welcome aboard!",
		},
	}
	return nil
}
This is as simple as it gets.
We take an admission request and pass it through by supplying a Result message and setting the Allowed flag to true.
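For contrast, here's a minimal sketch of what a rejecting handler could look like. The "team" label requirement is a made-up policy for illustration (it isn't part of the example controller), and the core/v1 and meta/v1 imports are assumptions on top of the snippets shown so far.

import (
	"k8s.io/api/admission/v1beta1"
	corev1 "k8s.io/api/core/v1"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/json"
)

// Hypothetical variant: reject namespaces that don't carry a "team" label.
func (*NamespaceAdmission) HandleAdmission(review *v1beta1.AdmissionReview) error {
	// The raw object from the request is the namespace being created or updated.
	ns := corev1.Namespace{}
	if err := json.Unmarshal(review.Request.Object.Raw, &ns); err != nil {
		return err
	}
	review.Response = &v1beta1.AdmissionResponse{Allowed: true}
	if _, ok := ns.Labels["team"]; !ok {
		review.Response.Allowed = false
		review.Response.Result = &v1.Status{
			Message: "namespaces must carry a 'team' label",
		}
	}
	return nil
}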
The boilerplate
As it goes, the devil is in the details.
A webhook validation admission controller has to expose an endpoint. Additionally, it has to be served over HTTPS.
Let's do that.
import (
	"crypto/tls"
	"io/ioutil"
	"net/http"

	"github.com/sirupsen/logrus"

	"k8s.io/api/admission/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/apimachinery/pkg/util/json"
)

var (
	scheme = runtime.NewScheme()
	codecs = serializer.NewCodecFactory(scheme)
)
// [...] omitted for brevity
func GetAdmissionServerNoSSL(ac AdmissionController, listenOn string) *http.Server {
	server := &http.Server{
		Handler: &AdmissionControllerServer{
			AdmissionController: ac,
			Decoder:             codecs.UniversalDeserializer(),
		},
		Addr: listenOn,
	}
	return server
}
Wait a minute, you might object, “but you said there would be SSL! Why does it say GetAdmissionServerNoSSL?”
Patience, aspiring Kubernetes guru, we'll come to this.
Let's wrap it in a function that does create an SSL server.
// GetAdmissionValidationServer wraps the non-SSL version so it can serve TLS in the cluster context
func GetAdmissionValidationServer(ac AdmissionController, tlsCert, tlsKey, listenOn string) *http.Server {
	sCert, err := tls.LoadX509KeyPair(tlsCert, tlsKey)
	if err != nil {
		logrus.Error(err)
	}
	server := GetAdmissionServerNoSSL(ac, listenOn)
	server.TLSConfig = &tls.Config{
		Certificates: []tls.Certificate{sCert},
	}
	return server
}
As you can see, we need to provide it with a TLS certificate and key pair.
Now let's create a ServeHTTP method so our struct implements the http.Handler interface.
func (acs *AdmissionControllerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	var body []byte
	if data, err := ioutil.ReadAll(r.Body); err == nil {
		body = data
	}

	review := &v1beta1.AdmissionReview{}
	if _, _, err := acs.Decoder.Decode(body, nil, review); err != nil {
		logrus.Errorln("Can't decode request", err)
	}

	acs.AdmissionController.HandleAdmission(review)

	responseInBytes, err := json.Marshal(review)
	if err != nil {
		logrus.Errorln(err)
		return
	}
	if _, err := w.Write(responseInBytes); err != nil {
		logrus.Errorln(err)
	}
}
First, we read the request body and decode it into an AdmissionReview object.
The AdmissionReview holds both the AdmissionRequest and the AdmissionResponse, into which we'll write our admission status.
Coming back to encoding and decoding the request: Kubernetes supports both YAML and JSON formats. A codec is simply a pair of an Encoder and a Decoder. A scheme is a collection of methods that facilitate encoding and decoding. The cool part about schemes is that they produce backward-compatible objects, depending on the Version and Group supplied.
The Decoder transforms raw request data into a Go object of the supplied type, and the Encoder does the opposite.
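To make this concrete, here's a small sketch of how the scheme, the codec factory, and the deserializer fit together. The decodeAdmissionReview helper and the explicit AddToScheme call are illustrative additions, not part of the original server code.

import (
	"k8s.io/api/admission/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
)

var (
	scheme = runtime.NewScheme()
	codecs = serializer.NewCodecFactory(scheme)
)

func init() {
	// Teach the scheme about the admission group/version so the codecs
	// know how to recognise AdmissionReview objects.
	_ = v1beta1.AddToScheme(scheme)
}

// decodeAdmissionReview: raw JSON (or YAML) bytes in, a typed AdmissionReview out.
func decodeAdmissionReview(raw []byte) (*v1beta1.AdmissionReview, error) {
	review := &v1beta1.AdmissionReview{}
	if _, _, err := codecs.UniversalDeserializer().Decode(raw, nil, review); err != nil {
		return nil, err
	}
	return review, nil
}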
Main handler
Let's have a look at how to bootstrap the server now that we have the internals prepared.
import (
	"github.com/ContainerSolutions/validation-admission-controller-go/server"
	"github.com/kelseyhightower/envconfig"
	"github.com/sirupsen/logrus"
)

type Config struct {
	ListenOn string `default:"0.0.0.0:8080"`
	TlsCert  string `default:"/etc/webhook/certs/cert.pem"`
	TlsKey   string `default:"/etc/webhook/certs/key.pem"`
	Debug    bool   `default:"true"`
}

func main() {
	config := &Config{}
	if err := envconfig.Process("", config); err != nil {
		logrus.Fatal(err)
	}
	if config.Debug {
		logrus.SetLevel(logrus.DebugLevel)
	}

	nsac := server.NamespaceAdmission{}
	s := server.GetAdmissionValidationServer(&nsac, config.TlsCert, config.TlsKey, config.ListenOn)
	if err := s.ListenAndServeTLS("", ""); err != nil {
		logrus.Fatal(err)
	}
}
We have a listenOn address, and the tlsCert and tlsKey are supplied from the file system.
Now we need to generate those certificates and mount them as a volume.
We won't cover this part in great detail as it's outside the scope of this article.
In a nutshell, we'll create a CSR file and send it to Kubernetes. When it's accepted (either programmatically or manually), we'll retrieve the TLS certificate and key pair.
Refer to this script.
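If you'd rather stay in Go instead of using openssl, here's a rough sketch of generating the private key and CSR with the standard library. The DNS name is an assumption based on the Service defined below (namespace-admission.namespace-admission.svc), so adjust it to your own setup.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"os"
)

func main() {
	// Service DNS name the API server will use to reach the webhook.
	const svc = "namespace-admission.namespace-admission.svc"

	// Private key that will end up in the secret as key.pem.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject:  pkix.Name{CommonName: svc},
		DNSNames: []string{svc},
	}, key)
	if err != nil {
		panic(err)
	}

	// Write the PEM-encoded key and CSR; the CSR is what gets wrapped in a
	// CertificateSigningRequest object and approved, yielding cert.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
}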
It’s up to you how to manage the deployment. For testing purposes you can build the image locally and set imagePullPolicy: Never so Kubernetes always uses the locally built image.
Then we’ll define our Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: namespace-admission
  namespace: namespace-admission
  labels:
    name: namespace-admission
spec:
  ports:
    - name: webhook
      port: 443
      targetPort: 8080
  selector:
    name: namespace-admission
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: namespace-admission
  namespace: namespace-admission
  labels:
    name: namespace-admission
spec:
  replicas: 1
  template:
    metadata:
      name: namespace-admission
      labels:
        name: namespace-admission
    spec:
      containers:
        - name: webhook
          image: namespace-admission:latest # make sure to build and tag the image first!
          imagePullPolicy: Never
          resources:
            limits:
              memory: 50Mi
              cpu: 300m
            requests:
              memory: 50Mi
              cpu: 300m
          volumeMounts:
            - name: webhook-certs
              mountPath: /etc/webhook/certs
              readOnly: true
          securityContext:
            readOnlyRootFilesystem: true
      volumes:
        - name: webhook-certs
          secret:
            secretName: namespace-admission-certs
Next, we’ll define the specification of our validation webhook. It will refer to our previously defined service.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: namespace-admission
webhooks:
  - name: namespace-admission.containersolutions.github.com
    clientConfig:
      service:
        name: namespace-admission
        namespace: namespace-admission
        path: "/"
      caBundle: ${CA_BUNDLE} ## created by ca-bundle.sh
    rules:
      - operations: ["CREATE","UPDATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["namespaces"]
    failurePolicy: Ignore
caBundle is the CA certificate bundle the Kubernetes API server uses to validate the webhook's serving certificate.
Rules define the conditions under which your admission webhook will be triggered.
In our case, we ask Kubernetes to send us Namespace admissions on CREATE and UPDATE operations.
Testing
Since we created a non-SSL version of our server, we can test our handler without fiddling too much with certificates by supplying it with an AdmissionReview object as defined by the Kubernetes API.
Here's how we could do that:
var AdmissionRequestNS = v1beta1.AdmissionReview{
	TypeMeta: v1.TypeMeta{
		Kind: "AdmissionReview",
	},
	Request: &v1beta1.AdmissionRequest{
		UID: "e911857d-c318-11e8-bbad-025000000001",
		Kind: v1.GroupVersionKind{
			Kind: "Namespace",
		},
		Operation: "CREATE",
		Object: runtime.RawExtension{
			Raw: []byte(`{"metadata": {
				"name": "test",
				"uid": "e911857d-c318-11e8-bbad-025000000001",
				"creationTimestamp": "2018-09-28T12:20:39Z"
			}}`),
		},
	},
}
Now that we have a dummy admission request, let’s define a test.
We’ll create a web server using the httptest package and send a request to it.
func TestServeReturnsCorrectJson(t *testing.T) {
	nsc := &NamespaceAdmission{}
	server := httptest.NewServer(GetAdmissionServerNoSSL(nsc, ":8080").Handler)
	requestString := string(encodeRequest(&AdmissionRequestNS))
	myr := strings.NewReader(requestString)
	r, _ := http.Post(server.URL, "application/json", myr)
	review := decodeResponse(r.Body)
	if review.Request.UID != AdmissionRequestNS.Request.UID {
		t.Error("Request and response UID don't match")
	}
}
A few helper functions to encode and decode requests so we can pass them to our handler:
func decodeResponse(body io.ReadCloser) *v1beta1.AdmissionReview {
	response, _ := ioutil.ReadAll(body)
	review := &v1beta1.AdmissionReview{}
	codecs.UniversalDeserializer().Decode(response, nil, review)
	return review
}

func encodeRequest(review *v1beta1.AdmissionReview) []byte {
	ret, err := json.Marshal(review)
	if err != nil {
		logrus.Errorln(err)
	}
	return ret
}
We create a Namespace admission request with the CREATE operation and provide our raw object.
And voilà, we have a testable admission controller.
Unfortunately, end-to-end and integration testing is no easy feat.
Note that webhooks that produce errors or output that Kubernetes doesn't understand will result in resources always being admitted, even if they violate your admission controller policies!
To change that behaviour, consider setting failurePolicy: Fail.
This setting will reject admission if Kubernetes can’t reach your controller.
To summarise, here's the anatomy of our webhook:
- Web server and a handler encapsulated in HTTPS with provided certificate and key
- Handler that takes AdmissionReview as request
- Processed AdmissionReview.Request using the Kubernetes built-in serializer
- JSON marshalled AdmissionReview written to the Response object
- Generated certificates supplied in the webhook deployment manifest
Validation admission controllers are undoubtedly a powerful tool for managing your Kubernetes clusters, and they provide even greater flexibility when extending Kubernetes functionality.
You can find all the code used in this post in this GitHub repository.