Last week, Adrian Mouat, Docker captain and author of “Using Docker”, gave a webinar on using Docker to secure your microservice containers. The webinar was a teaser for a two-day training session by Adrian and Sam Newman (author of “Building Microservices”) in Amsterdam on 31 July and in London on 30 August. I’ll include booking links below; there are still a few spaces left on both courses.
Adrian covered so much ground that we’ve had to split this write-up into two parts. In the first part we talked about the basics of a healthy Dockerfile; in this second part we’ll talk about safe deployment.
Avoiding Massive Attack
When it comes to security in production, according to Adrian we have two main things to worry about:
- Getting the image there securely.
- Operating containers securely in the production environment.
Push It Real Good
Adrian discussed two ways to push images safely from a security perspective:
- Use DCT (Docker Content Trust)
- Use Digests
Docker Content Trust is probably the easiest way to push images securely, assuming you are using the Docker Hub. With DCT, the first time you download a signed image you’re prompted to accept the certificate; after that, the image downloads without further prompts or warnings. There is one downside to DCT: it relies on the image you’re downloading being signed, and not all public images on Docker Hub are signed! That aside, if you can find good, signed images, DCT has the benefit of doing freshness checks on them. Freshness checks guarantee that you’re not being evilly served an image that was correctly signed in the past, and is therefore cryptographically valid, but was subsequently replaced by the signer because it contained a security flaw (feeding you an old, vulnerable executable is called a rollback attack). Freshness checks ensure you are getting an up-to-date and valid image, not merely a signed one.
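Switching DCT on is a one-liner: the Docker client checks an environment variable on every pull and push (the image tag below is just an illustration):

```bash
# Enable Docker Content Trust for this shell session. With it set,
# docker pull will refuse unsigned images and will verify signatures
# (and freshness) automatically.
export DOCKER_CONTENT_TRUST=1

# This pull now only succeeds if the tag has been signed by its publisher.
docker pull alpine:3.6
```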
Digests are an alternative to DCT but are more fiddly. Digests are a way to refer to images using an immutable hash (sha256:digest_value_in_hexadecimal).
You can list the digest of an image by specifying the --digests option to the docker images command, BUT THAT WON’T DO IT FOR YOU HERE! Adrian points out that this gives you a digest you can use locally, but pushing the image to a registry changes the metadata, and hence the digest changes too. D’oh! After the push, you can get the new, correct digest from docker inspect under "RepoDigests", i.e.: `docker inspect -f '{{.RepoDigests}}' image_name`
You can use a digest (sha) with the docker create, docker pull, docker rmi, and docker run commands and with the FROM instruction in a Dockerfile as we described in part 1.
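Putting that together, the flow looks something like this (`myorg/myapp` and the digest value are placeholders for your own image):

```bash
# Local digests - note these change once the image is pushed
docker images --digests

# Push, then read the registry's digest from RepoDigests
docker push myorg/myapp:1.0
docker inspect -f '{{.RepoDigests}}' myorg/myapp:1.0

# Pull and run by digest instead of by tag
docker pull myorg/myapp@sha256:<digest_value>
docker run -d myorg/myapp@sha256:<digest_value>

# Or pin a base image in a Dockerfile:
# FROM myorg/myapp@sha256:<digest_value>
```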
If you want to guarantee that exactly what you build in your CI/CD system is what gets pulled into production, you can have your build system capture the image’s digest and pass that digest to your production environment by some mechanism, then pull there using the sha (a minimal sketch follows after the list below). Specifying an image by its digest (sha) at pull time means you know exactly what you’re getting and you can be certain of two things:
1) no-one has tampered with the image either during transfer or at rest in the registry
2) staging or prod is definitely running the image built in this invocation of the CI/CD system and not another, potentially concurrent one (proper labelling also solves this)
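Here’s a minimal sketch of that build-side step, assuming a Jenkins-style $BUILD_NUMBER variable; the image name, registry and hand-off mechanism are all placeholders for your own setup:

```bash
#!/bin/bash
set -euo pipefail

IMAGE="registry.example.com/myorg/myapp"

docker build -t "$IMAGE:$BUILD_NUMBER" .
docker push "$IMAGE:$BUILD_NUMBER"

# RepoDigests is only populated after the push; take the first entry.
DIGEST=$(docker inspect -f '{{index .RepoDigests 0}}' "$IMAGE:$BUILD_NUMBER")

# Hand the digest to the deploy stage (build artifact, job parameter, etc.).
# The deploy side then runs: docker pull "$DIGEST" && docker run -d "$DIGEST"
echo "$DIGEST" > image-digest.txt
```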
Aside - Let’s face it, this is a bit homemade (you write a script to create a Digest at build time and then you get the Digest safely from your CI/CD server to production to execute at deploy time). Generally anything homemade for security will probably contain a flaw, so that’s not ideal. However, at the moment most people just do a `docker push` and `pull` without considering these issues at all, which is a worse situation security-wise.
Run DMZ
When it comes to running securely in production, there are several things you need to consider:
- Minimizing vulnerabilities in production images.
- Defense-in-depth at runtime.
- Managing your secrets effectively.
That’s Just The Way It Is
The most important part of keeping production secure is identifying vulnerabilities in images and updating software versions to remove those vulnerabilities (note that in the container world we’re no longer literally applying patches; we’re completely replacing images, but I’m going to call it patching because we know what we mean). It’s dull, we all know it, but there’s no getting round it.
We need to automate patching or it just won’t happen. Fortunately, there are tools to help (Watchtower, for example).
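As a sketch, Watchtower runs as a container itself and restarts your other containers when newer versions of their images appear in the registry; it needs access to the Docker socket to do that (the image path below may vary by version, so check the Watchtower docs for the current one):

```bash
# Watchtower watches your running containers and restarts them when a
# newer version of their image is pushed. It needs the Docker socket
# mounted in order to manage other containers.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower
```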
In addition, we need to automate vulnerability scanning. The vast majority of images on Docker Hub have vulnerabilities - even popular base images! That just shows we need to be careful. Tools that can help with scanning include Clair from CoreOS, Docker Security Scanning, NeuVector, Twistlock and Aqua. Most of these will integrate with your workflow.
A good way to reduce the patching you need to do is to use minimal distros for your base images. For example, Alpine may be a better base than Debian because it’s smaller, which means a smaller attack surface.
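For instance, a minimal sketch of an Alpine-based image (the package is illustrative):

```dockerfile
# Alpine is only a few MB, versus over a hundred for a full Debian
# image. apk is Alpine's package manager; --no-cache avoids baking
# the package index into the image.
FROM alpine:3.6
RUN apk add --no-cache curl
```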
As an aside, binary-only containers with no OS requirements (like compiled Go applications) are particularly minimal with a very small attack surface.
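A minimal sketch of such an image, assuming a statically compiled Go binary called myapp (built with something like `CGO_ENABLED=0 GOOS=linux go build -o myapp .`):

```dockerfile
# "scratch" is the empty base image: no shell, no package manager,
# no OS userland - nothing to attack except your own binary.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
```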
Aside II - your container is not the only thing in your environment that can be hacked. Host OS security is also really important and the same advice applies. Keep it patched!
Talk This Way
If you want to be secure in production, another good idea is to be prescriptive about how services communicate with one another.
This is part of defense-in-depth, i.e. stopping an infection of one service from spreading to others. Basically, if two services don’t need to communicate, then don’t let them. You can do this in a simple way with Docker Compose by setting up multiple networks and using them to segregate services, or you could do something more sophisticated with a product like Calico from Tigera, Weave Net from Weaveworks or Aqua from AquaSec.
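The same segregation can be sketched with plain Docker commands; the names and images here are purely illustrative:

```bash
# Containers on different user-defined networks cannot talk to each
# other, so put services that never need to communicate on separate
# networks.
docker network create frontend
docker network create backend

docker run -d --name proxy --network frontend nginx
docker run -d --name db    --network backend postgres

# The app legitimately needs both, so it joins both networks.
docker run -d --name app --network backend myorg/myapp
docker network connect frontend app
```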
Defense-in-depth isn’t just about network policy, it’s also about limiting the capabilities of your containers to only the functionality they need. The most important limitation is to use a read-only filesystem for your container as far as possible (you may need a couple of r/w files, but you can handle these by “poking holes” through your read-only FS, as Adrian will explain further in his workshops). Even better, for transient data, consider a tmpfs mount: an in-memory virtual filesystem that disappears automatically when your container stops.
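As a sketch (the image and paths are illustrative):

```bash
# Run with a read-only root filesystem, "poking a hole" for the one
# writable path the app genuinely needs. --tmpfs mounts an in-memory
# filesystem that vanishes when the container stops.
docker run -d --read-only \
  -v /data/app-scratch:/var/lib/app \
  --tmpfs /tmp \
  myorg/myapp
```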
Another thing to consider is limiting the resources available to your container. By default, a container can use all the resources (e.g. memory) on the host machine, but you have the ability to limit what it can use at runtime via `docker run`. If you want to get super-fancy you can even limit what syscalls a container is allowed to make via Linux capabilities and seccomp, which can again be configured with `docker run`. For most of us, however, configuring seccomp etc. will be overkill. Address the lower-hanging fruit first.
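A sketch of both kinds of limit in one command (the memory/CPU numbers, capability choice and profile path are illustrative):

```bash
# Cap memory and CPU, drop every Linux capability except the one the
# app needs, and apply a custom seccomp profile to restrict syscalls.
docker run -d \
  --memory 512m \
  --cpus 1.0 \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt seccomp=./seccomp-profile.json \
  myorg/myapp
```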
For securing your running systems, monitoring is vital. There are dozens of solutions available for this, but we think Prometheus is particularly worth a look.
It’s Tricky
One of the most notoriously fiddly bits of security is maintaining and sharing secrets across your distributed system. Secrets like those API tokens your implausibly attractive other half is always asking about.
You could stick all your secrets in environment variables but that isn’t very secure. Environment variables are too accessible and anyway it’s a bit homemade, which as we’ve discussed is not a positive attribute in a security system.
There are several tools that can help with secrets management. Swarm from Docker and Kubernetes both have decent built-in secret handling now, but Vault from HashiCorp is excellent and well worth looking at (it will require extra work to integrate into your workflow).
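As an example of the built-in route, here’s a minimal sketch of Docker’s Swarm-mode secrets (the names and secret value are illustrative):

```bash
# The secret is stored encrypted by Swarm and surfaces inside the
# container as a file under /run/secrets/, rather than as an
# environment variable.
echo "s3cr3t" | docker secret create db_password -
docker service create --name app --secret db_password myorg/myapp

# Inside the container, the app reads /run/secrets/db_password
```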
Get Ready for This - It’s a Checklist
There’s a lot to do above to secure your running systems, but some measures are easier and more effective than others, so focus on those first. In order of effectiveness, therefore, do the following:
- Run a read-only filesystem
- Scan for vulnerabilities and update the images
- Do network segregation
- Run minimal container distros
And at lower priority:
- Use Vault for secrets
- Limit access to resources from your containers
- Limit the capabilities of your containers (seccomp)
- Run a minimal host
- Use an enhanced security distro
Can Hackers Touch Your Production Systems?
Remember, nothing is 100% secure and security can be somewhat intimidating. Basically don’t panic but don’t do nothing. Good luck!
What next?
Read more about our work in The Cloud Native Attitude.