Two weeks ago, at our Docker Randstad meetup dedicated to Docker security, I realised that we need to stop building generic applications that assume a fully functional OS environment and start building applications adapted to the restricted environment available inside a container.
Containers as fast VMs
At the meetup, Container Solutions’ Adrian Mouat and Docker’s Diogo Mónica both explained ways to improve the security of applications running inside Docker containers.
One suggestion in both talks got me thinking that we are not doing it right yet.
The suggestion was to drop kernel capabilities when starting containers. This is a very smart approach, but impractical at this point, as no one really knows which capabilities are used by applications such as Nginx or WordPress, or even by the ones we have built ourselves.
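Docker already exposes the mechanism itself. As a rough sketch of what dropping capabilities looks like in practice — the capability list below is one commonly cited for Nginx, not a vetted whitelist, and the exact set varies with version and configuration (which is precisely the problem):

```
# Drop every capability, then add back only what the application is
# believed to need. The list here is illustrative, not authoritative.
docker run -d --cap-drop=ALL \
    --cap-add=NET_BIND_SERVICE \
    --cap-add=CHOWN --cap-add=SETUID --cap-add=SETGID \
    nginx
# NET_BIND_SERVICE: bind to ports 80/443 without full root
# CHOWN/SETUID/SETGID: let the master process drop privileges to the worker user
```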
To solve this problem, both Adrian and Diogo contemplated running the application’s full test suite in a debug environment, catching all the kernel calls with some kind of proxy sitting between the kernel and the running containers.
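You can approximate such an evaluation today with ordinary syscall tracing rather than a dedicated proxy. A minimal sketch, assuming the application can be exercised from a single entry point (./run-tests.sh is a placeholder for your own test runner):

```
# Trace the whole process tree while the test suite runs and write a
# per-syscall summary for later mapping to kernel capabilities.
# -f follows child processes, -c produces the summary table,
# -o sends the report to a file.
strace -f -c -o syscalls.txt ./run-tests.sh
```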
This approach may help you map most of the kernel calls, but you can never ensure full coverage: once in a while your application may do something unusual that slips through the evaluation and brings down your system in production. And probably at the worst possible time for your business.
Just to clarify, I think the way Docker was introduced to the masses was absolutely correct. We just had to take our existing applications and put them inside containers. Anything else would have created tremendous difficulties for the mass adoption we are seeing right now.
In the world of VMs, access to the kernel isn’t a significant security risk: the kernel isn’t shared with other applications, and the hypervisor can prevent malicious applications from escaping the VM and damaging other VMs running on the same host.
In the container world, cgroups and namespaces are the last line of defence, so to reduce the risk of one container attacking another, we need to take as much ammunition as possible away from a potential attacker.
So, dropping capabilities is a great idea: many kernel calls available to every Docker container by default are not really needed by most applications and in fact create unnecessary security risks.
Docker and the community are clearly thinking in the same direction and have come up with the idea of security profiles, to be introduced in a future version of Docker. Such profiles will be attached to each image and will limit the functionality available to the running containers.
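To make this concrete, here is a hedged sketch of what such a profile could look like, modelled on the seccomp JSON format the Docker project has been discussing. The whitelist below is deliberately tiny — far too small for any real application — and only shows the shape of the idea:

```
# Write an illustrative seccomp profile: deny everything by default,
# allow only an explicit whitelist of syscalls.
cat > profile.json <<'EOF'
{
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        { "names": ["read", "write", "close", "exit_group"],
          "action": "SCMP_ACT_ALLOW" }
    ]
}
EOF
# Anything outside the whitelist fails with an error instead of
# reaching the kernel.
docker run --security-opt seccomp=profile.json nginx
```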
The challenge would be to adapt the applications to these security profiles and ensure that they will never try to do anything not covered by the profile.
Building applications for containers
To use security profiles effectively, we need to start building applications inside containers with security profiles applied from day one: start from the smallest possible set of functionality and explicitly expand the profile whenever new functionality is needed. This is not really feasible today, as kernel capabilities are not easy to identify and are sometimes not fine-grained enough.
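A hypothetical day-one loop might look like this (myapp is a placeholder image, and the chown() failure is an invented example): start with everything dropped, let the failure tell you what is missing, and expand the profile one explicit step at a time.

```
# Run with every capability dropped and see what breaks.
docker run --cap-drop=ALL myapp
# ... fails with "Operation not permitted" on, say, chown() ...
# Grant only the capability the failure points at, then re-test.
docker run --cap-drop=ALL --cap-add=CHOWN myapp
```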
I believe a new set of tooling will be required to work with these capabilities and allow developers to choose them effectively. Such tools could be integrated into IDEs and become part of the build process.
The result will be a new generation of very slim applications that are aware of their environment and capable of keeping themselves within the restrictions it enforces. If the environment applies different levels of security, they will also be able to drop some functionality to remain within the required boundaries.
As an important side effect, images for these applications will carry only the binaries needed for their proper functioning and stop relying on full-blown generic operating systems. This removes even more potential security risks whilst improving the efficiency of distribution and deployment.
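A minimal sketch of that idea, assuming a statically linked binary (myservice here is a placeholder): start from an empty image and copy in only the files the application actually needs.

```
# Dockerfile sketch: no OS, no shell, no package manager -- just the binary.
FROM scratch
COPY myservice /myservice
ENTRYPOINT ["/myservice"]
```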
Summary
We all know that containers are not perfect yet, and we also understand that we are only seeing the tip of the iceberg of the changes they will bring.
While we are still learning this technology by applying it to our current applications without significant changes, we can already see the coming possibilities. For example, microservices would not be possible without effective deployments and agile systems management at large scale, but microservices based on existing general-purpose applications are only an intermediate solution. The next step is container-specific applications suited to microservices: applications that carry only a very limited set of files and require a small subset of kernel functionality.