Docker 1.3 was released on the 16th of October, and the headline announcement was a "tech preview" of digital signature verification. In this post, we'll take a look at what this means and why it's needed.
To date, most of the security discussion around containers has centred around how "contained" they are; can a malicious user break out of a container and gain access to the host? Whilst undoubtedly important, it has overshadowed another concern - why should I trust the code (or data) in a container at all? If it wasn't me, how do I know for sure who built it?
It's easy to see the importance of this if we think about what could happen if we run a malicious image. If we assume that Docker containment is entirely secure, it shouldn't be possible for code inside the container to access confidential files on the host or to do permanent damage to the host. We cannot, however, place any trust in the services running inside the container. In the case of a database container, we cannot trust that the data will not be leaked or tampered with in some way. In the case of a compiler container (e.g. the official Java or Go containers), a malicious container can inject whatever code it likes into your software. This is a horrendously bad situation; a rogue compiler is free to inject backdoors, turn off encryption, corrupt data etc. To make matters worse, it's almost undetectable - sure, you could analyse the assembly, but how many times have you done that?(1)
So what's the answer? Actually, there isn't a foolproof one. You can't be completely sure of your existing OS, compilers or even hardware - after all you (probably) didn't write them. But we have to get things done, so we put our trust into various organisations and companies. Microsoft, IBM, Oracle, Debian, Docker etc all stand to lose a lot if they ship compromised or malicious software. This leads us to a second, more tractable, question - how do I know that the software I've downloaded or been given really comes from the company or organisation it claims to? This issue is commonly known as provenance and one answer is Digital Signature Verification (DSV).
The basic idea of DSV is to create a unique and verifiable "signature" for a given payload (be it a container image, software package or simply a message). The signer computes a secure hash of the payload and encrypts it with their private key to form the signature. Anyone can then decrypt the signature back into the hash using the public key of the organisation (which must have been previously obtained in a trusted fashion). If we then compute the secure hash of the payload ourselves, and it matches the decrypted hash, we can be sure the payload hasn't been tampered with and originates from the organisation in question.
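The sign-then-verify flow described above can be sketched with standard OpenSSL commands. This is a minimal illustration of the general DSV technique, not how Docker implements it internally; the file names (payload.tar, private.pem etc.) are arbitrary placeholders.

```shell
# Signer: generate a key pair (in practice done once, and the private
# key is kept offline; the public key is distributed via a trusted channel)
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Create a stand-in payload to sign
echo "example payload" > payload.tar

# Signer: hash the payload and sign the hash with the private key
openssl dgst -sha256 -sign private.pem -out payload.sig payload.tar

# Consumer: recompute the hash and check it against the signature using
# the trusted public key; prints "Verified OK" if the payload is intact
openssl dgst -sha256 -verify public.pem -signature payload.sig payload.tar
```

If anyone modifies payload.tar after signing, the final command reports a verification failure instead, which is exactly the property that makes DSV useful for establishing provenance.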
Docker 1.3 makes the first baby steps towards adding provenance. At the moment there are no commands to play with, but if you pull a certified Docker image (the official ones with no user namespace in their name) from the Hub, you should see something similar to the following:
$ docker pull nginx
nginx:latest: The image you are pulling has been verified
5a7d9470be44: Pull complete
feb755848a9a: Pull complete
Note the "has been verified" message, which indicates DSV has succeeded and we can be sure the image does indeed come from Docker(2). (For the moment, verification failures will print a message but otherwise will be ignored).
In future releases of Docker, users will be able to sign their own containers, allowing non-Docker certified containers to be verified. For the moment though, if it's not from a certified image, you'll just have to hope that container you're running is what it says it is...
(2) Docker currently seems to download the Docker certificate from a Cloudfront URL via https, which effectively places Cloudfront in the chain of trust. I would expect this to change in future versions of Docker.