I need to run Pynt to automate security tests for my APIs. I have a Postman collection and an environment, and everything works perfectly fine through Postman. Now I want to run it in GitLab CI/CD using Newman, so I built a Docker image that includes Python (for Pynt), npm (for Newman), and DinD, since Pynt apparently needs it.
With this setup I get an error saying the Docker daemon needs privileged access, which has some security risks that concern me. My question is: why does Pynt need DinD and a Docker daemon environment?
2 Answers
Having docker available is a documented prerequisite to using `pynt`: the `pynt` CLI invokes `docker` commands to perform its function. The CI/CD documentation for pynt describes the GitLab CI configuration to achieve this, assuming you are either using GitLab.com shared runners or a self-hosted docker-based executor using docker-in-docker.
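As a sketch of what that configuration looks like (job name, image tags, and the `docker info` sanity check are my assumptions; consult pynt's own CI/CD docs for the exact job definition), a docker-in-docker job might be:

```yaml
# Hypothetical .gitlab-ci.yml sketch: docker-in-docker so a CLI tool
# like pynt can issue docker commands from inside the job.
security-test:                        # job name is an assumption
  image: docker:24                    # provides the docker CLI
  services:
    - docker:24-dind                  # runs the docker engine as a job service
  variables:
    DOCKER_HOST: tcp://docker:2376    # point the CLI at the dind service
    DOCKER_TLS_CERTDIR: "/certs"      # dind generates TLS certs here
    DOCKER_TLS_VERIFY: "1"
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  script:
    - docker info                     # sanity check: CLI can reach the engine
    # ...run newman / pynt here; any docker command now works in this job
```

The `services:` section is what starts the `docker:dind` engine, and it is exactly that section that is unnecessary under the socket-binding setup discussed below.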
There's probably more than one configuration that will let you do this in GitLab CI, but in any case you'll need the ability to run `docker` commands in your GitLab CI jobs to be able to use `pynt`.
Since you're getting an error, you are presumably using a self-hosted runner that is not configured for docker-in-docker. In that case, you will need to configure it so your jobs can use `docker` (and, by extension, `pynt`). If your runner is instead configured to use docker via the socket-binding method, just remove the `services:` section from the provided example, because it won't be needed in that scenario.

---

Pynt does not need dind. It does need docker available, as it executes `docker` commands.
Now, the docker CLI is not Docker itself: it just talks, using the Docker API, to an instance of the Docker engine, and it is the engine that does things like actually running containers. Herein lies the problem: you can run the Docker engine in a container, using `docker:dind`, but the engine is a bit of a trick. It doesn't actually "run" anything itself; it simply sets up Linux kernel objects to create an isolated Linux process and hands it off to the kernel to run.
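To see that the CLI is only a thin API client, you can talk to the engine's REST API directly over its socket. A quick illustration (assumes a local engine listening on the default `/var/run/docker.sock`; the remote host name is made up):

```shell
# The docker CLI just wraps HTTP calls to the engine's API.
# This query returns roughly the same data as `docker version`:
curl --unix-socket /var/run/docker.sock http://localhost/version

# The same CLI can be pointed at any engine, local or remote:
DOCKER_HOST=tcp://some-engine:2375 docker info   # "some-engine" is an assumption
```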
And here we return to the problem: the `docker:dind` container is just that, a user-mode process that needs an OS to actually run its containers. It literally cannot do anything without being given permission to create cgroups and network namespaces on the host. And that means it needs privileged access to the host.
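Concretely, for a self-hosted docker executor this shows up as the `privileged` flag in the runner configuration. A hedged sketch (runner name and image are assumptions):

```toml
# Hypothetical /etc/gitlab-runner/config.toml excerpt for the
# docker-in-docker setup: job containers must run privileged so the
# dind service can create cgroups and namespaces on the host.
[[runners]]
  name = "dind-runner"              # name is an assumption
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true               # required for docker:dind
    volumes = ["/certs/client", "/cache"]
```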
There is an alternative: you already have an instance of Docker on the host. You can mount `/var/run/docker.sock` into the GitLab runner (and into any job containers), and their docker CLI will then find and call that host Docker instance.
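In runner configuration terms, socket binding looks roughly like this (runner name is an assumption; the key line is the `volumes` entry):

```toml
# Hypothetical /etc/gitlab-runner/config.toml excerpt: socket binding
# instead of docker-in-docker. Job containers share the HOST's Docker
# engine via the mounted socket, so no `services:` section and no
# privileged mode are needed.
[[runners]]
  name = "docker-socket-runner"     # name is an assumption
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```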
This is, arguably, less secure, because these containers now have access to the same Docker instance that hosts the runner itself. If jobs go beyond simple `docker build` / `docker push` actions, they could, for example, kill containers, or start Compose stacks that will outlive any job because they are owned by the host Docker. A transient `docker:dind` instance does not give access to the host, and any long-lived containers are shut down when it closes.