I use a 32-core Linux server as a GitLab Runner host. All jobs are executed within Docker, and some of these jobs run a docker build. For accessing Docker inside the jobs we use the TLS-backed docker-in-docker (DinD) approach.
I noticed that on every build, the DinD daemon pulls the base image specified in the Dockerfile fresh from Docker Hub. While that behaviour makes sense, it is not what I want, since it wastes quite some time.
Is it somehow possible to share the local images between the host (root) and the build container (DinD)?
2 Answers
I ended up continuing to use the DinD service without explicitly mounting the Docker socket in the config.toml (I also want to try the mount approach and may edit this answer in the future). I did so because I found a GitLab guide that ultimately helped me reduce the network waiting time. I basically set up a local registry, shared it with the runner via the host IP (hostname --ip-address), and then made use of it in my config.toml (don't forget to restart the runner).
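A rough sketch of that setup, assuming the host IP is 192.168.1.50 (a placeholder): first start a local pull-through cache of Docker Hub on the host, e.g. with `docker run -d -p 5000:5000 -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io --restart always --name registry-mirror registry:2`, then point the DinD service at it from the runner's config.toml:

```toml
# /etc/gitlab-runner/config.toml (excerpt) -- a sketch, adjust to your setup
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
    # Tell the docker:dind service to use the local pull-through cache,
    # so base images are fetched from the host instead of Docker Hub.
    [[runners.docker.services]]
      name = "docker:dind"
      command = ["--registry-mirror", "http://192.168.1.50:5000"]
```

After editing config.toml, restart the runner (e.g. `gitlab-runner restart`) so the new service configuration takes effect.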
No. The actual image data is stored in an opaque, installation-specific form in /var/lib/docker, and the image and container data are somewhat intermixed. Two different Docker daemons can't share a /var/lib/docker directory, and there's no way to directly move an image from one Docker daemon to another.
If every build launches a new Docker daemon via a DinD container, then it starts with no local images and has to re-pull the base image every time. If you can reconfigure the CI system to reuse the host's Docker daemon, then this won't be a problem.
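Reusing the host's daemon typically means bind-mounting its socket into the job containers via the runner's config.toml; a minimal sketch (image name and paths are assumptions, adapt as needed, and note that jobs then get root-equivalent access to the host):

```toml
# /etc/gitlab-runner/config.toml (excerpt) -- socket-mount alternative to DinD
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:24.0"
    # Mount the host's Docker socket so jobs talk to the host daemon and
    # therefore share its local image cache; privileged/DinD is not needed.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```

With this setup, images pulled by one job remain in the host daemon's cache and are reused by later jobs.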
(Pulling the base image on every build is probably a good practice, since images like ubuntu:20.04 get routinely updated with security fixes; in normal operation Docker will be able to tell that the image hasn't actually changed and will only need to download the small image manifest. That optimization isn't available if every build starts from an empty state.)