
I have the following containers:

  • nginx:latest
  • myapp container (derived from php-fpm:alpine)

Currently I have a dummy project with a CI pipeline in place which, at build time, compiles the production variant of the resources (images/js/css, …). The built files end up in /public/build. At the very end of the CI pipeline, I package everything into Docker images and upload them to Docker Hub.

Both nginx and myapp have a volume (not a bind mount) set up and pointing to /opt/ci-test/public/build.

This works the first time.

But let’s say I add a new file, new.css; the new version of the Docker image will contain the built variant of new.css.

Running a new container with the pre-existing volume does not reveal the new files, and I understand that it should not. I can create a new volume, my_app_v2.

At this point nginx does not see this new volume, and the nginx container has to be removed and re-run (with the new volume) for the change to take effect.

Is there an easy way to overcome this?

My intention is to use the nginx container for multiple PHP apps, and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?

EDIT:

One workaround I have managed to dig out is to remove all files from the attached volume and then start a new myapp container; this mirrors all the latest files into the volume. But it feels dirty…
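
For reference, that workaround boils down to something like the following (the volume, container, and image names are the ones from EDIT3 below; the throwaway alpine helper is just one way to empty the volume):

# Empty the named volume using a throwaway container.
docker run --rm -v shr_test:/vol alpine find /vol -mindepth 1 -delete

# Replace the app container; because the volume is now empty, docker
# repopulates it from the new image's /opt/ci-test/public/build.
docker rm -f php71alp
docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest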

EDIT2:

Related issue (case 3): https://github.com/moby/moby/issues/18670#issuecomment-165059630

EDIT3:

Dockerfile

FROM  php:7.2.30-fpm-alpine3.11

COPY . /opt/ci-test
WORKDIR /opt/ci-test

VOLUME /opt/ci-test/public/build

So far, I do not have docker-compose, and I run the containers manually via these commands:

docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -it -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest

2 Answers


  1. First option: don’t use a volume. If you want the files produced during the image build to be accessible, and you don’t need persistence, then the volume isn’t helping your workflow.

    Second option: delete the previous volume between runs and use a named volume, which docker will initialize with the image contents.

    Third option: modify the image build and the container entrypoint to save the directory off to a different location during the build, and restore that location into the volume on container startup in the entrypoint (see the sketch below). I’ve got an implementation of this in the save-volume and load-volume scripts in my base image. It gets more complicated when you want to merge the contents of the volume with the contents of the host, and you’ll need to decide how to handle files getting deleted and which changes from previous runs to keep.

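    A rough sketch of that third option, assuming the approach rather than the actual save-volume/load-volume scripts: the backup path /opt/ci-test/.build-backup and the script name docker-entrypoint.sh are made up for illustration, and the image build is assumed to copy the freshly built assets into that backup path.

    #!/bin/sh
    # docker-entrypoint.sh (hypothetical name) -- copied into the image and set
    # as its ENTRYPOINT. The Dockerfile is assumed to also do, roughly:
    #   RUN  cp -a /opt/ci-test/public/build /opt/ci-test/.build-backup
    #   COPY docker-entrypoint.sh /usr/local/bin/
    #   ENTRYPOINT ["docker-entrypoint.sh"]
    #   CMD ["php-fpm"]   # restate CMD: setting ENTRYPOINT resets the inherited one
    set -e

    # On every container start, refresh the (possibly stale) volume from the
    # copy baked into the image. This clobbers whatever is in the volume, so
    # deletions and runtime changes need a separate policy.
    rm -rf /opt/ci-test/public/build/*
    cp -a /opt/ci-test/.build-backup/. /opt/ci-test/public/build/

    # Finally run the real process (php-fpm for this base image).
    exec "$@"

    With something along these lines, nginx can keep the same named volume mounted; starting the new myapp container is what refreshes the files.
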
  2. Simply do not use a volume for this.

    You should treat docker images as “monolithic packages” that contain your dependencies (nginx) and your app’s files (images, js, css…). There’s no need to treat your app’s files any differently than nginx itself; it’s all part of the single docker image.

    Without a volume, you run v1 of your image, nginx sees the v1 files. You run v2 of your image, nginx sees the v2 files.

    Volumes are intended to be used when you actually want to keep files between container versions (such as databases, file uploads…). Not for your site’s static assets.

    My intention is to use the nginx container for multiple PHP apps, and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?

    Yes, this is bad design. If you want to run multiple apps, you should run one Docker container per app. That way, when you release a new version of one app, you only need to restart that container. Containers aren’t supposed to be treated like traditional virtual machines that you “SSH into” and configure manually. Containers are “throw-away”. New version of the app? Just replace the container with a new one built from the newer image (see the sketch below).

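    As a sketch, reusing the names from the question (image <myaccount>/citest, container php71alp) with an illustrative v2 tag and no shared volume in the picture, releasing one app looks roughly like:

    # Pull the freshly published image for the app that changed.
    docker pull <myaccount>/citest:v2

    # Throw away the old container and start one from the new image.
    docker rm -f php71alp
    docker run -it -d --name php71alp -p 9000:9000 <myaccount>/citest:v2

    # Nothing else is touched: the containers serving the other apps keep running.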