
I’m working on a production Docker Compose setup to run my Laravel app. It has the following containers (among others):

  • php-fpm for the app
  • nginx
  • mysql
  • redis
  • queue workers (a copy of my php-fpm image, plus supervisord)
  • deployment (another copy of my php-fpm image, with a GitLab Runner installed inside it, as well as Node + npm, Composer, etc.)

When I push to my production branch, the GitLab Runner inside the deployment container executes my deploy script, which builds all the things, runs composer update, etc.

Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.

But how can I get the deployment container to run that command inside the queue workers container?

Potential solutions

My research indicates that containers generally should not talk to each other, but if you must, I have found four possible solutions:

  1. Install SSH in both containers
  2. Share docker.sock with the deployment container so it can control other containers via Docker
  3. Have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
  4. Communicate between the containers via a tiny HTTP server in the queue workers container

I really want to avoid 1 and 2, for complexity and security reasons respectively.

I lean toward 3, but I’m concerned about the resources wasted on monitoring the filesystem. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?

4 seems slightly crazy but is certainly doable. Are there any really tiny, simple HTTP servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
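
To make option 4 concrete, here is the kind of thing I have in mind: one socat process in the queue workers container running a handler script per connection (a rough sketch; socat availability, the port, and all paths and names are my assumptions):

    #!/bin/sh
    # restart-queues.sh -- run by socat once per HTTP connection, e.g.:
    #   socat TCP-LISTEN:9000,reuseaddr,fork EXEC:/usr/local/bin/restart-queues.sh
    # stdin is the raw HTTP request; stdout is sent back to the client.
    read -r request_line    # e.g. "GET /restart HTTP/1.1"
    php /var/www/projectname/public_html/artisan queue:restart >/dev/null 2>&1
    printf 'HTTP/1.1 200 OK\r\nContent-Length: 3\r\nConnection: close\r\n\r\nOK\n'

The deployment container could then trigger it with curl http://queue-workers:9000/restart (service name assumed), since Compose containers on the same network resolve each other by service name.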

I’m hoping for other suggestions, or, if there really is no better way than 3 or 4 above, advice on how to implement either of those options.

2 Answers


  1. Chosen as BEST ANSWER

    I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.

    I took a different approach because I am running my CI script inside my containers instead of using a Docker registry and having the CI script rebuild images.

    I could still have given the deploy container access to docker.sock (option #2), thereby allowing my CI script to control Docker (e.g. rebuild containers), but I wasn’t keen on the security implications of that, so I ended up doing #3, with a simple inotifywait loop watching for a change to a special timestamp.txt file that my CI script modifies. Because it’s watching just that one directory, it’s light on the CPU and is working well.

    #!/bin/sh
    # Start watching the special directory so we know when to restart the workers.
    SITE_DIR=/var/www/projectname/public_html
    WATCH_DIR=/var/www/projectname/updated_at
    
    while true
    do
        # Block until something is created or modified inside $WATCH_DIR.
        if inotifywait -e create -e modify "$WATCH_DIR"
        then
            echo "Detected site code change. Executing artisan queue:restart."
            sudo -H -u www-data php "$SITE_DIR/artisan" queue:restart
        fi
    done
    

    All the deploy script has to do to trigger a queue:restart is:

    date > "$WATCH_DIR/timestamp.txt"
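
    Since the watcher’s WATCH_DIR variable is local to that script, the deploy script needs the path too. A sketch of the full trigger step, assuming /var/www/projectname is on a volume shared between the deployment and queue workers containers (it must be, or the watcher never sees the write):

    # Deploy-side trigger; the path must match WATCH_DIR in the watcher above.
    WATCH_DIR=/var/www/projectname/updated_at
    date > "$WATCH_DIR/timestamp.txt"   # any write here wakes the inotifywait loop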
    

  2. Delete the existing containers and create new ones.

    A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1) and then starting them up again. For background workers this shouldn’t interrupt more than their current tasks, and Docker gives them a grace period to finish what they’re working on before they get killed.
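
    As an illustration (not part of the original setup): docker stop sends SIGTERM first and only follows up with SIGKILL after the grace period, which you can lengthen for long-running jobs:

    # Give the worker container up to 60 seconds to finish its current job
    # before it is force-killed (the default grace period is 10 seconds).
    docker stop --time 60 myproject_worker_1   # container name is hypothetical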

    Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyway to run them with the new image. In your design, the "deployment" container needs access to the host’s Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to run sudo docker-compose ... on the target system as part of the deployment process.

    A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:

    version: '3.8'
    services:
      app:
        image: registry.example.com/php-app:${TAG:-latest}
        ...
      worker:
        image: registry.example.com/php-worker:${TAG:-latest}
        ...
    

    Then your deployment just needs to re-run docker-compose up with the new tag:

    ssh user@production.example.com \
      env TAG=20210318 docker-compose up -d
    

    and Compose will take care of recreating the things that have changed.
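
    Compose also reads variables from a .env file in the project directory, so the deployment can persist the tag for later commands (a small sketch, assuming this runs from the directory containing docker-compose.yml):

    # Record the tag so a later plain `docker-compose up -d` reuses this release.
    echo 'TAG=20210318' > .env
    docker-compose up -d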
