
Hey there,

For some days now I have been trying to find a good solution for my Laravel app. I am using Docker and Docker Compose to organise my stack and have split it into the following services:

  • Nginx for serving requests
  • PHP-FPM for processing requests to the Laravel app passed on from Nginx
  • PHP-FPM for handling the Laravel queue
  • PHP-FPM for handling the Laravel schedule
  • MariaDB as the database

The PHP-FPM services all use the same customized Docker image, which adds the necessary files for the Laravel app.

My problem is that I am not sure how I should provide the files of my Laravel app to the services. I can think of two ways:

  1. Copy the files into the image in the customized Dockerfile. With this approach I can deploy my app via a custom registry and start the queue and the schedule workers by changing the entrypoint in the docker-compose.yml (roughly as sketched after this list). The downside is that I do not know how Nginx should access the files inside the container, especially the static assets. It also makes things harder during development.
  2. Bind-mount the files as a volume into the container. This solves (almost) all the downsides of the first approach, but I cannot think of a good solution for starting the queue and the schedule workers (in a Dockerish way).
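
For illustration, option 1 would look roughly like this (a minimal sketch only; php:8.1-fpm, /var/www and the registry/image names are placeholders, not my actual setup):

    # Dockerfile - bake the application code into the PHP-FPM image
    FROM php:8.1-fpm
    WORKDIR /var/www
    COPY . /var/www
    CMD ["php-fpm"]

    # docker-compose.yml - the queue/schedule services reuse the same image
    # and only override the default command
    services:
      queue:
        image: my-registry.example.com/laravel-app:latest
        command: php artisan queue:work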

I would be very thankful for any help and useful advice. After reading a lot of questions here on SO and other blog posts, and watching some YouTube videos, I am very confused.

2 Answers


  1. You can install Supervisor in the Dockerfile and run the necessary services under it: cron, php-fpm, and the queue worker.

    Dockerfile

    FROM php:7.2-fpm-alpine

    # install Supervisor alongside the other required packages
    RUN set -ex \
      && apk add --update \
      ...
      supervisor

    COPY supervisord.conf /etc/supervisord.conf

    # register the Laravel scheduler cron entry
    ADD crontab /etc/cron.d/laravel-cron
    RUN chmod 0644 /etc/cron.d/laravel-cron \
        && crontab /etc/cron.d/laravel-cron

    # run Supervisor in the foreground as the container's main process
    ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
    

    supervisord.conf

    [supervisord]
    nodaemon=true
    
    [unix_http_server]
    file=/tmp/supervisord.sock
    chmod=0700
    
    [program:cron]
    command=/usr/sbin/crond -f -l 8
    stdout_logfile=/dev/stdout
    stderr_logfile=/dev/stderr
    stdout_logfile_maxbytes=0
    stderr_logfile_maxbytes=0
    autorestart=true
    priority=10
    
    [supervisorctl]
    serverurl=unix:///tmp/supervisord.sock
    
    [rpcinterface:supervisor]
    supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
    
    [program:php-fpm]
    command=php-fpm -F
    autorestart=true
    priority=5
    stdout_events_enabled=true
    stderr_events_enabled=true
    
    [include]
    files = /etc/supervisor/conf.d/*.conf
    
    

    crontab

    * * * * * cd /var/www/crm && php artisan schedule:run >> /dev/null 2>&1
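
    The queue worker mentioned above can run under Supervisor as well. A minimal sketch of such a program block (assuming the app lives in /var/www/crm, as in the crontab above); it could be dropped into /etc/supervisor/conf.d/ so that the existing [include] section picks it up:

    [program:laravel-queue]
    ; keep the queue worker in the foreground so Supervisor can manage it
    command=php /var/www/crm/artisan queue:work --sleep=3 --tries=3
    autorestart=true
    stdout_logfile=/dev/stdout
    stderr_logfile=/dev/stderr
    stdout_logfile_maxbytes=0
    stderr_logfile_maxbytes=0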
    
  2. In a Dockerish way, what you can do is separate these services into different containers, so that the queue and cron jobs do not affect your main application.
    You can take a look at this Dockerfile and docker-compose.yml for reference:

    # Dockerfile
    FROM php:8.1-fpm
    
    # Other Dockerfile instructions
    
    EXPOSE 9000
    CMD ["php-fpm"]
    
    # End of Dockerfile 
    
    
    # docker-compose.yml
    
    version: '3'
    services:
      
      #PHP Service
      laravel-app:
        image: laravel-app # Build the application image based on Dockerfile present in the root directory 
        build:
          context: .
          dockerfile: Dockerfile
        container_name: laravel-app
        tty: true
        environment:
          SERVICE_NAME: laravel-app
          SERVICE_TAGS: dev
        working_dir: /var/www
        volumes:
          - ./:/var/www
          - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
          - laravel-network
    
      # Queue Worker Service
      laravel-queue-worker:
        image: laravel-app  # Reference the existing image built for laravel-app
        container_name: laravel-queue-worker
        tty: true
        environment:
          SERVICE_NAME: laravel-queue-worker
          SERVICE_TAGS: dev
        working_dir: /var/www
        volumes:
          - ./:/var/www
          - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
          - laravel-network
        command: php artisan queue:work --queue=default --sleep=3 --tries=3 --max-time=3600
        restart: always

      # Scheduler Service
      laravel-scheduler-worker:
        image: laravel-app  # Reference the existing image built for laravel-app
        container_name: laravel-scheduler-worker
        tty: true
        environment:
          SERVICE_NAME: laravel-scheduler-worker
          SERVICE_TAGS: dev
        working_dir: /var/www
        volumes:
          - ./:/var/www
          - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
          - laravel-network
        command: php artisan schedule:work   # schedule:run exits after one pass; schedule:work keeps the scheduler running
        restart: always

    # Docker Networks
    networks:
      laravel-network:
        name: laravel-network
        driver: bridge
    

    Here, we first build the laravel-app image and then use the same image to run the queue worker and the task scheduler in separate containers. Your Nginx service will serve files only from the laravel-app container.
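
    To cover the question of how Nginx reaches the files, a minimal sketch of an Nginx service that shares the same bind mount and passes PHP requests to the laravel-app container (the laravel-nginx name and the ./nginx/default.conf path are assumptions):

    # additional service in docker-compose.yml
      nginx:
        image: nginx:alpine
        container_name: laravel-nginx
        ports:
          - "80:80"
        volumes:
          - ./:/var/www                                         # same code as the PHP containers
          - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
        networks:
          - laravel-network
        depends_on:
          - laravel-app

    # nginx/default.conf
    server {
        listen 80;
        root /var/www/public;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            # laravel-app is the PHP-FPM service defined above, listening on port 9000
            fastcgi_pass laravel-app:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }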
