
I have a docker compose file for my application, and Celery is one of the services.

  • The celery worker command is working, but
  • the celery multi command is not.
  celery:
    container_name: celery_application
    build: 
      context: .
      dockerfile: deploy/Dockerfile
    # restart: always
    networks:
      - internal_network
    env_file:
      - deploy/.common.env
    # command: ["celery", "--app=tasks", "worker", "--queues=upload_queue", "--pool=prefork", "--hostname=celery_worker_upload_queue", "--concurrency=1", "--loglevel=INFO", "--statedb=/external/celery/worker.state"]  # This is working
    command: ["celery", "-A", "tasks", "multi", "start", "default", "model", "upload", "--pidfile=/external/celery/%n.pid", "--logfile=/external/celery/%n%I.log", "--loglevel=INFO", "--concurrency=1", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue"]  # This is not working
    # tty: true
    # stdin_open: true
    depends_on:
      - redis
      - db
      - pgadmin
      - web
    volumes:      
      - my_volume:/external

Getting this output

celery | celery multi v5.2.7 (dawn-chorus)
celery | > Starting nodes...
celery |     > default@be788ec5974d: 
celery | OK
celery |     > model@be788ec5974d:
celery | OK
celery |     > upload@be788ec5974d:
celery exited with code 0

All services come up except celery, which exits with code 0.
What am I missing when using celery multi?
Please suggest.

2 Answers


  1. The celery multi command does not wait for the workers to finish; it starts multiple celery workers in the background and then exits. Unfortunately, in a docker container environment the termination of the foreground process causes the child workers to be terminated too.
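    This can be reproduced with any backgrounded process (a minimal sketch, nothing celery-specific): the parent exits with code 0 as soon as it has forked its children. Outside a container the orphaned child keeps running, but when that parent is PID 1 of a container, the container stops and the children go with it.

```shell
# The parent shell backgrounds sleep and exits immediately with 0,
# just like celery multi does with its workers.
sh -c 'sleep 30 &'
echo "parent exit code: $?"   # prints: parent exit code: 0
```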

    It’s not good practice to use celery multi with docker like this, because an issue in a single worker may not be reflected on the container console: a worker can crash, die, or loop forever inside the container without any signal for management or monitoring. With the single-worker command, the exit code is returned to the docker container, and the service is restarted if the worker terminates.

    If you still really need to use celery multi like this, you can use bash to append a forever-running command to prevent the container from exiting:

    command: ["bash", "-c", "celery -A tasks multi start default model upload --pidfile=/external/celery/%n.pid --logfile=/external/celery/%n%I.log --loglevel=INFO --concurrency=1 -Q:default default_queue -Q:model model_queue -Q:upload upload_queue; tail -f /dev/null"]
    

    The tail -f /dev/null keeps your container alive forever, whether or not the celery workers are running. Of course, your container must have bash installed.
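    A variant of the same trick that also streams the worker logs to the container console (assuming the --logfile paths from the command above) is to tail the log files instead of /dev/null:

```yaml
# Illustrative: tail the worker logs so `docker logs` still shows output.
# celery multi should have created the log files before tail runs.
command: ["bash", "-c", "celery -A tasks multi start default model upload --pidfile=/external/celery/%n.pid --logfile=/external/celery/%n%I.log --loglevel=INFO --concurrency=1 -Q:default default_queue -Q:model model_queue -Q:upload upload_queue && tail -f /external/celery/*.log"]
```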

    My assumption is that you want to containerize all celery workers into a single container for ease of use. If so, you can try https://github.com/just-containers/s6-overlay instead of celery multi. s6-overlay can monitor your celery workers, restart them when they exit, and provide process-supervisor utilities similar to celery multi, but it is designed for exactly this purpose.
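    A minimal sketch of one such service, assuming the s6-overlay layout where each supervised process is a run script under /etc/services.d/&lt;name&gt;/ and the image entrypoint is /init (only the upload worker is shown; the other queues get identical scripts):

```shell
#!/bin/sh
# /etc/services.d/upload/run -- s6 starts this script and restarts
# the worker whenever it exits.
# exec replaces the shell so s6 supervises the worker process directly.
exec celery --app=tasks worker \
    --queues=upload_queue \
    --hostname=upload@%h \
    --concurrency=1 \
    --loglevel=INFO
```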

  2. I think what you’re missing is that docker containers (unlike virtual machines) are meant to run a single process and exit. If you run celery using multi, you actually run celery as a daemon process – not the actual process the container should run.

    One solution is the one proposed by @truong-hua – it runs a new shell (bash) in a new process and then invokes the celery multi command. Celery will still exit after daemonizing, but the shell process will remain. This seems like an over-complication to me, though – what’s the point of running it in the background then?

    A simpler solution is to run celery attached (as the main process), e.g. celery -A proj worker --concurrency=<n> ... (I would advise not setting --concurrency higher than 1 in a dockerized environment). Then celery is your main process and you can track it.
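    In compose terms that means one service per queue, each running a single attached worker – a sketch based on the compose file in the question (service names are illustrative; the default queue gets a third, identical service):

```yaml
celery_upload:
  build:
    context: .
    dockerfile: deploy/Dockerfile
  restart: always   # compose restarts the container if the worker dies
  command: ["celery", "--app=tasks", "worker", "--queues=upload_queue", "--hostname=upload@%h", "--concurrency=1", "--loglevel=INFO"]

celery_model:
  build:
    context: .
    dockerfile: deploy/Dockerfile
  restart: always
  command: ["celery", "--app=tasks", "worker", "--queues=model_queue", "--hostname=model@%h", "--concurrency=1", "--loglevel=INFO"]
```

    Each worker now fails loudly and independently, and docker can restart it on its own.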

    If that’s still not what you need, here’s a thread on how to attach a detached process.
