
I find it strange that my Celery worker logs only ever mention ForkPoolWorker-31, as if tasks were being processed by a single pool process.

Running top confirms it: only one core is very busy while the others are mostly idle.

I start Celery with:
celery -A my_service.celery_tasks:celery_app worker --loglevel=INFO -n ${CELERY_INSTANCE} -E

[2020-11-07 00:16:32,677: INFO/MainProcess] celery@grid12 ready.
[2020-11-07 00:16:36,416: WARNING/ForkPoolWorker-31] 19889
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19934
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19882
[2020-11-07 00:16:36,432: WARNING/ForkPoolWorker-31] 20282
[2020-11-07 00:16:36,441: WARNING/ForkPoolWorker-31] 20031
[2020-11-07 00:16:36,446: WARNING/ForkPoolWorker-31] 19884
[2020-11-07 00:16:36,452: WARNING/ForkPoolWorker-31] 20124
[2020-11-07 00:16:36,456: WARNING/ForkPoolWorker-31] 20030
[2020-11-07 00:17:53,313: WARNING/ForkPoolWorker-31] 19897
[2020-11-07 00:17:53,446: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:11.930s]
[2020-11-07 00:17:54,099: INFO/ForkPoolWorker-31] Some logs...
[2020-11-07 00:17:55,771: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:15.501s]
[2020-11-07 00:17:56,307: INFO/ForkPoolWorker-31] 
 -------------- celery@XXXXX v5.0.1 (singularity)
--- ***** ----- 
-- ******* ---- Linux-4.14.13-1.el7.elrepo.x86_64-x86_64-with-glibc2.10 2020-11-07 00:22:33
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         my_service.celery_tasks:0x7fffed3beaf0
- ** ---------- .> transport:   redis://:**@grid12:6385/0
- ** ---------- .> results:     redis://:**@grid12:6385/0
- *** --- * --- .> concurrency: 48 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . myTask

[2020-11-07 00:22:34,002: INFO/MainProcess] Connected to redis://:**@grid12:6385/0
[2020-11-07 00:22:34,041: INFO/MainProcess] mingle: searching for neighbors
[2020-11-07 00:22:35,942: INFO/MainProcess] mingle: sync with 30 nodes
[2020-11-07 00:22:36,164: INFO/MainProcess] mingle: sync complete
[2020-11-07 00:22:37,733: INFO/MainProcess] pidbox: Connected to redis://:**@grid12:6385/0.

The machine has 48 cores, and average CPU usage is below 2%.

There are plenty of pending tasks. Any suggestions?
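(For anyone debugging the same symptom, a hedged sketch of commands that can confirm whether only one pool process is taking work. The `-A` path is the one from the command above; the rest assumes a standard Celery 5 install with the broker reachable.)

```shell
# Check how tasks are spread across worker nodes and pool processes.

# Tasks each worker node currently has reserved (prefetched) and running:
celery -A my_service.celery_tasks:celery_app inspect reserved
celery -A my_service.celery_tasks:celery_app inspect active

# On the host itself, the prefork children appear as separate PIDs:
ps -ef | grep '[c]elery'
```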

2 Answers


  1. Recently I faced the same problem and was able to solve it by adding the `-O fair` flag to the Celery worker command.

    My whole command is as follows:

    # "-O fair" is the key flag for spreading tasks across the prefork child processes
    # celery_app is the module in my program that contains the Celery instance
    # cel_app_worker is the name of the Celery worker
    # "-P prefork" is not necessary since prefork is the default pool, but I decided to keep it
    celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker
    

    Please try it and let me know if it worked for you.
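    As far as I understand it, `-O fair` changes the task scheduling strategy: without it, pool processes can be handed several prefetched messages at once, so tasks pile up behind one busy child. A related knob (my assumption, not part of the answer above) is the prefetch multiplier:

    ```shell
    # Sketch: combine -O fair with --prefetch-multiplier=1 so each child
    # reserves at most one message ahead of what it is executing. Names and
    # values mirror the command above; adjust them to your own app.
    celery -A celery_app worker --loglevel=INFO --concurrency=8 \
        -O fair --prefetch-multiplier=1 -P prefork -n cel_app_worker
    ```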

    I run the Celery app in Docker; my Dockerfile:

    FROM python:3.7-alpine
    
    WORKDIR /usr/src/app
    
    RUN apk add --no-cache tzdata
    
    ENV TZ=Europe/Moscow
    
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
    
    COPY requirements.txt ./
    RUN pip install --no-cache-dir -r requirements.txt
    
    COPY . .
    
    # Create a group and user
    RUN addgroup -S appgroup && adduser -S celery_user -G appgroup
    
    # Tell Docker that all future commands should run as celery_user
    USER celery_user
    
    # !! "-O fair" is a key component for simultaneous task execution by the prefork workers !!
    CMD celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker
    
  2. I recently encountered this problem. It turned out that a Celery task was stuck in an infinite `for` loop. I had to terminate the Celery worker, fix the loop, and restart the worker; the problem then disappeared.
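    If a runaway task like this is a risk, worker time limits can serve as a safety net (a hedged sketch, not part of the answer above; the numbers are illustrative). `--soft-time-limit` raises `SoftTimeLimitExceeded` inside the task so it can clean up, and `--time-limit` hard-kills the child process afterwards:

    ```shell
    # Sketch: terminate tasks that run longer than expected instead of letting
    # one child block the pool forever. Limits are in seconds.
    celery -A my_service.celery_tasks:celery_app worker --loglevel=INFO \
        --soft-time-limit=300 --time-limit=360
    ```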
