
I have a Django (4.2.2) app running with Python (3.10.12), Celery (5.4.0), Celery Beat (2.6.0), Django Celery Results (2.5.1), Redis, and Postgres.

Here is my Celery configuration. The broker URL is set in settings.py:

# settings.py
CELERY_BROKER_URL = "redis://localhost:6379/3"

The rest lives in celery.py:

from celery import Celery
from django.conf import settings
import os
import django

# Set the DJANGO_SETTINGS_MODULE before calling django.setup()
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'auctopus.settings')
django.setup()

# Initialize Celery
app = Celery('proj')

# Load CELERY_-prefixed settings first: config_from_object() resets any
# configuration set before it is called, so the direct app.conf
# assignments below must come after it.
app.config_from_object(settings, namespace='CELERY')

app.conf.enable_utc = False
app.conf.broker_connection_retry_on_startup = True
app.conf.task_serializer = 'json'
app.conf.accept_content = ['application/json']
app.conf.result_serializer = 'json'
app.conf.timezone = 'Asia/Kolkata'
app.conf.cache_backend = 'default'
app.conf.database_engine_options = {'echo': True}
app.conf.result_backend = 'django-db'
app.conf.result_expires = 3600
app.conf.task_time_limit = 1000
app.conf.task_default_queue = 'default'
app.conf.worker_concurrency = 6

# Autodiscover tasks from installed apps
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

app.conf.beat_scheduler = 'django_celery_beat.schedulers:DatabaseScheduler'

After the Django application starts, it automatically creates several periodic tasks, such as internet_status_check and system_resource_check, using IntervalSchedule.
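
For context, a minimal sketch of how such a periodic task is created with django-celery-beat (the interval and the task path here are illustrative, not the actual project code):

from django_celery_beat.models import IntervalSchedule, PeriodicTask

# Illustrative values: create (or reuse) a 30-second interval
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=30,
    period=IntervalSchedule.SECONDS,
)

# Attach a periodic task to it (the task path is hypothetical)
PeriodicTask.objects.get_or_create(
    name='internet_status_check',
    task='monitoring.tasks.internet_status_check',
    interval=schedule,
)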

At first it runs smoothly, but as soon as I create any new schedule, whether an IntervalSchedule or a CrontabSchedule, the previously running schedules stop executing. I also get a message in the Celery Beat terminal saying DatabaseScheduler: Schedule changed.

After this message, Beat does not send any task for execution, even though tasks due for execution are visible in the Django admin panel.

I tried changing the broker to RabbitMQ and the result_backend to Redis and django-db, but the same issue persists.
Even if I restart Celery Beat, it does not pick up any tasks.

The only thing that worked was setting last_run_at to None for all the tasks and then calling the changed() method from the Django shell at runtime. But this can't be the solution.

from django_celery_beat.models import PeriodicTask, PeriodicTasks

# Reset the bookkeeping so Beat considers every task "never run"...
PeriodicTask.objects.all().update(last_run_at=None)
# ...and bump the schedule-changed counter so Beat reloads the schedule
for task in PeriodicTask.objects.all():
    PeriodicTasks.changed(task)

How do I resolve this issue? Is there any configuration missing on my end, or is it a bug in the Celery Beat scheduler itself?

2 Answers


  1. Chosen as BEST ANSWER

    After looking into the issue for three days, I found that it was caused by the time zone settings.

    I had set USE_TZ = False in my Django app's settings.py, and if you do not set DJANGO_CELERY_BEAT_TZ_AWARE = False explicitly, django-celery-beat defaults it to True.

    So, setting DJANGO_CELERY_BEAT_TZ_AWARE=False in settings.py resolved the issue.
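
    For reference, a minimal sketch of the relevant settings.py lines (the timezone value is taken from the question's configuration; everything else can stay as-is):

    # settings.py
    USE_TZ = False                        # project works with naive datetimes
    DJANGO_CELERY_BEAT_TZ_AWARE = False   # keep django-celery-beat consistent with USE_TZ
    CELERY_TIMEZONE = 'Asia/Kolkata'      # matches app.conf.timezone in celery.py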


  2. This happened to me the first time I used Beat, and what makes it more complicated is that it doesn't produce any errors or logs showing what is missing.

    I suppose you have already configured RabbitMQ and Redis first, e.g., setting the host, vhost, user, password, etc. I don't see CELERY_BROKER_URL and CELERY_RESULT_BACKEND here, so I suppose they are in your settings.py. In my case, my celery.py file looks like this:

    import os

    import celery
    from celery.schedules import crontab
    from django.conf import settings

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

    app = celery.Celery(settings.CELERY_APP,
                        backend=settings.CELERY_RESULT_BACKEND,
                        broker=settings.CELERY_BROKER_URL)
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks(settings.INSTALLED_APPS)

    # CELERYBEAT_SCHEDULE is the old-style (pre-4.0) setting name;
    # in current Celery versions this is app.conf.beat_schedule
    app.conf.update(
        CELERYBEAT_SCHEDULE={
            'send_campaign': {
                'task': 'tasks.MyTask',
                'schedule': crontab(minute='*/2'),
                'options': {'queue': 'high'},
            },
        })


    Now, to make sure Beat runs alongside your Celery workers, you can run this command in another terminal: celery -A proj worker --loglevel=info -B.
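
    The -B flag embeds the Beat scheduler inside the worker, which is fine for development but is better run as a separate process in production (two workers started with -B would each send the scheduled tasks). A sketch of the split setup, assuming the project is named proj and the queue names from the question and the schedule above:

    # terminal 1: the worker, consuming both queues
    celery -A proj worker --loglevel=info -Q default,high

    # terminal 2: Beat as its own process, using the database scheduler
    celery -A proj beat --loglevel=info --scheduler django_celery_beat.schedulers:DatabaseScheduler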

    However, you can try to run it after deleting all your pending tasks using celery -A proj purge, or by truncating the tables django_celery_beat_periodictask and django_celery_beat_periodictasks.
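
    If you prefer to stay in the ORM rather than issue raw SQL, a sketch of the equivalent cleanup from the Django shell (note this permanently deletes every stored schedule):

    from django_celery_beat.models import PeriodicTask

    # django-celery-beat's signal handlers bump the schedule-changed
    # counter on save/delete, so a running Beat reloads the now-empty
    # schedule instead of keeping the stale one in memory.
    PeriodicTask.objects.all().delete()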

    Given that the configuration can be extensive, working with Docker may be useful. Check this article on handling periodic tasks in Django with Celery and Docker, which explains it step by step so you can compare against the configuration you may be missing in your implementation.
