
I’m building a Flask application with Celery for background tasks. The app uses LocalStack running in a Docker container to mimic SQS locally as the message broker. I’ve gotten Flask and Celery working correctly with LocalStack when everything runs locally: Flask receives a request, adds a message to the SQS queue, and Celery picks up that task and executes it.
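
Roughly, the wiring looks like this (a simplified sketch, not my exact code; the route and task names are placeholders):

# app/__init__.py (simplified sketch)
from flask import Flask
from celery import Celery

celery = Celery(__name__)
celery.config_from_object('celeryconfig')  # loads the SQS BROKER_URL shown below

@celery.task
def do_work():
    print('task executed')

def create_app():
    app = Flask(__name__)

    @app.route('/enqueue')
    def enqueue():
        do_work.delay()  # Flask pushes a message onto the SQS queue here
        return 'queued', 202

    return app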

I’ve attempted to dockerize Flask and Celery along with LocalStack, and all my services run as expected except the Celery worker, which doesn’t execute tasks from the queue. I can start a Celery worker locally that reads the queue and executes tasks, but the dockerized Celery worker won’t pull any tasks.

Running the Celery worker inside the Flask container produced the same result, as did adding arguments like --without-gossip that I found in a GitHub thread.
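
For reference, combined with the command in celeryworker.sh below, that invocation looks like:

celery worker -A app.celery_worker.celery --without-gossip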

Am I missing something in the Docker architecture that prevents Celery from pulling tasks off the SQS queue?

Here’s my docker-compose.yml:


services:
  dev:
    build:
      context: .
      dockerfile: 'dev.Dockerfile'
    ports:
    - "5050:5000"
    restart: always
    volumes:
    - .:/app
    environment:
    - GUNICORN_CMD_ARGS="--reload"
    - docker_env=true
    stdin_open: true
    tty: true
    command: ./entrypoint.sh
    depends_on: 
      - localstack

  # mimics AWS services locally
  localstack:
    image: localstack/localstack:latest
    ports:
      - '4561-4599:4561-4599'
      - '8080:8080'
    environment:
      - SERVICES=sqs
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - './.localstack:/tmp/localstack'
    restart: always

  celery:
    build:
      context: .
      dockerfile: 'dev.Dockerfile'
    volumes:
      - .:/app
    environment: 
      - docker_env=true
    stdin_open: true
    tty: true
    command: ./celeryworker.sh
    restart: always
    links:
      - localstack
    depends_on:
      - localstack



dev.Dockerfile:

FROM python:3.6

USER root

# Environment Variables
ENV HOST 0.0.0.0
ENV PORT 5000

# Install Packages
COPY requirements.txt /requirements.txt

RUN /bin/bash -c "python3 -m venv docker \
    && source docker/bin/activate \
    && pip3 install --upgrade pip \
    && pip3 install -r requirements.txt"

# Source Code
WORKDIR /app
COPY . .
COPY app/gunicorn_config.py /deploy/app/gunicorn_config.py
COPY entrypoint.sh /bin/entrypoint.sh
COPY celeryworker.sh /bin/celeryworker.sh


USER nobody

EXPOSE 5000
CMD ["bash"]

entrypoint.sh:

#!/bin/bash
source ../docker/bin/activate
gunicorn -c app/gunicorn_config.py --bind 0.0.0.0:5000 'app:create_app()'

celeryworker.sh:

#!/bin/bash
source ../docker/bin/activate
celery worker -A app.celery_worker.celery

celeryconfig.py:

import os

BROKER_URL = 'sqs://fake:fake@localhost:4576/0'     # local sqs
# BROKER_URL = 'redis://localhost:6379/0'     # local redis
# RESULT_BACKEND = 'redis://localhost:6379/0'     # local redis
if os.getenv('docker_env', 'local') != 'local':
    BROKER_URL = 'sqs://fake:fake@localstack:4576/0'

BROKER_TRANSPORT_OPTIONS = {'queue_name_prefix': 'app-'}
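
For completeness, app/celery_worker.py (the module the -A flag points at) isn’t shown above; a minimal version consistent with the sketch earlier would be something like this (a reconstruction, not the exact file):

# app/celery_worker.py (reconstruction)
from app import celery, create_app

# instantiate the Flask app so tasks execute inside an application context
app = create_app()
app.app_context().push()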

I use the same commands locally and everything runs as expected. I’ve also recreated my local virtual environment to make sure I don’t have extra packages that aren’t in requirements.txt.
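
As a sanity check, the queue can also be inspected from the host with the AWS CLI pointed at LocalStack (assuming the CLI is installed; app-celery is the queue_name_prefix plus Celery’s default queue name, and the queue URL format may vary by LocalStack version):

aws --endpoint-url=http://localhost:4576 sqs list-queues
aws --endpoint-url=http://localhost:4576 sqs get-queue-attributes \
    --queue-url http://localhost:4576/queue/app-celery \
    --attribute-names ApproximateNumberOfMessages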

2 Answers


  1. Chosen as BEST ANSWER

    Solved this by setting HOSTNAME and HOSTNAME_EXTERNAL in the LocalStack environment variables. By switching the two between localhost and localstack, I can get either my local or my dockerized Celery workers pulling tasks.
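
    For example, in the localstack service of docker-compose.yml, for the dockerized-worker case (environment block abridged):

    localstack:
      image: localstack/localstack:latest
      environment:
        - SERVICES=sqs
        - HOSTNAME=localstack            # use localhost here instead to run workers on the host
        - HOSTNAME_EXTERNAL=localstack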


  2. Related to Celery, but with S3 instead of SQS: depending on whether requests are being processed on the host or from within a Docker service, the AWS client may need to be instantiated with an explicit endpoint URL.

    e.g. in Python using boto3:

    import boto3

    # client instantiated from within a Celery service running as a Docker service:
    client = boto3.client(
      service_name="s3",
      endpoint_url="http://localstack:4566",  # <= localstack
    )
    
    # client instantiated in application, whether running on host or
    # as Docker service
    client = boto3.client(
      service_name="s3",
      endpoint_url="http://localhost:4566", # <= localhost
    )
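
    One way to avoid hard-coding this is to pick the host from an environment variable, mirroring the docker_env check in the question’s celeryconfig.py (a sketch; the variable name is taken from that config):

    import os
    import boto3

    # inside Docker the Compose service name resolves; on the host it does not
    host = "localstack" if os.getenv("docker_env") else "localhost"
    client = boto3.client(
      service_name="s3",
      endpoint_url="http://{}:4566".format(host),
    )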
    