
I have recently started using Docker + Celery. I have also shared the full sample code for this example on GitHub, and the following snippets from it help explain my point.

For context, my example is designed to be a node that subscribes to events in a system of microservices. This node comprises the following services:

  1. the Subscriber (using kombu to subscribe to events)
  2. the Worker (using celery for async task acting on the events)
  3. Redis (as message broker and result backend for celery)

The services are defined in a docker-compose.yml file as follows:

version: "3.7"
services:
    # To launch the Subscriber (using Kombu incl. in Celery)
    subscriber:
        build: .
        tty: true
        #entrypoint: ...

    # To launch Worker (Celery)
    worker:
        build: .
        entrypoint: celery worker -A worker.celery_app --loglevel=info
        depends_on: 
            - redis

    redis: 
        image: redis
        ports: 
            - 6379:6379
        entrypoint: redis-server

For simplicity, I have left out the subscriber code; using the Python interactive shell in the subscriber container should suffice for this example:

python3
>>> from worker import add
>>> add.delay(2,3).get()
5

And in the worker container logs:

worker_1      | [2020-09-17 10:12:34,907: INFO/ForkPoolWorker-2] worker.add[573cff6c-f989-4d06-b652-96ae58d0a45a]: Adding 2 + 3, res: 5
worker_1      | [2020-09-17 10:12:34,919: INFO/ForkPoolWorker-2] Task worker.add[573cff6c-f989-4d06-b652-96ae58d0a45a] succeeded in 0.011764664999645902s: 5

While everything seems to be working, I felt uneasy. This example does not seem to respect the isolation principle of a Docker container.

Aren't containers designed to be isolated at the level of their OS, processes, and network? And if containers have to communicate, shouldn't it be done via IP addresses and network protocols (TCP/UDP, etc.)?

Firstly, the worker and the subscriber run the same codebase in my example, so no issue is expected with the import statement.

However, the Celery worker is launched from the entrypoint in the worker container, so how did the subscriber manage to call the Celery worker instance in the supposedly isolated worker container?

To further verify that it is in fact calling the Celery worker instance in the worker container, I stopped the worker container and repeated the Python interactive shell example in the subscriber container. The request waited (as expected of Celery) and returned the same result as soon as the worker container was started again. So, in my opinion, a service in one container is indeed calling an app instance in another container WITHOUT networking, unlike the case of connecting to Redis (using an IP address, etc.).

Please advise whether my understanding is incorrect, or whether there is a wrong implementation somewhere that I am not aware of.

2 Answers


  1. Both the consumer (worker) and the producer (subscriber) are configured to use Redis (redis) as both broker and result backend. That is why it all worked. When you executed add.delay(2,3).get() in the subscriber container, it sent the task to Redis, where it got picked up by the Celery worker running in a different container.
    Keep in mind that the Python process running the add.delay(2,3).get() code is running in the subscriber container, while the ForkPoolWorker-2 process that executed the add() function and stored the result in the result backend is running in the worker container. These processes are completely independent.

    The subscriber process did not call anything in the worker container! In plain English, what it did was: "here (in Redis) is what I need done, please workers do it and let me know you are done so that I can fetch the result".

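The message-passing flow this answer describes can be sketched in plain Python, with an in-process queue standing in for Redis and a thread standing in for the worker container (all names here are illustrative, not part of the original example):

```python
# Sketch of the broker pattern: the producer never calls the consumer
# directly; both only talk to the broker and the result store.
import queue
import threading

broker = queue.Queue()   # stands in for Redis as the message broker
results = {}             # stands in for Redis as the result backend

def worker_loop():
    # Stands in for the Celery worker process in the other container.
    while True:
        task_id, args = broker.get()
        results[task_id] = sum(args)   # "execute" the add task
        broker.task_done()

threading.Thread(target=worker_loop, daemon=True).start()

# Producer side (the subscriber): enqueue a task, then wait for the result.
broker.put(("task-1", (2, 3)))
broker.join()                  # analogous to .get() blocking on the result
print(results["task-1"])       # 5
```

If the worker thread were started late, the task would simply wait in the queue, which mirrors the stopped-container experiment described in the question.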
  2. Docker Compose creates a default Docker network for the containers defined in a single file. Since you are pointing everything at the right service names, the requests travel over that network, which is why this succeeds. I would be surprised to hear that this still worked if you were to, for example, run each container separately in parallel without using Docker Compose.
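
    Running the containers without Compose would indeed require wiring up a shared network by hand. A hypothetical standalone equivalent of the Compose setup above might look like this (image, container, and network names are illustrative):

```shell
# Create a user-defined bridge network so containers can reach each
# other by service name, as Compose does automatically.
docker network create celery-net

# Redis, reachable as "redis" on that network
docker run -d --name redis --network celery-net redis

# Build the shared image and start the worker against it
docker build -t myapp .
docker run -d --name worker --network celery-net myapp \
    celery worker -A worker.celery_app --loglevel=info
```

    Without the `--network` flags, the containers would sit on isolated default networks and the broker hostname would not resolve.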
