
I have a project with two microservices that need to communicate with each other over REST. I want each microservice to run in its own Docker container, with each container running one Python program (one microservice).

The files are organized as shown below. Suppose that file_1.py is one microservice and file_2.py is another. For each of them, we have to start a container.

app
 |_ uwsgi.ini
 |_ file_1.py
 |_ file_2.py
 |_ folder_2
    |_ file_3
    |_ file_4

To start a container, we use the file uwsgi.ini, shown below:

[uwsgi]
module = file_1
callable = app
master = true
touch-reload = /app/uwsgi.ini
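
For context, a minimal sketch of what file_1.py might contain (the handler below is an assumption, not from the question; any WSGI application object named app, e.g. a Flask app, fits the `module = file_1` / `callable = app` pair):

```python
# file_1.py -- minimal WSGI sketch of what uwsgi points at:
# it imports the module named by "module = file_1" and serves
# the object named by "callable = app".

def app(environ, start_response):
    """A plain WSGI callable; a Flask app object named `app` works the same way."""
    body = b"hello from file_1"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```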

To start the container, we run the following command:

sudo docker run -d -v /app:/app -p docker_port:80 --name "container-name" "docker_name"

However, this way we can only create a container for the first microservice, file_1.py, because the app root can hold only one uwsgi.ini, and the command above will look for that one uwsgi.ini file.

What I want is a uwsgi.ini for each container. I've seen that people rename uwsgi.ini per container; in my case, that would give uwsgi_file_1.ini and uwsgi_file_2.ini. But when I do that, the command to start the container no longer finds the uwsgi file, because I've changed its name. Is there a way to make the command shown above pick up the right uwsgi.ini file for each container?

2 Answers


  1. Two easy options:

    1. Make a different Dockerfile for each container and in each one COPY the correct .ini file into the image under the name uwsgi.ini.
    2. Use one Dockerfile but add an entrypoint script that takes the ini file name as a parameter and symlinks it before starting the actual uwsgi process.

    There are probably a dozen other ways to solve this as well.
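
    A hypothetical sketch of both options (file names, base image, and dependencies are assumptions based on the question's layout, not a definitive setup):

    ```dockerfile
    # Option 1: one Dockerfile per container; each image bakes its own config
    # in under the fixed name uwsgi.ini, so the run command needs no change.
    FROM python:3.10                   # assumed base image
    WORKDIR /app
    COPY uwsgi_file_1.ini uwsgi.ini    # service 2's Dockerfile copies uwsgi_file_2.ini
    COPY file_1.py .
    RUN pip install uwsgi flask        # assumed dependencies
    CMD ["uwsgi", "--ini", "uwsgi.ini"]
    ```

    ```shell
    #!/bin/sh
    # Option 2: one shared image plus an entrypoint that receives the ini file
    # name as its argument and symlinks it to the expected name before
    # exec'ing uwsgi.
    set -e
    INI="${1:?usage: entrypoint.sh <ini-file-name>}"
    ln -sf "/app/${INI}" /app/uwsgi.ini
    exec uwsgi --ini /app/uwsgi.ini
    ```

    With option 2, assuming the script is set as the image's ENTRYPOINT, the second container would be started by appending the file name to the run command, e.g. `... "docker_name" uwsgi_file_2.ini`.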

  2. You should create a separate Docker image for each service. I’d suggest putting each in its own directory, and maybe even its own repository:

    app
    +-- service1
    | +-- Dockerfile
    | +-- uwsgi.ini
    | +-- setup.cfg
    | -- app1.py
    |
    +-- service2
    | +-- Dockerfile
    | +-- uwsgi.ini
    | +-- setup.cfg
    | -- app2.py
    |
    -- docker-compose.yml
    

    Each of the Dockerfiles can be fairly self-contained. I put a docker-compose.yml file at the top level in the example, as a straightforward way to launch the services together:

    version: '3.8'
    services:
      service1:
        build: ./service1
        ports: ['8001:80']
      service2:
        build: ./service2
        ports: ['8002:80']
        environment:
          - SERVICE_1_URL=http://service1
    
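
    Each service's Dockerfile can then be a small, self-contained build. A hypothetical sketch for service1 (the base image, the dependency install, and the uwsgi --http flag are assumptions, not from the question):

    ```dockerfile
    # service1/Dockerfile -- hypothetical sketch
    FROM python:3.10
    WORKDIR /app
    COPY setup.cfg uwsgi.ini app1.py ./
    RUN pip install uwsgi flask     # assumed dependencies
    EXPOSE 80
    CMD ["uwsgi", "--ini", "uwsgi.ini", "--http", ":80"]
    ```

    service2's Dockerfile would be identical apart from the file names.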

    Your proposed docker run command has a -v option that overwrites the image’s /app directory with content from the host. Both your suggested solution and @vaizki’s answer then involve making some sort of Docker- or container-specific change to the file layout; the bind mount would either hide that change or cause a conflict if you’re running multiple copies of the container. I’d recommend deleting this option (or the equivalent Compose volumes:) and using a Python virtual environment without Docker for ordinary development.
