
I need to run the same Python code in Docker, but with different initialization arguments.
So under the main directory I've set up a folder called docker that contains several subfolders, each holding the same Dockerfile but with different arguments. Below are examples for test_1 and test_2; only the test_x part changes between folders (test_1 becomes test_2, and so on):

Dockerfile found under docker/test_1 folder

FROM python:3.7
RUN mkdir /app/test_1
WORKDIR /app/test_1
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_1"]

Dockerfile found under docker/test_2 folder

FROM python:3.7
RUN mkdir /app/test_2
WORKDIR /app/test_2
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_2"]

Under the main directory I've set up a docker-compose file that starts the different containers (all running the same code), which share a .txt file in shared_folder:

services:
  test_1:
    container_name: test_1
    build: ./docker/test_1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped

  test_2:
    container_name: test_2
    build: ./docker/test_2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
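
Note that output is used as a named volume here, so Compose also expects a top-level volumes: declaration (not shown above), presumably something like:

volumes:
  output: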

So my question with Docker: is this the right way to go about setting up multiple executions of the same Python code with different parameters, or is there another recommended approach? I do want to mention that they need to share the file in shared_folder; that's a requirement, and all the instances must have read/write access to the same file in shared_folder (this is a must-have).

3 Answers


  1. First, delete CMD ["python", "main.py","-t","test_2"] from the Dockerfile and instead set the command (or entrypoint) in docker-compose.yaml; since the code is all the same, that is a better way to build the image (see the sketch below). If you have more containers to start, it will save you a lot of time.

    About the question you asked: if what you want to share in shared_folder is a read-only file, that is OK. If not (for instance, log files that you want to write from the instances out to the host), you should be careful about the log file names: they should not be the same in the two containers.
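
    A rough sketch of that idea, assuming a single shared Dockerfile under docker/ with its CMD line removed; Compose's command: key then plays the role the per-folder CMDs played, and the named output volume is declared at the top level:

    services:
      test_1:
        container_name: test_1
        build: ./docker
        # command: overrides the (now removed) Dockerfile CMD
        command: ["python", "main.py", "-t", "test_1"]
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped

      test_2:
        container_name: test_2
        build: ./docker
        command: ["python", "main.py", "-t", "test_2"]
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped

    volumes:
      output: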

  2. I would definitely DRY it: use a single Dockerfile and an ARG to build them.

    Here is what you could do:

    In docker/Dockerfile:

    FROM python:3.7
    
    ARG FOLDER
    
    ## We need to duplicate the value of the ARG in an ENV,
    ## because build arguments are only visible during the build,
    ## so the ARG alone would not be accessible to our command.
    ENV FOLDER=$FOLDER
    
    RUN mkdir -p /app/$FOLDER
    WORKDIR /app/$FOLDER
    COPY ./$FOLDER/env/requirements.txt requirements.txt
    RUN pip install -r requirements.txt
    COPY . .
    
    CMD ["sh", "-c", "python main.py -t $FOLDER"]
    

    And in your docker-compose.yml define those build arguments:

    version: "3.9"
    services:
      test1:
        container_name: test_1
        build:
          context: ./docker
          args:
            FOLDER: test1
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped
    
      test2:
        container_name: test_2
        build:
          context: ./docker
          args:
            FOLDER: test2
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped
    
  3. It is very easy to override the Dockerfile CMD with a docker run command-line argument or Compose command:. So, I would build only one image, and I would give it a useful default CMD.

    FROM python:3.7
    WORKDIR /app
    COPY ./env/requirements.txt ./
    RUN pip install -r requirements.txt
    COPY ./ ./
    CMD ["./main.py"]
    

    (Make sure your script is executable – maybe run chmod +x main.py on the host – and begins with a "shebang" line #!/usr/bin/env python3, so you don’t have to explicitly name the interpreter.)

    Now in your docker-compose.yml file, have both services build: the same image. You’ll technically get two images out in the docker images output but they will have the same image ID and the second image build will run extremely quickly (it will come entirely from the layer cache). Use Compose command: to override the entire CMD as required.

    version: '3.8'
    services:
      test_1:
        build: .
        command: ./main.py -t test_1
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped
    
      test_2:
        build: .
        command: ./main.py -t test_2
        volumes:
          - output:/app/shared_folder
        restart: unless-stopped
    

    You could also manually run this outside of Compose if you just wanted to validate things, with the same approach:

    docker build -t myapp .
    docker run --rm myapp \
      ./main.py --help
    

    With this approach you do not need to rebuild the image for each different command you want to run or wrangle with the syntactic complexities of docker run --entrypoint.
