Suppose I wrote a docker-compose.dev.yml file to set up the development environment of a Flask project (a web application) using Docker. In docker-compose.dev.yml I have defined two services: one for the database and one that runs the Flask application in debug mode (which lets me make hot changes without having to recreate/restart the containers). This makes it very easy for everyone on the development team to use the same development environment. However, there is a problem: while developing an application it is obviously necessary to install libraries, and to list them in the requirements.txt file (in the case of Python). For this I only see two alternatives when using a Docker development environment:

  1. Enter the console of the container where the Flask application is running and use the pip install ... and pip freeze > requirements.txt commands.
  2. Manually write the dependencies to the requirements.txt file and rebuild the containers.

The first option is a bit laborious, while the second is a bit "dirty". Is there any more suitable option than the two mentioned alternatives?
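
For reference, a minimal sketch of the kind of docker-compose.dev.yml I mean (service names, ports, paths and the Postgres image are just placeholders):

    services:
      db:
        image: postgres:14
        environment:
          POSTGRES_PASSWORD: dev
        volumes:
          - db_data:/var/lib/postgresql/data

      web:
        build: .
        command: flask run --host=0.0.0.0
        working_dir: /app
        environment:
          FLASK_APP: app.py
          FLASK_DEBUG: "1"   # debug mode with the reloader, so code changes are picked up
        volumes:
          - .:/app           # mount the sources instead of copying them
        ports:
          - "5000:5000"
        depends_on:
          - db

    volumes:
      db_data: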

Edit: I don’t know if I’m asking something that doesn’t make sense, but I’d appreciate it if someone could give me some guidance on what I’m trying to accomplish.

4 Answers


  1. The second option is the one generally used in Python environments. You just add the new packages to requirements.txt and rebuild the container; its Dockerfile has a line with pip install -r requirements.txt that does the installing.
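
    A hedged sketch of what that Dockerfile typically looks like (base image, paths and the run command are assumptions, not taken from the question):

    FROM python:3.10-slim

    WORKDIR /code

    # Copy the requirements first so this layer stays cached until requirements.txt changes.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    COPY . .
    CMD ["flask", "run", "--host=0.0.0.0"]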

  2. If the goal is to have a consistent dev environment, the safest way I can think of would be to build a base image with the updated dependencies and publish it to a private registry, so that you can refer to a specific tag like app:v1.2. The Dockerfile can then look like:

    FROM app:v1.2
    ...
    

    This means there is no need to install the dependencies on every build, which results in a quicker and more consistent dev environment setup.
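
    A rough sketch of how such a base image might be built and published (the registry host, the Dockerfile.base file name and the tag are placeholders):

    # Bake the pinned dependencies into the base image once...
    docker build -t registry.example.com/app:v1.2 -f Dockerfile.base .
    # ...then publish it to the team's private registry.
    docker push registry.example.com/app:v1.2

    Every developer's Dockerfile then starts from that tag, so the dependency set only changes when the tag is bumped.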

  3. Install the requirements in a virtualenv inside the container, in an externally mounted volume. Note that the virtualenv creation and installation should happen at container run time, NOT at image build time (because the volume is not mounted then).

    Assuming you are already mounting (not copying!) your project sources, you can keep the virtualenv in a ./.venv folder, which is a rather standard location.

    Then you work just as you would locally: issue the install once when setting up the project for the first time, requirements need not be reinstalled unless requirements change, you can keep the venv even if the container is rebuilt, restarting the app does not reinstall the requirements every time, etc, etc.

    Just don’t expect the virtualenv to be usable outside the container, e.g. by your IDE (though a bit of hacking with the site module would let you share the site-packages with a virtualenv on your machine).
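
    A minimal sketch of what this could look like (service name, paths and the Flask invocation are assumptions):

    services:
      web:
        build: .
        working_dir: /app
        volumes:
          - .:/app        # sources mounted; the venv persists in ./.venv on the host
        command: /app/.venv/bin/flask run --host=0.0.0.0

    The venv is then created and populated once, at run time rather than at build time:

    # add -f docker-compose.dev.yml if that is not your default compose file
    docker compose run --rm web python -m venv .venv
    docker compose run --rm web .venv/bin/pip install -r requirements.txt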


    This is a very different approach from how requirements are usually managed in production Docker images, where sources and requirements are copied and installed at image build time. So you’ll probably need two very different Dockerfiles for production deployment and for local development, just as you already have different docker-compose.yml files.

    But if you wanted the two to be more similar, remember there is no harm in also using a virtualenv inside the production Docker image, despite the trend of not doing so.

  4. For something like this I use multi-stage Docker builds.

    Disclaimer: The examples below are not tested. Please consider them a mere description written in pseudocode 😉

    As a very simple example, this approach could look like this:

    # Make sure all layers are based on the same python version.
    FROM python:3.10-slim-buster as base
    
    # The actual dev/test image.
    # This is where you can install additional dev/test requirements.
    FROM base as test
    COPY ./requirements_test.txt /code/requirements_test.txt
    RUN python -m pip install --no-cache-dir --upgrade -r /code/requirements_test.txt
    
    ENTRYPOINT ["python"]
    # Assuming you run tests using pytest.
    CMD ["-m", "pytest", "..."]
    
    # The actual production image.
    FROM base as runtime
    COPY ./requirements.txt /code/requirements.txt
    RUN python -m pip install --no-cache-dir --upgrade -r /code/requirements.txt
    
    ENTRYPOINT ["python"]
    # Assuming you want to run main.py as a script.
    CMD ["/path/to/main.py"]
    

    With requirements.txt like this (just an example):

    requests
    

    With requirements_test.txt like this (just an example):

    -r requirements.txt
    
    pytest
    

    In your docker-compose.yml file you only need to pass the target stage (of the multi-stage Dockerfile; in this example test or runtime) like this (not complete):

    services:
      service:
        build:
          context: .
          dockerfile: ./Dockerfile
          target: runtime  # or test for running tests
    

    A final thought: As I mentioned in my comment, a much better approach for dealing with such dependency requirements might be using tools like poetry or pip-tools – or whatever else is out there.
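
    For instance, with pip-tools you would only declare the direct dependencies in a requirements.in file and generate the fully pinned requirements.txt from it (a small sketch; the listed packages are just examples):

    # requirements.in
    flask
    requests

    # generates/updates a pinned requirements.txt
    pip-compile requirements.in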


    Update 2022-05-23:

    As mentioned in the comment, for the sake of completeness and because this approach might be close to a possible solution (as requested in the question):

    An example of a fire-and-forget approach could look like this, assuming the container has a specific name (<container_name>):

    # This requires the file 'requirements_dev.txt' to be mounted into the container as a volume.
    docker exec -it <container_name> python -m pip install --upgrade -r requirements_dev.txt
    

    This command simply installs new dependencies into the running container.
