Suppose I wrote a `docker-compose.dev.yml` file to set up the development environment of a Flask project (web application) using Docker. In `docker-compose.dev.yml` I have set up two services: one for the database and one to run the Flask application in debug mode (which allows me to make hot changes without having to recreate/restart the containers). This allows everyone on the development team to use the same development environment very easily. However, there is a problem: while developing an application it is obviously necessary to install libraries, as well as to list them in the `requirements.txt` file (in the case of Python). For this I only see two alternatives when using a Docker development environment:
- Enter the console of the container where the Flask application is running and use the `pip install ...` and `pip freeze > requirements.txt` commands.
- Manually write the dependencies into the `requirements.txt` file and rebuild the containers.
The first option is a bit laborious, while the second is a bit "dirty". Is there a more suitable option than the two alternatives mentioned?
Edit: I don't know if I'm asking something that doesn't make sense, but I'd appreciate it if someone could give me some guidance on what I'm trying to accomplish.
4 Answers
The second option is generally used in Python environments. You just add new packages to `requirements.txt` and restart the container, which has a `pip install -r requirements.txt` line in its Dockerfile that does the installing.

If the goal is to have a consistent dev environment, the safest way I can think of would be to build a base image with the updated dependencies and publish it to a private registry, so that you can refer to a specific tag like `app:v1.2`. So the Dockerfile can look like:
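A minimal sketch, assuming the base image was pushed to a private registry at `registry.example.com` (the address, app layout, and run command here are placeholders, not from the original answer):

```dockerfile
# The base image already contains the pinned dependencies,
# so only the application source is added on top.
FROM registry.example.com/app:v1.2
WORKDIR /app
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
```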
This means that there is no need to install the dependencies on every build, which results in a quicker and more consistent dev environment setup.
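Publishing such a base image could look roughly like this (the registry address and the separate Dockerfile name are assumptions):

```sh
# Hypothetical: build the dependency base image from a separate Dockerfile
# (one that runs `pip install -r requirements.txt`) and push it to the registry.
docker build -f Dockerfile.base -t registry.example.com/app:v1.2 .
docker push registry.example.com/app:v1.2
```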
Install the requirements in a virtualenv inside the container, on an externally mounted volume. Note that the virtualenv creation and installation should happen at container run time, NOT at image build time (because the volume is not mounted at build time).
Assuming you are already mounting (not copying!) your project sources, you can keep the virtualenv in a `./.venv` folder, which is a rather standard procedure. Then you work just as you would locally: issue the install once when setting up the project for the first time; requirements need not be reinstalled unless they change; you can keep the venv even if the container is rebuilt; restarting the app does not reinstall the requirements every time; etc. A rough sketch of the idea follows below.
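This is only a sketch of the approach; the service name, paths, and the Flask invocation are assumptions:

```yaml
# docker-compose.dev.yml (hypothetical excerpt)
services:
  web:
    build: .
    working_dir: /app
    environment:
      FLASK_DEBUG: "1"    # debug mode / hot reload
    volumes:
      - .:/app            # mounted (not copied!) sources; ./.venv lives here too
    # Create the venv at run time on the mounted volume, only if it is missing;
    # later restarts and rebuilds reuse it instead of reinstalling.
    command: >
      sh -c "if [ ! -d .venv ]; then
               python -m venv .venv &&
               .venv/bin/pip install -r requirements.txt;
             fi;
             exec .venv/bin/flask run --host=0.0.0.0"
```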
Just don't expect the virtualenv to be usable outside the container, e.g. by your IDE (but a bit of hacking with the `site` module would let you share the site-packages with a virtualenv for your machine).

This is a very different approach from how requirements are usually managed in production Docker images, where sources and requirements are copied and installed at image build time. So you'll probably need two very different Dockerfiles for production deployment and for local development, just as you already have different docker-compose.yml files.
But if you wanted them both to be more similar, remember there is no harm in also using a virtualenv inside the production Docker image, despite the trend of not doing so.
For something like this I use multi-stage Docker images.
Disclaimer: The examples below are not tested. Please consider them a mere description written in pseudo code 😉
As a very simple example, this approach could look like this:
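A sketch of such a multi-stage Dockerfile (the Python version, file layout, and commands are illustrative assumptions; the stage names match the `--target` values used further below):

```dockerfile
# Shared base stage: runtime dependencies only.
FROM python:3.10-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Test stage: adds test-only dependencies on top of the base.
FROM base AS test
COPY requirements_test.txt .
RUN pip install -r requirements_test.txt
COPY . .
CMD ["pytest"]

# Runtime stage: just the application itself.
FROM base AS runtime
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
```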
With `requirements.txt` like this (just an example):
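The packages and pins here are purely illustrative:

```
flask==2.2.5
flask-sqlalchemy==3.0.3
```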
With `requirements_test.txt` like this (just an example):
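Also purely illustrative:

```
pytest==7.3.1
pytest-cov==4.1.0
```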
In your `docker-compose.yml` file you only need to pass the `--target` (of the multi-stage Dockerfile, in this example: `test` and `runtime`) like this (not complete):
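For example (service names are assumptions; `target` under `build` is the compose equivalent of the `--target` build flag, and unrelated keys are omitted):

```yaml
services:
  app:
    build:
      context: .
      target: runtime
  app-test:
    build:
      context: .
      target: test
```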
A final thought: As I mentioned in my comment, a much better approach for dealing with such dependency requirements might be using tools like `poetry` or `pip-tools` – or whatever else is out there.

Update 2022-05-23:
As mentioned in the comment, for the sake of completeness and because this approach might be close to a possible solution (as requested in the question):
An example of a fire-and-forget approach could look like this – assuming the container has a specific name (`<container_name>`):
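One plausible form, assuming `requirements.txt` is present inside the container:

```sh
# Run pip directly inside the already-running container.
docker exec <container_name> pip install -r requirements.txt
```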
This command simply installs new dependencies into the running container.