I would like my docker-compose.yml file to use the ".env" file in the same directory as the "docker-compose.yml" file to set some environment variables, and for those to take precedence over any other env vars set in the shell. Right now I have
$ echo $DB_USER
tommyboy
and in my .env file I have
$ cat .env
DB_NAME=directory_data
DB_USER=myuser
DB_PASS=mypass
DB_SERVICE=postgres
DB_PORT=5432
I have this in my docker-compose.yml file …
version: '3'
services:
  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: password
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER: ${DB_USER}
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - postgres
In my Python 3/Django 3 project, I have this in my application’s settings.py file
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASS'],
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT']
    }
}
However when I run my project, using "docker-compose up", I see
maps-web-1 | File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
maps-web-1 | connection = Database.connect(**conn_params)
maps-web-1 | File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
maps-web-1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
maps-web-1 | psycopg2.OperationalError: FATAL: role "tommyboy" does not exist
It seems like the Django container is using the shell's env var instead of what is passed in, and I was wondering if there's a way to have the Python/Django container use the ".env" file at the root for its env vars.
3 Answers
I thought at first I had misread your question, but I think my original comment was correct. As I mentioned earlier, it is common for your local shell environment to override things in a .env file; this allows you to override settings on the command line. In other words, if you have DB_USER=myuser in your .env file and you want to override the value of DB_USER for a single docker-compose up invocation, you can set the variable on the command line, as sketched just below. That's why values in your local environment take precedence.
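For example, something like this for a one-off run (the value is only illustrative):

DB_USER=someotheruser docker-compose up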
When using docker-compose with things that store persistent data (like Postgres!) you will occasionally see what seems to be weird behavior when working with environment variables that are used to configure the container. Consider this sequence of events (a sketch follows after the list):

1. We run docker-compose up for the first time, using the values in your .env file.
2. We confirm that we can connect to the database as the myuser user.
3. We stop the container by typing CTRL-C.
4. We start the container with a new value for DB_USER in our environment.
5. We try connecting using the tommyboy username...
6. ...and it fails.
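Hypothetically, the sequence could look something like this (the psql invocations and flags are illustrative; 5105 is the host port mapped in the compose file above):

# 1. First start, picking up DB_USER=myuser from .env
docker-compose up -d
# 2. Confirm we can connect as "myuser"
psql -h localhost -p 5105 -U myuser directory_data -c 'select 1'
# 3. Stop the containers (CTRL-C if running in the foreground)
docker-compose stop
# 4. Start again with DB_USER overridden in the shell
DB_USER=tommyboy docker-compose up -d
# 5./6. Connecting as "tommyboy" fails:
#       FATAL: role "tommyboy" does not exist
psql -h localhost -p 5105 -U tommyboy directory_data -c 'select 1'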
What’s going on here?
The POSTGRES_* environment variables you use to configure the Postgres image are only relevant if the database hasn't already been initialized. When you stop and restart a service with docker-compose, it doesn't create a new container; it just restarts the existing one.
That means that in the above sequence of events, the database was originally created with the myuser username, and starting it the second time with DB_USER set in our environment didn't change anything.
The solution here is to use the docker-compose down command, which deletes the containers, and then create a new one with the updated environment variable; after that we can access the database as expected.
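A rough sketch of those steps (the exact commands are illustrative):

# Delete the containers; add -v if the Postgres data also lives in a named volume
docker-compose down
# Recreate everything, so the database is initialized with the new user
DB_USER=tommyboy docker-compose up -d
# Connecting as "tommyboy" now works
psql -h localhost -p 5105 -U tommyboy directory_data -c 'select 1'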
I cannot provide a better answer than the excellent one provided by @larsks but please, let me try giving you some ideas.
As @larsks also pointed out, any shell environment variable will take precedence over those defined in your docker-compose .env file. This fact is stated as well in the docker-compose documentation when talking about environment variables: "Values in the shell take precedence over those specified in the .env file."
This means that, for example, a variable exported in your shell (say, DB_USER) will definitively overwrite any value you could have defined in your .env file.
One possible solution to the problem is trying to use the .env file directly, instead of the environment variables. Searching for information about your problem I came across this great article.
Among other things, in addition to explaining your problem, it mentions as a note at the end of the post an alternative approach based on the use of the django-environ package. I was unaware of the library, but it seems it provides an alternative way of configuring your application, reading your configuration directly from a configuration file (a sketch follows below). If required, it seems you could mix in variables defined in the environment as well.
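For instance, a minimal settings.py sketch, assuming django-environ is installed and the .env file is reachable from inside the container (the path is an assumption):

# settings.py (sketch): read the database settings with django-environ
from pathlib import Path

import environ

BASE_DIR = Path(__file__).resolve().parent.parent

env = environ.Env()
# Loads the file into os.environ; note that by default it does NOT overwrite
# variables that are already present in the environment
environ.Env.read_env(str(BASE_DIR / '.env'))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': env('DB_NAME'),
        'USER': env('DB_USER'),
        'PASSWORD': env('DB_PASS'),
        'HOST': env('DB_SERVICE'),
        'PORT': env('DB_PORT'),
    }
}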
Probably python-dotenv would allow you to follow a similar approach.
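For instance, a small sketch with python-dotenv (the /app/.env path assumes the file is mounted next to the application code):

# settings.py (sketch): let the .env file win over the container environment
import os

from dotenv import load_dotenv

load_dotenv('/app/.env', override=True)  # override=True gives the file precedence

DB_USER = os.environ['DB_USER']  # 'myuser' from .env, even if the shell exported 'tommyboy'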
Of course, it is worth mentioning that if you decide to use this approach, in a certain way it could help you achieve the objective you pointed out in your comment, because you could use the same .env file (certainly, a duplicated one), although you still need to cope with the PostgreSQL container configuration. You also need to make the .env file accessible to your docker-compose web service and its container, perhaps by mounting an additional volume or by copying the .env file into the web directory you already mount as a volume.
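For instance, an illustrative excerpt of the web service (paths follow the volumes already shown in the question):

web:
  volumes:
    - ./web/:/app
    - ./.env:/app/.env:ro  # bind-mount the project's .env so the app can read it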
According to your comment as well, another possible solution could be using Docker secrets. In a similar way to how secrets work in Kubernetes, for example, and as explained in the official documentation, in a nutshell they provide a convenient way of storing sensitive data across Docker Swarm services.
It is important to understand that Docker secrets are only available when using Docker Swarm mode. Docker Swarm is an orchestration service offered by Docker, similar again to Kubernetes, with its own differences of course.
Assuming you are running Docker in Swarm mode, you could deploy your compose services in a way similar to the following sketch, based on the official docker-compose Docker secrets example:
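A rough sketch of that idea (the secret file name is an assumption, and the stack is meant to be deployed in Swarm mode, e.g. with docker stack deploy):

version: '3.1'
services:
  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER_FILE: /run/secrets/db_user  # the user is read from the secret
      POSTGRES_PASSWORD: password
    secrets:
      - db_user

secrets:
  db_user:
    file: ./db_user.txt  # a file containing, for example, "myuser"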
Please, note the following.
We are defining a secret named db_user in a secrets section. This secret could be based on a file or computed from standard input.
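For example, if you declare the secret as external: true in the stack file and create it yourself, that could look like this (the file name and value are illustrative):

# from a file...
docker secret create db_user ./db_user.txt
# ...or from standard input
printf 'myuser' | docker secret create db_user -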
The secret should be exposed to every container in which it is required.
In the case of Postgres, as explained in the "Docker secrets" section of the official Postgres Docker image description, you can use Docker secrets to define the value of POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB: the name of the variable for the secret is the same as the normal one with the suffix _FILE. In our use case we defined the database user that way, as shown just below.
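In terms of the compose sketch above, that corresponds to something along these lines:

    environment:
      POSTGRES_USER_FILE: /run/secrets/db_user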
In the case of the Django container this functionality is not supported out of the box but, since you can edit your settings.py as you need to, as suggested for example in this simple but great article, you can use a helper function to read the required value in your settings.py file, something like the sketch after this paragraph. Probably this would make more sense for storing the database password, but it could be a valid solution for the database user as well.
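A possible sketch of such a helper (the function name is made up; it prefers a <VAR>_FILE secret when present and falls back to the plain environment variable):

# settings.py (sketch)
import os


def env_or_secret(name, default=None):
    """Return NAME from a Docker secret file if NAME_FILE is set, else from the environment."""
    file_path = os.environ.get(f'{name}_FILE')
    if file_path and os.path.exists(file_path):
        with open(file_path) as fh:
            return fh.read().strip()
    return os.environ.get(name, default)


DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': env_or_secret('DB_NAME'),
        'USER': env_or_secret('DB_USER'),
        'PASSWORD': env_or_secret('DB_PASS'),
        'HOST': env_or_secret('DB_SERVICE'),
        'PORT': env_or_secret('DB_PORT'),
    }
}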
Please, consider reviewing this excellent article too.
Based on the fact that the problem seems to be caused by changes to the environment variables seen by the Django container, one last thing you could try is the following.
The only requirement for your settings.py file is to declare different global variables with your configuration, but nothing says anything about how you read them: in fact, I exposed different approaches in this answer and, after all, it is Python, so you can use the language to fill your needs.
In addition, it is important to understand that, unless you change any variables in your Dockerfile, when both the Postgres and Django containers are created they will receive exactly the same .env file with exactly the same configuration.
With these two things in mind, you could try creating, in your settings.py file, a local copy of the environment provided to the Django container, and reuse it across restarts or across whatever is causing the variables to change. In your settings.py it could look like the sketch just below (please, forgive me for the simplicity of the code, I hope you get the idea). I think any of the aforementioned approaches is better, but this will certainly ensure environment variable consistency across changes in the environment and container restarts.
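A very rough sketch of that idea (the snapshot path and the helper logic are made up for illustration):

# settings.py (sketch): freeze the DB_* variables the first time the container
# starts and reuse that copy on subsequent restarts
import json
import os

ENV_SNAPSHOT = '/app/.env-snapshot.json'  # hypothetical path on the mounted volume
DB_KEYS = ('DB_NAME', 'DB_USER', 'DB_PASS', 'DB_SERVICE', 'DB_PORT')

if os.path.exists(ENV_SNAPSHOT):
    with open(ENV_SNAPSHOT) as fh:
        db_env = json.load(fh)
else:
    db_env = {key: os.environ[key] for key in DB_KEYS}
    with open(ENV_SNAPSHOT, 'w') as fh:
        json.dump(db_env, fh)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': db_env['DB_NAME'],
        'USER': db_env['DB_USER'],
        'PASSWORD': db_env['DB_PASS'],
        'HOST': db_env['DB_SERVICE'],
        'PORT': db_env['DB_PORT'],
    }
}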
Values in the shell take precedence over those specified in the .env file.
If you set TAG to a different value in your shell, the substitution in image uses that instead, as in the sketch below.
Please refer to this link for more details: https://docs.docker.com/compose/environment-variables/
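For instance, following the TAG example from the compose documentation (file contents and values are illustrative):

$ cat .env
TAG=v1.5
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
$ export TAG=v2.0
$ docker-compose config
services:
  web:
    image: webapp:v2.0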