I’m Dockerizing a simple Node.js (NestJS, but I don’t think that matters for this question) web service and have some questions. This service talks to a Postgres DB. I would like to write a `Dockerfile` that can be used to build an image of the service (let’s call it `my-service`), and then write a `docker-compose.yml` that defines a service for the Postgres DB as well as a service for `my-service` that uses it. That way I can build images of `my-service` on their own, but also have a Docker Compose config for running the service and its DB together. I think that’s the way to do this (keep me honest though!). Kubernetes is not an option for me, just FYI.
The web service has a top-level directory structure like so:

```
my-service/
  .env
  package.json
  package-lock.json
  src/
  <lots of other stuff>
```
It’s critical to note that in its present, non-containerized form, you have to set several environment variables ahead of time, including the Postgres DB connection info (host, port, database name, username, password, etc.). The application code fetches the values of these env vars at runtime and uses them to connect to Postgres.
So, I need a way to write a `Dockerfile` and `docker-compose.yml` such that:

- if I’m just running a container of the `my-service` image by itself, and want to tell it to connect to any arbitrary Postgres DB, I can pass those values in as (ideally) runtime arguments on the Docker CLI command (though remember, the app expects them to be set as env vars); and
- if I’m spinning up `my-service` and its Postgres together via the Docker Compose file, I need to also specify those as runtime args on the Docker Compose CLI; Docker Compose then needs to pass them on to the container’s run arguments, and the container needs to set them as env vars for the web service to use.

Again, I think this is the correct way to go, but keep me honest!
So my best attempt — a total WIP so far — looks like this:
Dockerfile:

```dockerfile
FROM node:18

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

# creates "dist" to run out of
RUN npm run build

# ideally the env vars are already set at this point via
# docker CLI arguments, so nothing to pass in here (???)
CMD [ "node", "dist/main.js" ]
```
docker-compose.yml:

```yaml
version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data

  my-service:
    container_name: my-service
    image: ??? # any way to say "build what’s in the repo"?
    environment:
      # ??? do I need to set anything here so it gets passed to the
      # my-service container as env vars?

volumes:
  pgdata:
```
Can anyone help nudge me over the finish line here? Thanks in advance!
---

**3 Answers**
Yes, you should pass the variables there. This is a principle of 12-factor design.

If you don’t want to put them directly in the YAML, will this option work for you?
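A minimal sketch of that option, assuming the variables live in an env file (the name `my-service.env` here is just an example):

```yaml
  my-service:
    env_file:
      - my-service.env
```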
Ideally, you also add a `depends_on` entry (sketched below), so that when you start your service, the database will also start up.
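For example (a sketch; the service names match the compose file from the question):

```yaml
  my-service:
    depends_on:
      - postgres
```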
If you want to connect to a different database instance, then you can either create a separate compose file without that database, or use a different set of variables (written out, or using `env_file`, as mentioned). Or you can use the NPM `dotenv` or `config` packages and set different `.env` files for different database environments, based on other variables such as `NODE_ENV`, at runtime.

Use the `build` directive instead of `image`.

You could use Minikube instead of Compose… It doesn’t really matter here, but `kompose` exists to convert a Docker Compose file into k8s resources.

---

Your `Dockerfile` is correct. You can specify the environment variables while doing `docker run`, like this:
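A sketch, assuming the app reads connection settings from variables named `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD` (substitute whatever names your code actually reads):

```bash
docker run \
  -e POSTGRES_HOST=some-db-host \
  -e POSTGRES_PORT=5432 \
  -e POSTGRES_DB=my-service-db \
  -e POSTGRES_USER=some-user \
  -e POSTGRES_PASSWORD=some-password \
  my-service
```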
Or you can specify the environment variables with the help of a `.env` file. Let’s call it `app.env`. Its content would be:
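Assuming the same (hypothetical) variable names as above:

```
POSTGRES_HOST=some-db-host
POSTGRES_PORT=5432
POSTGRES_DB=my-service-db
POSTGRES_USER=some-user
POSTGRES_PASSWORD=some-password
```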
Now, instead of specifying multiple `-e` options to the `docker run` command, you can simply tell it the name of the file from which the environment variables should be picked up:
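For example, using `docker run`’s `--env-file` flag:

```bash
docker run --env-file app.env my-service
```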
In order to run Postgres and your service with a single `docker compose` command, a few modifications need to be made to your `docker-compose.yml`. Let’s first see the full YAML:
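A sketch of what the full file could look like. The `my-service` environment variable names are assumptions carried over from above, and `build: .` tells Compose to build the image from the `Dockerfile` in the current directory:

```yaml
version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: my-service-db
      # note: the postgres image reads PGDATA, not PG_DATA
      PGDATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data

  my-service:
    container_name: my-service
    build: .
    environment:
      # the postgres service name doubles as the DB hostname
      # on the default Compose network
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: my-service-db
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    depends_on:
      - postgres

volumes:
  pgdata:
```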
Now you can use the `docker compose up` command to run the services. If you wish to rebuild the `my-service` image each time, you can pass the optional `--build` flag, like this: `docker compose up --build`.

In order to pass the environment variables in from the CLI, there’s only one way, which is by use of a `.env` file. For your `docker-compose.yml`, the `app.env` would look like:
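Assuming the sketch above, where the compose file references `${POSTGRES_USER}` and `${POSTGRES_PASSWORD}`:

```
POSTGRES_USER=some-user
POSTGRES_PASSWORD=some-password
```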
Passing this `app.env` file via the Docker Compose CLI would look like this:
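For example, with Compose’s `--env-file` option (note it goes before `up`):

```bash
docker compose --env-file app.env up --build
```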
PS: If you’re rebuilding `my-service` each time just so code changes are reflected in the Docker container, you could make use of a bind mount instead; the updated `docker-compose.yml` is sketched below. This way you don’t need to run `docker compose build` each time: a code change in the source folder gets reflected in the Docker container.
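A sketch of the updated `my-service` entry. One assumption worth flagging: for live changes to actually take effect, the container should run the app in watch mode (e.g. Nest’s default `start:dev` script) instead of `node dist/main.js`:

```yaml
  my-service:
    container_name: my-service
    build: .
    # run in watch mode so source changes are picked up
    command: npm run start:dev
    volumes:
      # bind-mount the host source tree over the image's app directory
      - ./:/usr/src/app
      # keep the image's node_modules rather than the host's
      - /usr/src/app/node_modules
    depends_on:
      - postgres
```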
---

You just need to add the path of your Dockerfile to the `build` parameter in the `docker-compose.yaml` file, and put all of the environment variables under `environment`.
I am guessing that you have a folder structure like this:
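A sketch, based on the structure from the question plus the two Docker files:

```
my-service/
  Dockerfile
  docker-compose.yaml
  .env
  package.json
  package-lock.json
  src/
```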
and your `.env` contains the following:
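Assuming variable names like these (again, substitute the ones your app actually reads):

```
POSTGRES_USER=some-user
POSTGRES_PASSWORD=some-password
POSTGRES_DB=my-service-db
```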
So, in your case, your docker-compose file should look like this:
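A sketch, assuming the layout and `.env` above; Compose automatically substitutes `${...}` references from a `.env` file sitting next to the compose file:

```yaml
version: '3.7'

services:
  postgres:
    image: postgres:14.3
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data

  my-service:
    # build the image from the Dockerfile in this repo
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    depends_on:
      - postgres

volumes:
  pgdata:
```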
FYI: with this Docker configuration, your database connection host should be `postgres` (as per the service name), not `localhost`.