
I’m pretty new to the world of Docker, and I have the following scenario:

  • a Spring Boot application, which depends on
  • PostgreSQL

and a frontend requesting data from them.

The Dockerfile for the Spring Boot app is (the snippet as posted omits a FROM line; a JRE base image is assumed below):

FROM openjdk:11-jre-slim   # assumed base image, not in the original snippet
EXPOSE 8080
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

And the content of the docker-compose.yaml is:

version: '3'

services:
  app:
    image: <user>/<repo>
    build: .
    ports:
      - "8080:8080"
    container_name: app_test
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/test
      - SPRING_DATASOURCE_USERNAME=test
      - SPRING_DATASOURCE_PASSWORD=test

  db:
    image: 'postgres:13.1-alpine'
    restart: always
    expose:
      - 5432
    ports:
      - "5433:5432"
    container_name: db_test
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
    volumes:
      - db:/var/lib/postgresql/data
      - ./create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql
      - ./fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
volumes:
    db:
      driver: local

As far as I understand, to run the whole thing it’s enough to just type docker-compose up and voilà, it works. It pulls the image for the app from the Docker Hub repo and does the same for the database image.

Here’s the thing: I’m working with another guy (frontend) whose goal is to make requests to this API. Is it enough for him to just copy-paste this docker-compose.yaml file and run docker-compose up, or is there something else to be done?

How should docker-compose be used in teamwork?

Thanks in advance, if I have to make it more clear leave a comment!

2 Answers


  1. Because of the build: . key in the app service of your docker-compose file, running docker-compose up will look for the backend Dockerfile and build the image, so your teammate needs all the files you wrote.
    Another solution, which in my view is better, would be to build the image yourself and push it to Docker Hub, so your teammate can just pull it from there and run it on his/her system.
    In case you’re not familiar with Docker Hub, its quick start guide could be useful.
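    As a rough sketch of that workflow (the image name <user>/<repo> comes from the compose file; the latest tag is just an example):

        # on your machine: build the image and publish it to Docker Hub
        docker build -t <user>/<repo>:latest .
        docker login
        docker push <user>/<repo>:latest

        # on your teammate's machine: fetch the published images and start the stack
        docker-compose pull
        docker-compose up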

  2. Your colleague will need:

    • The docker-compose.yml file itself
    • Any local files or directories named on the left-hand side of volumes: bind mounts
    • Any directories named in build: (or build: { context: }) lines, if the images aren’t pushed to a registry
    • Any data content contained in a named volume that isn’t automatically recreated

    If they have the docker-compose.yml file they can docker-compose pull the images named there, and Docker won’t try to rebuild them.
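    For example, assuming the app image has already been pushed under the image: name from the compose file, that amounts to:

        docker-compose pull            # fetch <user>/<repo> and postgres:13.1-alpine
        docker-compose up --no-build   # start the stack without rebuilding locally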

    Named volumes are difficult to transfer between systems; see the Docker documentation on saving and restoring volumes. Bind-mounted host directories are easier to transfer, but are much slower on non-Linux hosts. Avoid using volumes for parts of your application code, including the Node library directory or static assets.
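    A minimal sketch of the tar-based approach from those docs, assuming the volume’s full name is db (Compose usually prefixes it with the project name, e.g. myproject_db; check docker volume ls):

        # pack the named volume into a tarball in the current directory
        docker run --rm -v db:/volume -v "$(pwd)":/backup alpine \
            tar czf /backup/db-backup.tar.gz -C /volume .

        # on the other machine: unpack the tarball into a fresh volume
        docker run --rm -v db:/volume -v "$(pwd)":/backup alpine \
            tar xzf /backup/db-backup.tar.gz -C /volume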

    For this setup in particular, the one change I might consider making is using the postgres image’s environment variables to create the database, and then using your application’s database migration system to create tables and seed data. That would avoid needing the two .sql files. Beyond that, the only thing they need is the docker-compose.yml file.
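    As a sketch of what that could look like here (Flyway is one common migration tool for Spring Boot, assumed purely for illustration), the db service shrinks to the environment variables and the data volume, and the two .sql files move into the application as versioned migrations:

      db:
        image: 'postgres:13.1-alpine'
        restart: always
        ports:
          - "5433:5432"
        environment:
          - POSTGRES_DB=test       # the image creates this database on first start
          - POSTGRES_USER=test
          - POSTGRES_PASSWORD=test
        volumes:
          - db:/var/lib/postgresql/data
          # no .sql bind mounts: create-tables.sql and fill_tables.sql would become
          # e.g. src/main/resources/db/migration/V1__create_tables.sql (Flyway convention)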
