
Is it possible to see other containers running on the same host from the intermediate containers that Docker creates while building a Dockerfile?

I need to connect to my dockerized database, which is already up and running. How can I connect to it from my Dockerfile? Is it possible or not?

postgres.docker-compose.yml

This is my postgres.docker-compose.yml, the first container that I run:

version: '3.7'

services:
  postgres:
    image: postgres:13
    env_file:
      - .postgres.env
    restart: always
    networks: 
      - take-report

networks:
  take-report:
    name: take-report

A simple container. Please note that I can connect to it from outside the Dockerfile, but I want to connect to this dockerized postgres from this Dockerfile:

Dockerfile

# Note: I had issues with npm ci, so I ran it once in a temporary Dockerfile and created a new base image with the installed third-party packages, named take-report:dep
FROM take-report:dep as build_stage
ARG DATABASE_URL
ENV DATABASE_URL=$DATABASE_URL
# README: Because WORKDIR is already set in take-report:dep, I skip setting it here again
# WORKDIR /app
COPY prisma ./prisma/
COPY . .
RUN npx prisma generate
RUN npm run build
# Cannot connect to the database from here even though the DATABASE_URL is correct.
RUN echo $DATABASE_URL
# error message: Error: P1001: Can't reach database server at `postgres`:`5432`
RUN npm run prisma:dev
RUN npm prune --production

FROM node:16.14.0-alpine3.15

WORKDIR /app

COPY --from=build_stage /app/node_modules ./node_modules
COPY --from=build_stage /app/package*.json ./
COPY --from=build_stage /app/tsconfig*.json ./
COPY --from=build_stage /app/dist ./dist
COPY --from=build_stage /app/dist/prisma ./prisma
COPY --from=build_stage /app/prisma/schema.prisma ./prisma/schema.prisma
# COPY --from=build_stage /app/prisma ./prisma

EXPOSE $APP_PORT

docker-compose.yml

version: '3.7'

services:
  take-report:
    image: take-report:v1
    restart: unless-stopped
    build: 
      context: .
      dockerfile: Dockerfile
      args:
        - DATABASE_URL
    ports: 
      - ${APP_EXPOSED_PORT}:$APP_PORT
    env_file:
      - .env
    networks:
      - take-report
      - traefik_default
    labels:
      - "traefik.enable=true"
    command: npm run start:prod

networks:
  traefik_default:
    external: true
  take-report: 
    external: true

As you can see, I put the take-report and postgres containers in the same network so they can see each other. Note that the container created by the docker-compose.yml file can see and connect to the DATABASE_URL. So I guess all I need is to specify the network for the intermediate containers that Docker creates while building my custom image. In other words, I want to somehow tell Docker which external network to use while building this custom image from the Dockerfile.
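
For instance, this is roughly the command I am hoping exists (a sketch, not tested; user:pass and mydb are placeholders, and the classic builder reportedly accepts a custom network name for --network while BuildKit only accepts default, none, or host):

docker build \
  --network take-report \
  --build-arg DATABASE_URL=postgresql://user:pass@postgres:5432/mydb \
  -t take-report:v1 .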

Is that possible?
In case something is not clear, please tell me and I will clarify it.
Thanks regardless.

Edit #1 – Add more info:

I have to say that when I issue docker-compose -f postgres.docker-compose.yml up, it creates the take-report network for me, and I can connect to it in the docker-compose.yml.
The second point: the docker-compose.yml container can see postgres because they are in the same network, namely the take-report network.

3 Answers


  1. In your first docker-compose, the network is not set up as an external network, while in the second docker-compose the network with the same name is set up as external.
    I don't know if that works, but you might make the first one external as well, as sketched below.
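
    A minimal sketch of that approach: create the network yourself once, then mark it external: true in both compose files so neither project tries to own it.

    docker network create take-report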

    You could inspect the network to see if all containers (postgres and take-report) are in the same network.

    docker network inspect take-report
    

    If you want to access the postgres service from the take-report service (which is in another docker-compose), you cannot use the service name but have to go through the IP address (at least, that's how I do it).

    So, specify IP addresses for the services; they should appear on the Docker network.

    Then you can try to access this IP address from the intermediate containers. (I have not tested this either, but I can't see why it would not work.)
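
    For example, a quick reachability check from a throwaway container on the same network (a sketch; pg_isready ships with the postgres image, and 172.30.0.12 stands in for whatever static IP you assigned):

    docker run --rm --network take-report postgres:13 pg_isready -h 172.30.0.12 -p 5432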

  2. To elaborate on the comments in the previous answer:
    This is our Postgres docker-compose.yml:

    version: '3.9'
    
    services:
    
      db:
        image: postgres:latest
        restart: "no"
        container_name: MyDb
        volumes:
          - ./database:/var/lib/postgresql/data
        ports:
          - "8002:5432"
        environment:
          POSTGRES_PASSWORD: root
          POSTGRES_USER: root
          POSTGRES_DB: MyDb
        networks:
          my-net:
            ipv4_address: 172.30.0.12
    
    networks:
      my-net:
        external: true
        name: my-net
    

    The host has a number of subdomains pointing to it, and an Nginx proxy server (also running in a container) forwards DB requests to port 5432 at IP 172.30.0.12.
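
    For example, one way to sanity-check that address from another container on my-net (a sketch using the credentials from the compose file above):

    docker run --rm --network my-net -e PGPASSWORD=root postgres:latest \
      psql -h 172.30.0.12 -p 5432 -U root -d MyDb -c 'SELECT 1'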

    (Sorry for the brevity. I am working on an article that explains this in more detail, but I hope this helps so far.)

  3. Is it possible to see other containers running on the same host from the intermediate containers that Docker creates while building a Dockerfile?

    No. (*)

    The biggest technical challenge is that image builds run on the "default bridge network", whereas every Compose network is a "user-defined bridge network". Since the image builds aren't on the same network as the thing you're trying to connect to, they won't be able to connect.
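
    You can see the separation yourself with a sketch like this (the build container only shows up on the bridge network while a RUN step is actually executing):

    docker network inspect bridge --format '{{json .Containers}}'
    docker network inspect take-report --format '{{json .Containers}}'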

    You also claim the database is already running. Compose doesn't have any primitives to guarantee that image builds happen in any particular order, so even if the application has depends_on: [the-database], there's no guarantee the database will be running before the application image is built.

    I need to connect to my dockerized database, which is already up and running. How can I connect to it from my Dockerfile? Is it possible or not?

    This doesn’t really make sense. Consider a binary like, say, /usr/bin/psql: even if you’re connecting to different databases with different schemas and credentials, you don’t need to rebuild the database client. In the same way, your Docker image shouldn’t have "compiled in" the contents of any single database.

    Imagine running:

    # Set up the database and preload data
    docker-compose up -d db
    PGHOST=localhost PGPORT=5432 ./populate-db.sh
    
    # Build the image
    docker-compose build
    
    # Clean absolutely everything up
    docker-compose down
    docker-compose rm -v
    
    # Start over, with the prebuilt image but an empty database
    docker-compose up -d
    

    Or, if this sequence sounds contrived, imagine using docker-compose push to send the image to a registry and then running it on a different system that doesn't have the database set up yet.

    The usual approach to this is to run any sort of content generators outside of your image build process and check them into your source tree. If you need to run tasks like migrations you can run them in an entrypoint wrapper script, as the container starts up but before it actually executes the server; How do you perform Django database migrations when using Docker-Compose? has some options that are largely not framework-specific.
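
    A minimal sketch of such a wrapper, assuming Prisma's migrate deploy is the migration step you need (swap in whatever npm run prisma:dev actually runs):

    #!/bin/sh
    # entrypoint.sh: run migrations at container start, then hand
    # control to the main command (the image's CMD, e.g. npm run start:prod)
    set -e
    npx prisma migrate deploy
    exec "$@"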

    (*) On a macOS or Windows host, you might be able to use the magic host.docker.internal host name to call back out to the host, which could then reach other containers via their published ports:. On a Linux system, building the image on the host network would be a different hack to achieve the same thing. In addition to not being portable across Docker installations, the other considerations above still apply.
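
    A hedged sketch of both hacks; note the postgres compose file above doesn't publish any ports:, so you would first have to add a mapping such as "5432:5432" (user:pass and mydb are placeholders):

    # Linux only: run the build's RUN steps on the host network
    DATABASE_URL=postgresql://user:pass@localhost:5432/mydb \
      docker build --network host --build-arg DATABASE_URL -t take-report:v1 .

    # Docker Desktop (macOS/Windows): target the magic host name instead
    docker build \
      --build-arg DATABASE_URL=postgresql://user:pass@host.docker.internal:5432/mydb \
      -t take-report:v1 .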
