
I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.

In the Flask application, I use this to connect SQLAlchemy to PostgreSQL:

engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')

And this is my docker-compose.yml

version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin

  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db

  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db

I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying

conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again

Does anybody know how to fix this? Thank you so much for your help

2 Answers


  1. Your containers are on different networks and that is why they don’t see each other.

    When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.

    When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
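    You can confirm this yourself (the network name depends on your project directory; `myproject` below is just an example):

    ```shell
    # Compose puts its services on a network named <project>_default
    docker network ls

    # Inspect which containers are attached to the Compose network
    docker network inspect myproject_default
    ```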

    There are several ways to fix this, but this one will serve you in many other scenarios:

    Run docker container ls and identify the name or ID of the db container that was started with docker-compose

    Then run your container with:

    # ID_or_name from the previous point
    docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
    

    This attaches the new container to the same network as your database container.

    Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
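    A minimal sketch of that approach (the network name `shared-net` is just an example):

    ```shell
    # Create the shared network once, outside of Compose
    docker network create shared-net
    ```

    ```yaml
    # docker-compose.yml -- tell Compose to use the pre-created network
    networks:
      default:
        name: shared-net
        external: true
    ```

    After `docker-compose up`, other containers can join with `docker run --network shared-net -p 5000:5000 employee-manager`.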

  2. docker run doesn’t read any of the information in the docker-compose.yml file, and it doesn’t see things like the Docker network that Compose automatically creates.

    In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:

    docker-compose build
    docker-compose up -d # will delete and recreate changed containers
    

    (If the name of the image is important to you – maybe you’re pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I’d also avoid replacing the image’s content with volumes:, since this misses any setup or modification that’s done in the Dockerfile and it means you’re running untested code if you ever deploy the image without the mount.)
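    As a sketch of combining the two keys (the registry and tag are illustrative):

    ```yaml
    # docker-compose.yml -- build locally, but tag the result for pushing
    backend:
      build:
        context: .
        dockerfile: Dockerfile
      image: registry.example.com/employee-manager:latest
      ports:
        - 8000:8000
    ```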
