
I have a Node.js app, and I have a routine to back up a Postgres database daily, using spawn to run pg_dumpall.

The question is: how do I execute a command from one container in another?

I have tried enabling SSH in both containers, but I can't connect them automatically using a password or a public key.

NODEJS FUNCTION:

    import { spawn } from 'child_process';

    const backupDB = (path: string) => {
        return new Promise((resolve, reject) => {

            // THIS WORKS WELL IF I EXECUTE THIS FUNCTION OUTSIDE THE CONTAINER
            const backupProcess = spawn('bash', ['-c', `docker exec ${dbOptions.container} pg_dumpall -U ${dbOptions.user} > ${path}`]);

            backupProcess.on('exit', (code, signal) => {
                if (code)
                    console.log('Backup process exited with code ', code);
                else if (signal)
                    console.error('Backup process was killed with signal ', signal);
                else
                    console.log('Successfully backed up the database');
                resolve(code);
            });

            backupProcess.on('error', (error) => {
                console.log(error);
                resolve('An error occurred while backing up the database');
            });
        })
    };

DOCKER COMPOSE FILE:

  nodejs-server:
    image: nodejs-server
    build:
      context: ../backend_apollo_server_express
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
    environment:
      - PROTOCOL=http://
      - HOST=localhost
      - PORT=4000
      - JWT_SECRET=appsecret321
      - JWT_EXPIRESIN=300
      - WORKER_POOL_ENABLED=0
      - DB_NAME=lims
      - DB_USER=lims
      - DB_PASSWORD=lims
      - CONTAINER_NAME=frontend_postgres_1
      - DB_SCHEMA=public
      - "DATABASE_URL=postgresql://lims:lims@postgres/lims?schema=public"
    depends_on:
      - postgres
    volumes:
      - ../backend_apollo_server_express:/usr/src/app
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - app-volume:/root/.ssh
 
  postgres:
    container_name: db_postgres
    command: sh -c "service ssh start && runuser -u postgres postgres"
    image: postgresc
    build:
      context: ../backend_apollo_server_express
      dockerfile: Dockerfile.database
    environment:
      - "POSTGRES_USER=lims"
      - "POSTGRES_PASSWORD=lims"
    volumes:
      - /home/javier/lims/dockerVolumes/db:/var/lib/postgresql/data
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - app-volume:/usr/src/shared-volume
    ports:
      - "5434:5432"

volumes:
  app-volume:

EDIT 13/04/2022:

I am implementing @David Maze's approach to solve this, but I have found two problems.

1- pg_dump and pg_dumpall don't accept the password as a parameter, so I have to use .pgpass. The problem is that psql and pg_dump work as expected without asking for the password, but pg_dumpall still asks for the password, and I don't understand why.

.pgpass:

    postgres:5432:lims:lims:lims
    # postgres is the docker compose network alias for the container

.env

    PGPASSFILE=/usr/src/app/db/.pgpass
    # Using the PGPASSFILE environment variable to point at the .pgpass file
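
A likely explanation, worth verifying: pg_dumpall first connects to a maintenance database (postgres by default, or whatever -l specifies) before dumping each database in the cluster, and the .pgpass entry above only matches connections to the lims database. A wildcard in the database field should cover every connection pg_dumpall makes; alternatively, the PGPASSWORD environment variable (as used in the answer below) avoids .pgpass entirely.

    postgres:5432:*:lims:lims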

2- I need to spawn the process from Node.js. I have tried different ways, but I always receive exit code 1 and can't see the error message.

First approach

    const backupProcess = spawn('pg_dump', [
        `-h postgres`,
        `-U lims`,
        `-d lims`,
        `-f ./someFile.sql`
    ]);

Second approach

    const backupProcess = spawn('pg_dump', ['-c',
        `-h postgres -U lims -d lims-f ./someFile.sql`
    ]);
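
For reference, a sketch of what may be going wrong here: spawn does no shell-style word splitting, so each flag and its value must be a separate array element, and pg_dump writes its error messages to stderr, which has to be read explicitly to be seen:

    import { spawn } from 'child_process';

    // Each flag and its value is a separate array element; spawn passes
    // them through verbatim, with no shell parsing.
    const backupProcess = spawn('pg_dump', [
        '-h', 'postgres',
        '-U', 'lims',
        '-d', 'lims',
        '-f', './someFile.sql',
    ]);

    // pg_dump reports failures on stderr; print it so exit code 1 comes
    // with an error message.
    backupProcess.stderr.on('data', (chunk) => console.error(chunk.toString()));
    backupProcess.on('exit', (code) => console.log('pg_dump exited with code', code));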

2 Answers


  1. You actually can, but this is fishy, as you would be enabling one container to have control of others, meaning that if compromised you could be in trouble.

    There are two ways around this:

    1. Use docker exec to run the command on the database container from your host (scheduled with crontab, for example); see the sketch after this list.
    2. Put the command inside a container with a crontab daemon inside; you'll just have to establish a connection to your DB from inside that container.
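
    A minimal sketch of option 1, using the container name and user from the compose file above (the destination path is hypothetical):

        # Host crontab entry: dump the whole cluster from the db container daily at 02:00.
        # /home/javier/backups/lims.sql is a hypothetical destination path.
        0 2 * * * docker exec db_postgres pg_dumpall -U lims > /home/javier/backups/lims.sql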
  2. One container can’t run a command in another container. At the same time, most relational databases are designed so that you can communicate with them over TCP; you can talk to the database from outside its container and you don’t need Docker-specific tricks to talk to it.

    For your use case, the important detail is that pg_dumpall takes --host, --port, --username, and similar parameters, and also honors the standard PostgreSQL environment variables like $PGHOST and $PGUSER. That means that, if pg_dumpall is in the same image as your node application, then you can use the normal Node child_process API to just run it.
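
    As a quick illustration (a sketch, not part of the original answer), these two invocations are equivalent ways of pointing pg_dumpall at the database container:

        import { spawn } from 'child_process';

        // Explicit command-line flags...
        spawn('pg_dumpall', ['--host', 'postgres', '--username', 'lims']);

        // ...or the standard PostgreSQL environment variables.
        spawn('pg_dumpall', [], {
            env: { ...process.env, PGHOST: 'postgres', PGUSER: 'lims' },
        });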

    In your Dockerfile, you need to install the PostgreSQL command-line tools. The default node image is Debian based, so this will look something like

    FROM node:16
    RUN apt-get update \
     && DEBIAN_FRONTEND=noninteractive \
        apt-get install --no-install-recommends --assume-yes \
          postgresql-client
    
    WORKDIR /app
    COPY package*.json ./
    ...
    

    When you go to run it, you can just run pg_dumpall as a subprocess. You do not need docker exec, an ssh connection, or anything else. Note that this will run in the Node container but that’s probably not a problem.

    import { open } from 'fs/promises';
    import { spawn } from 'child_process';
    import { Writable } from 'stream';
    
    const runPgDumpall = async (stream: Writable) => {
      const subprocess = spawn('pg_dumpall', [], {
        stdio: ['inherit', stream, 'inherit']
      });
      return new Promise((resolve, reject) => {
        subprocess.on('exit', (code, signal) => resolve(code));
        subprocess.on('error', err => reject(err));
      });
    };
    
    const backupDB = async (path: string) => {
      const fh = await open(path, 'w');
      const stream = fh.createWriteStream();
      try {
        return await runPgDumpall(stream);
      } finally {
        stream.end();
      }
    }
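
    For example (the backup path is hypothetical), the helper could then be called like this:

        // Hypothetical usage; pg_dumpall picks up $PGHOST, $PGUSER, and
        // $PGPASSWORD from the container environment set below.
        backupDB('/usr/src/app/db/backup.sql')
            .then((code) => console.log('pg_dumpall exited with code', code))
            .catch((err) => console.error('Failed to run pg_dumpall', err));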
    

    Finally, in the docker-compose.yml file you need to give the Node application details on how to contact the database container. You can pass these using environment variables.

    version: '3.8'
    services:
      nodejs-server:
        build: ../backend_apollo_server_express
        environment:
          - PGHOST=postgres
          - PGUSER=lims
          - PGPASSWORD=lims
    
      postgres:
        image: 'postgres:13'
        environment:
          - POSTGRES_USER=lims
          - POSTGRES_PASSWORD=lims
        volumes:
          - /home/javier/lims/dockerVolumes/db:/var/lib/postgresql/data
    