
I have dockerized a NestJS application, but running it shows

Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error

and sometimes it shows
Cannot find module 'webpack'

Strangely, it works fine on Windows, but the errors come up on macOS and Amazon Linux.
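
An "Exec format error" from a native addon almost always means the binary was compiled for a different OS or CPU architecture than the one the container runs on. One way to confirm (a sketch, assuming the api container from the compose file below is running; the file utility may need an apk add file on Alpine first):

docker compose exec api sh -c 'file node_modules/argon2/lib/binding/napi-v3/argon2.node'
# a healthy Alpine container reports an ELF binary for its own CPU,
# e.g. "ELF 64-bit LSB shared object, x86-64"; a Windows PE file, or
# ELF for the wrong CPU, confirms the architecture mismatch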

Dockerfile

###################
# BUILD FOR LOCAL DEVELOPMENT
###################

FROM node:16-alpine AS development

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm ci

COPY . .

###################
# BUILD FOR PRODUCTION
###################

FROM node:16-alpine AS build

WORKDIR /usr/src/app

COPY package*.json ./

COPY --from=development /usr/src/app/node_modules ./node_modules

COPY . .

RUN npm run build

ENV NODE_ENV production

RUN npm ci --only=production && npm cache clean --force

USER node

###################
# PRODUCTION
###################

FROM node:16-alpine AS production

COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist

CMD [ "node", "dist/main.js" ]

docker-compose.yml

version: '3.9'

services:
    api:
        build:
            dockerfile: Dockerfile
            context: .
            # Only will build development stage from our dockerfile
            target: development
        env_file:
            - .env
        volumes:
            - api-data:/usr/src/app
        # Run in dev Mode: npm run start:dev
        command: npm run start:dev
        ports:
            - 3000:3000
        depends_on:
            - postgres
        restart: 'always'
        networks:
            - prism-network
    postgres:
        image: postgres:14-alpine
        environment:
            POSTGRES_DB: 'prism'
            POSTGRES_USER: 'postgres'
            POSTGRES_PASSWORD: 'mysecretpassword'
        volumes:
            - postgres-data:/var/lib/postgresql/data
        ports:
            - 5432:5432
        healthcheck:
            test:
                [
                    'CMD-SHELL',
                    'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
                ]
            interval: 10s
            timeout: 5s
            retries: 5
        networks:
            - prism-network
networks:
    prism-network:

volumes:
    api-data:
    postgres-data:

I am stumped as to why it isn't working.

3 Answers


  1. Try this simpler single-stage Dockerfile (note that it assumes a Yarn project with a yarn.lock; substitute the npm equivalents if you use npm):

    FROM node:16-alpine
    WORKDIR /usr/src/app
    COPY yarn.lock ./
    COPY package.json ./
    RUN yarn install
    COPY . .
    RUN yarn build
    CMD [ "node", "dist/main.js" ]
    

    docker-compose.yml

    version: "3.7"
    
    services:
      service_name:
        container_name: orders_service
        image: service_name:latest
        build: .
        env_file:
          - .env
        ports:
          - "3001:3001"
        volumes:
          - .:/usr/src/app
          - /usr/src/app/node_modules
    
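    A rebuild is needed before the new Dockerfile takes effect; with the compose file above that is something like:

        docker compose up --build service_name
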
  2. Change (or check) two things in your setup:

    1. In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory:

      services:
        api:
          build: { ... }
          # volumes:                 <-- delete
        #   - api-data:/usr/src/app  <-- delete
      volumes:
        # api-data:                  <-- delete
        postgres-data:             # <-- keep
      
      
    2. Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line:

      node_modules
      

    What's going on here? Without that .dockerignore entry, the Dockerfile's COPY . . line overwrites the node_modules tree that RUN npm ci just produced with your host's copy of it. If the host's OS or CPU architecture differs from the container's (for example, a Linux container built on a Windows host), the copied native binaries such as argon2.node fail with exactly the sort of error you show.
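
    In practice the ignore file usually grows beyond that one line; a sketch (only node_modules matters for this bug, the other entries are common additions, not requirements):

        # .dockerignore
        node_modules
        dist
        .git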

    The volumes: block is a little more subtle. It makes Docker create a named volume whose contents replace the entire /usr/src/app tree in the image; in other words, you're running the contents of the volume, not the contents of the image. The first time (and only the first time) you run the container, Docker copies the image's files into the volume. So it looks as though you're running the image, because the files start out identical, but they're actually coming out of the volume. If you rebuild the image, the volume does not get updated, so you're still running the old code.
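
    A consequence of that copy-once behaviour: if you do keep a named volume over /usr/src/app, you have to delete the volume after every image change so it gets re-seeded, something like (the volume name is an assumption; Compose prefixes it with your project name by default):

        docker compose down
        docker volume rm <project>_api-data
        docker compose up --build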

    Without the volumes: block, you’re running the code out of the image, which is a standard Docker setup. You shouldn’t need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.
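
    For example, injecting a configuration file is typically a single read-only bind mount (the file path here is hypothetical):

        services:
          api:
            volumes:
              - ./config/production.yaml:/usr/src/app/config/production.yaml:ro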

  3. I wouldn't delete the volumes, because you'd lose hot reloading in development.

    Try this in the volumes: section instead; the bind mount gives the dev server your live source for hot reloading, and the anonymous volume keeps the container's own node_modules in place:

    volumes:
      - /usr/src/app/node_modules
      - .:/usr/src/app
    
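    In context, the api service from the question would look something like this (a sketch; everything except volumes: is unchanged from the question's compose file):

        services:
          api:
            build:
              dockerfile: Dockerfile
              context: .
              target: development
            command: npm run start:dev
            volumes:
              # anonymous volume masks this path, so the Linux-built
              # node_modules from the image is used instead of the host's
              - /usr/src/app/node_modules
              # bind mount: the dev server sees your edits immediately
              - .:/usr/src/app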