I have dockerized a NestJS application, but running it shows
Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error
and sometimes it shows
Cannot find module 'webpack'
Strangely, it works fine on Windows, but the errors come up on macOS and Amazon Linux.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:16-alpine AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
###################
# BUILD FOR PRODUCTION
###################
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
COPY --from=development /usr/src/app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV production
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:16-alpine AS production
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.9'

services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    env_file:
      - .env
    volumes:
      - api-data:/usr/src/app
    # Run in dev mode: npm run start:dev
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network

  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: 'prism'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'mysecretpassword'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
        ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - prism-network

networks:
  prism-network:

volumes:
  api-data:
  postgres-data:
I am stumped as to why it isn't working.
3 Answers
Change (or check) two things in your setup:

1. In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory.
2. Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line node_modules.

What's going on here? If you don't have the .dockerignore line, then the Dockerfile's COPY . . line overwrites the node_modules tree from the RUN npm ci line with your host's copy of it; and if the host has a different OS or architecture (for example, a Linux container on a Windows host), loading a native binary like argon2.node can fail with exactly the sort of "Exec format error" you show.

The volumes: block is a little more subtle. It causes Docker to create a named volume, and the contents of the volume replace the entire /usr/src/app tree in the image; in other words, you're running the contents of the volume, not the contents of the image. But the first time (and only the first time) you run the container, Docker copies the contents of the image into the volume. So it looks like you're running the image, and you have the same files, but they're actually coming out of the volume. If you change the image, the volume does not get updated, so you're still running the old code.

Without the volumes: block, you're running the code out of the image, which is the standard Docker setup. You shouldn't need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.

I wouldn't delete volumes, because of the hot reloading.
Try this in the volumes section to be able to persist data.
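A common pattern that keeps hot reloading while still avoiding the architecture mismatch (a sketch, not necessarily the exact snippet the comment had in mind; the paths assume the /usr/src/app layout from the Dockerfile above) is a bind mount for the source tree plus an anonymous volume for node_modules:

```yaml
services:
  api:
    volumes:
      # Bind mount: edits on the host are visible inside the container,
      # so `npm run start:dev` can hot-reload on changes.
      - ./:/usr/src/app
      # Anonymous volume: shadows the bind mount at this one path, so the
      # container keeps the image's Linux-built node_modules (argon2
      # included) instead of the host's copy, avoiding "Exec format error".
      - /usr/src/app/node_modules
```

This still relies on .dockerignore containing node_modules, as described above, so that COPY . . at build time doesn't pull the host's copy into the image either.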