
I have converted a legacy react app from using Webpack 3 to use Next.js 12 and Webpack 5.

I am currently trying to deploy the project using Docker through Bitbucket Pipelines, but when running next build it gets stuck on 'Creating an optimized production build', eventually runs out of memory, and the build fails.

I am using the same Dockerfile setup as the Next.js with-docker example, and the docker build runs perfectly on my local machine with the same steps.

Has anyone experienced a similar issue? No errors are shown during yarn install or the build itself, and I have outputStandalone set to true.
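For reference, in Next.js 12 the standalone output is an experimental flag enabled in next.config.js along these lines (a sketch; any other config options are omitted here):

```javascript
// next.config.js
module.exports = {
  experimental: {
    // emit a minimal standalone server to .next/standalone,
    // which the runner stage below copies and starts with `node server.js`
    outputStandalone: true,
  },
};
```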

# Install dependencies only when needed

FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app

ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/package.json ./package.json

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

CMD ["node", "server.js"]

2 Answers


  1. Given the out-of-memory error, you could try raising Node's memory limit before yarn build:

    ENV NODE_OPTIONS=--max-old-space-size=4096
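
    In a multi-stage Dockerfile like the one in the question, this would go in the builder stage, just before the build step. A sketch (the 4096 MB value is an assumption; tune it to your runner's available memory):

    ```dockerfile
    FROM node:16-alpine AS builder
    WORKDIR /app
    COPY --from=deps /app/node_modules ./node_modules
    COPY . .

    # Raise the V8 old-space heap limit to 4 GB for the build only;
    # the runner stage is unaffected since ENV does not cross stages
    ENV NODE_OPTIONS=--max-old-space-size=4096
    ENV NEXT_TELEMETRY_DISABLED 1

    RUN yarn build
    ```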
    
  2. I ran into a similar issue with npm@7, due to https://github.com/npm/cli/issues/2011 / https://github.com/npm/cli/issues/3208 (rant intended).

    A workaround was to increase the Docker service memory limit in bitbucket-pipelines.yml (the default is 1024 MB):

    definitions:
      services:
        docker:
          memory: 3072
    

    or possibly even doubling the resources allocated for your step:

    definitions:
      services:
        docker:
          memory: 7128
    pipelines:
      default:
        - step:
            size: 2x
            services:
              - docker
            script:
              - docker build .
    

    Beware that service resources are subtracted from those allocated to your step, so the step script itself runs with less memory.

    Also remember you will be charged double for your size: 2x steps.

    See https://support.atlassian.com/bitbucket-cloud/docs/databases-and-service-containers/#Service-memory-limits
