Our React app is configured to build and deploy using the CRA scripts and Bitbucket Pipelines.

Most of our builds are failing when running yarn build, with the following error:

error Command failed with exit code 137.

This is an out of memory error.

We tried setting GENERATE_SOURCEMAP=false as a deployment environment variable, but that did not fix the issue (https://create-react-app.dev/docs/advanced-configuration/).
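
For example, the flag can also be prefixed directly onto the build command rather than set as a deployment variable (a sketch only, assuming the standard CRA build script):

GENERATE_SOURCEMAP=false yarn build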

We also tried increasing the maximum memory available for a step by running the following:

node --max-old-space-size=8192 scripts/build.js

Increasing to max memory did not resolve the issue.
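
(For reference, the same flag can also be passed through NODE_OPTIONS so that yarn build picks it up inside the pipeline step; this is a sketch only, and as noted above, more memory did not resolve it for us:)

NODE_OPTIONS="--max-old-space-size=8192" yarn build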

This is blocking our development, and we aren't sure how to resolve the issue.

We could move to a new CI/CD service but that is a lot more work than desired.

Are there other solutions that could solve this problem?

Below is the bitbucket-pipelines.yml file:

image: node:14

definitions:
  steps:
    - step: &test
        name: Test
        script:
          - yarn
          - yarn test --detectOpenHandles --forceExit --changedSince $BITBUCKET_BRANCH
    - step: &build
        name: Build
        size: 2x
        script:
          - yarn
          - NODE_ENV=${BUILD_ENV} yarn build
        artifacts:
          - build/**
    - step: &deploy_s3
        name: Deploy to S3
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.8
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              S3_BUCKET: $S3_BUCKET
              LOCAL_PATH: "./build/"
              ACL: 'public-read'
    - step: &auto_merge_down
        name: Auto Merge Down
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true
  caches:
    jest: /tmp/jest_*
    node-dev: ./node_modules
    node-stage: ./node_modules
    node-release: ./node_modules
    node-prod: ./node_modules


pipelines:
  branches:
    dev:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-dev
                  - jest
                <<: *test
            - step:
                caches:
                  - node-dev
                <<: *build
                deployment: Dev Env
      - step:
          <<: *deploy_s3
          deployment: Dev
    stage:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-stage
                  - jest
                <<: *test
            - step:
                caches:
                  - node-stage
                <<: *build
                deployment: Staging Env
      - step:
          <<: *deploy_s3
          deployment: Staging
    prod:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-prod
                  - jest
                <<: *test
            - step:
                caches:
                  - node-prod
                <<: *build
                deployment: Production Env
      - parallel:
          steps:
            - step:
                <<: *deploy_s3
                deployment: Production
            - step:
                <<: *auto_merge_down

Answers


  1. Chosen as BEST ANSWER

    It turns out the terser-webpack-plugin package was running the maximum number of jest workers during our yarn build step, causing the out-of-memory error (https://www.npmjs.com/package/terser-webpack-plugin).

    After removing that plugin from our package.json, the build no longer fails and the jest workers are no longer spawned during the build.

    You can also set parallel to false in the TerserWebpackPlugin config so that it does not spawn workers.

    This behavior seems incorrect and is causing our pipeline, and likely others, to run out of memory.
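
    For anyone wanting to do the same, dropping the explicit dependency is a one-liner (a sketch, assuming terser-webpack-plugin was a direct dependency in package.json; react-scripts still bundles its own copy for minification):

    yarn remove terser-webpack-plugin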


  2. Try adding the following definition:

    definitions:
      services:
        docker:
          memory: 4096
    

    We found it when we had some similar issues, like: https://confluence.atlassian.com/bbkb/bitbucket-pipeline-execution-hangs-on-docker-build-step-1189503836.html

    Edit:
    Sorry, my bad, no, you don't need docker. Note that the memory allocated is shared by both the script in the step and any services on the step, so maybe remove the parallel block and let jest run on its own before you start the build; it can be a bit of a memory hog. If you must run in parallel, at least limit the impact of jest by running tests sequentially (jest --runInBand) or with a lower number of workers (jest --maxWorkers=4).
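
    Roughly what I mean, as a sketch against the pipeline in the question (keeping the existing anchors and just dropping the parallel block for the dev branch):

    pipelines:
      branches:
        dev:
          # run the tests first so jest has the step's memory to itself
          - step:
              caches:
                - node-dev
                - jest
              <<: *test
          # then build, without competing with jest for memory
          - step:
              caches:
                - node-dev
              <<: *build
              deployment: Dev Env
          - step:
              <<: *deploy_s3
              deployment: Dev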

  3. You can use even bigger builders with size: 4x and size: 8x, but only with self-hosted pipeline runners, which will obviously need at least 16 GB of memory.

    https://support.atlassian.com/bitbucket-cloud/docs/step-options/#Size

    definitions:
      anchors:
    
        - &build-step
            name: Build
            size: 4x
            runs-on: 
              - 'self.hosted'
              - 'my.custom.label'
            script:
              - yarn
              - NODE_ENV=${BUILD_ENV} yarn build
            artifacts:
              - build/**
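
    The anchor can then be pulled into a branch pipeline the same way as in the question (a sketch, reusing the question's deployment names):

    pipelines:
      branches:
        dev:
          - step:
              <<: *build-step
              deployment: Dev Env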
    