I am trying to update my Lambda function via Terraform and GitLab CI/CD, but I'm running into an issue where the CI/CD pipeline succeeds yet the Lambda code on AWS does not update. I can see that the function is being touched in the AWS console (Last Modified shows the right time), but the code itself does not change. Here is what my code looks like:

.gitlab-ci.yml:

image:
  name: hashicorp/terraform:light
  entrypoint:
    - "/usr/bin/env"
    - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

#-------------------- Stages ------------------
stages:
  - package
  - import
  - validate
  - plan
  - apply

install_dependencies:
  image: node:18.19.0
  stage: package
  script:
    - cd lambdas/notifications
    - npm ci --cache .npm --prefer-offline
    - cd ../maintenance
    - npm ci --cache .npm --prefer-offline
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - lambdas/notifications/node_modules/
      - lambdas/maintenance/node_modules/
  tags:
    - $CDP_K8S_LARGE

# DOCKER BUILD
build-dev-docker:
  extends: .kaniko-package
  stage: package
  variables:
    KANIKO_BUILD_ARGS: "--build-arg ENVIRONMENT=development --single-snapshot"
    EXTRA_DOCKER_TAG: "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_PIPELINE_IID"
  artifacts:
    paths:
      - lambdas/notifications/lambda_batch.zip
      - lambdas/maintenance/lambda_maintenance.zip
  only:
    refs:
      - develop
  environment:
    name: dev
  tags:
    - $CDP_K8S_LARGE

# TERRAFORM STAGES
import:
  stage: import
  before_script:
    - !reference [default, before_script]
    - source "$CI_PROJECT_DIR/before_script.sh"
  script:
    - terraform import aws_lambda_function.dev-notifications dev-notifications
    - terraform import aws_lambda_function.dev-maintenance dev-maintenance
  artifacts:
    paths:
      - terraform.tfstate

validate:
  stage: validate
  before_script:
    - !reference [default, before_script]
    - source "$CI_PROJECT_DIR/before_script.sh"
  script:
    - terraform validate
  artifacts:
    paths:
      - terraform.tfstate

plan:
  stage: plan
  before_script:
    - !reference [default, before_script]
    - source "$CI_PROJECT_DIR/before_script.sh"
  script:
    - terraform plan -out "planfile"
  dependencies:
    - build-dev-docker
    - validate
  artifacts:
    paths:
      - planfile
      - terraform.tfstate

apply:
  stage: apply
  before_script:
    - !reference [default, before_script]
    - source "$CI_PROJECT_DIR/before_script.sh"
  script:
    - terraform show -json
    - terraform apply -input=false "planfile"
  dependencies:
    - build-dev-docker
    - plan
  artifacts:
    paths:
      - terraform.tfstate

main.tf:

provider "aws" {
    region = "us-west-2"
}

resource "aws_lambda_function" "dev-notifications" {
    function_name = "dev-notifications"
    role = var.iam_role_arn
    handler = "index.handler"
    runtime = "nodejs20.x"

    filename = "${var.PROJECT_DIR}/lambdas/notifications/lambda_batch.zip"
    source_code_hash = filebase64sha256("${var.PROJECT_DIR}/lambdas/notifications/lambda_batch.zip")

    tags = {
        Application = "dev"
        Ou = "dev"
    }
}

output "notif_output" {
    value = aws_lambda_function.dev-notifications.function_name
}

resource "aws_lambda_function" "dev-maintenance" {
    function_name = "dev-maintenance"
    role = var.iam_role_arn
    handler = "index.handler"
    runtime = "nodejs20.x"

    filename = "${var.PROJECT_DIR}/lambdas/maintenance/lambda_maintenance.zip"
    source_code_hash = filebase64sha256("${var.PROJECT_DIR}/lambdas/maintenance/lambda_maintenance.zip")

    tags = {
        Application = "dev"
        Ou = "dev"
    }
}

output "maintenance_output" {
    value = aws_lambda_function.dev-maintenance.function_name
}
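
For completeness, the two variables referenced above (var.iam_role_arn and var.PROJECT_DIR) are declared separately, roughly like this (a sketch; the actual values are presumably fed in from the CI environment):

variable "iam_role_arn" {
  type = string
}

variable "PROJECT_DIR" {
  type = string
}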

And finally, I am containerizing everything so that Docker builds the zip files.

Dockerfile:

FROM node:18-alpine

WORKDIR /dir
COPY . /dir

# Install TypeScript globally
RUN npm install -g typescript

WORKDIR /dir/lambdas/notifications
RUN npm ci && npm run all

WORKDIR /dir/lambdas/maintenance
RUN npm ci && npm run all

CMD ["npm", "run", "start"]

My npm run all script basically does npm run build && del -f lambda_batch.zip && npm run zip for both lambda functions (just with a different name for each zip file). Everything seems to succeed in the pipeline, but my lambda function isn't updating. Any assistance is greatly appreciated!
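
For reference, the relevant scripts block in each package.json looks roughly like this (the build and zip bodies are illustrative sketches; only the all chain is exactly as quoted above):

{
  "scripts": {
    "build": "tsc",
    "zip": "zip -r lambda_batch.zip dist",
    "all": "npm run build && del -f lambda_batch.zip && npm run zip"
  }
}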

2 Answers


  1. Chosen as BEST ANSWER

    After a bunch of trial and error, I finally managed to figure it out by using artifacts in GitLab. Basically, I had to add an artifacts section to the Docker stage:

    build-dev-docker:
      extends: .kaniko-package
      stage: package
      variables:
        KANIKO_BUILD_ARGS: "--build-arg ENVIRONMENT=development --single-snapshot"
        EXTRA_DOCKER_TAG: "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_PIPELINE_IID"
      artifacts:
        paths:
          - lambdas/notifications/lambda_batch.zip
          - lambdas/maintenance/lambda_maintenance.zip
      only:
        refs:
          - develop
      environment:
        name: dev
      tags:
        - $CDP_K8S_LARGE
    

    The important piece is adding the paths to the zip files in the artifacts section, and ALSO copying the zip files out in your Dockerfile. If you only add the artifacts, the pipeline will still grab the old zip files from the repository instead of the ones generated inside the Docker container. So remember to do this in the Dockerfile as well:

    FROM node:18-alpine
    
    WORKDIR /dir
    COPY . /dir
    
    # Install TypeScript globally
    RUN npm install -g typescript
    
    WORKDIR /dir/lambdas/notifications
    RUN npm ci && npm run all
    # Copy the freshly built zip into the CI build directory so the
    # artifacts section above can pick it up
    RUN cp lambda_batch.zip /builds/dir/lambdas/notifications
    
    WORKDIR /dir/lambdas/maintenance
    RUN npm ci && npm run all
    # Same for the maintenance zip
    RUN cp lambda_maintenance.zip /builds/dir/lambdas/maintenance
    
    CMD ["npm", "run", "start"]
    

    Another alternative that does work is running npm run all directly in the job script, like this:

    install_dependencies:
      image: node:18.19.0
      stage: package
      script:
        - cd lambdas/notifications
        - npm ci --cache .npm --prefer-offline
        - npm run all
        - cd ../maintenance
        - npm ci --cache .npm --prefer-offline
        - npm run all
      cache:
        key: $CI_COMMIT_REF_SLUG
        paths:
          - lambdas/notifications/node_modules/
          - lambdas/maintenance/node_modules/
      tags:
        - $CDP_K8S_LARGE
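
    One caveat with this alternative (runner setups differ, so take this as a hedged note): jobs in later stages only see files that are handed over as artifacts, so you would most likely also need to declare the generated zips on this job, e.g.:

    install_dependencies:
      # ... same job as above ...
      artifacts:
        paths:
          - lambdas/notifications/lambda_batch.zip
          - lambdas/maintenance/lambda_maintenance.zip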
    

    I also want to mention that using Terraform's archive_file does work, but the issue I ran into was that the function code on AWS Lambda ended up being the TypeScript sources rather than the compiled JavaScript (my lambda functions are written in TypeScript, and npm run all compiles them to JavaScript before zipping), so that's why I ended up going the Docker route!
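
    That said, if you do want to stay with archive_file, zipping the compiled output directory instead of the sources should avoid the TypeScript problem. A sketch (dist/ is an assumption about where npm run build writes its output):

    data "archive_file" "lambda_maintenance" {
      type        = "zip"
      # dist/ is assumed to hold the compiled JavaScript from npm run build
      source_dir  = "${path.module}/lambdas/maintenance/dist"
      output_path = "${path.module}/lambdas/maintenance/lambda_maintenance.zip"
    }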


  2. If the file is not in the current working directory, you'll need to use path.module instead of var.PROJECT_DIR, like this:

    resource "aws_lambda_function" "dev-maintenance" {
        function_name = "dev-maintenance"
        role = var.iam_role_arn
        handler = "index.handler"
        runtime = "nodejs20.x"
    
        filename = "${path.module}/lambdas/maintenance/lambda_maintenance.zip"
        source_code_hash = filebase64sha256("${path.module}/lambdas/maintenance/lambda_maintenance.zip")
    
        tags = {
            Application = "dev"
            Ou = "dev"
        }
    }
    

    Also, I'm not sure why you are zipping the file via an npm script. You can do it with Terraform instead, using the archive_file data source:

    data "archive_file" "lambda_archive" {
      type        = "zip"
      source_file = "${path.module}/lambdas/maintenance/lambda_maintenance.js" 
      output_path = "${path.module}/lambdas/maintenance/lambda_maintenance.zip"
    }
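
    and then point the function at the archive's outputs, so the hash is always computed from the file Terraform just built:

    resource "aws_lambda_function" "dev-maintenance" {
      # ... other arguments as above ...
      filename         = data.archive_file.lambda_archive.output_path
      source_code_hash = data.archive_file.lambda_archive.output_base64sha256
    }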
    