
I am using GitLab CI/CD to deploy AWS services into AWS with Terraform. I have tested the deployment from my local machine to AWS and it was successful. When code is pushed to GitLab and the pipeline triggers, I should be getting the same "No changes" message. However, I am facing the "Error: Inconsistent dependency lock file" issue explained below.

[Screenshot: Terraform plan output from the local run]

I have pushed my changes to my GitLab repo, where CI/CD has been set up with AWS. I have created a .gitlab-ci.yml file that contains a build and a deploy stage. Below is my gitlab-ci file:

image:
  name: hashicorp/terraform:latest
  entrypoint: 
    - '/usr/bin/env' 
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  JQ_PLAN_FUNCTION: ' ( [.resource_changes[]?.change.actions?] | flatten) | {"create":(map(select(.=="create")) | length),"update":(map(select(.=="update")) | length),"delete":(map(select(.=="delete")) | length)} '
  PLAN_VAR_FILE: ' -var-file=${TF_ROOT}/env/${ENVIRONMENT}.tfvars -out=plan.cache '
  PLAN_BACKEND_CONFIG: ' -backend-config=address=${TF_ADDRESS} -backend-config=lock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}_state_file/lock -backend-config=unlock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}_state_file/lock -backend-config=username=${GITLAB_USERNAME} -backend-config=password=${GITLAB_PAT} -backend-config=lock_method=POST -backend-config=unlock_method=DELETE -backend-config=retry_wait_min=5 '

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform/

before_script:

stages:
  - validation
  - build
  - deploy
  - destroy

validate:
  stage: validation
  allow_failure: true
  script:
    - cd "${TF_ROOT}"
    - pwd
    - terraform validate
  except:
    refs:
      - main
  only:
    changes:
      - "*.tf"
      - "**/*.tf"
      
build_dev:
  stage: build
  environment:
    name: dev
  # tags:
  #   - dev1   
  variables:
    ENVIRONMENT: dev
  before_script:
    - apk add --update --no-cache jq
    - git config --global url."https://oauth2:${GITLAB_PAT}@gitlab.com".insteadOf https://gitlab.com
  script:
    - cd "${TF_ROOT}"
    - rm -rf .terraform 
    - echo ${PLAN_BACKEND_CONFIG}
    - echo ${PLAN_VAR_FILE}
    - terraform init ${PLAN_BACKEND_CONFIG}
    - terraform plan ${PLAN_VAR_FILE}
    - JQ_PLAN=${JQ_PLAN_FUNCTION}
    - terraform show -json "${TF_ROOT}/plan.cache" | jq -r "${JQ_PLAN}" > "${TF_ROOT}/plan.json"
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
      - ${TF_ROOT}/plan.json
    reports:
      terraform: ${TF_ROOT}/plan.json
    expire_in: 7 days
  rules:
    - if: '$CI_COMMIT_REF_NAME == "main"' # Plan in Main branch which will run AFTER the merge request is complete

deploy_dev:
  stage: deploy
  needs: [build_dev]
  # tags:
  #   - dev1   
  environment:
    name: dev
  script:
    - terraform apply -auto-approve -input=false ${TF_ROOT}/plan.cache
  allow_failure: false
  rules:
    - if: '$CI_COMMIT_REF_NAME == "main"' # Deploy AFTER the merge request is complete
      when: manual
  resource_group: ${ENVIRONMENT}
  interruptible: false

When the CI/CD pipeline is triggered, at the build stage, rather than stating "No changes. Your infrastructure matches the configuration.", it tries to create the resources again. The build completes successfully and outputs "Job succeeded", as shown in the log below. Why would it need to recreate the same resources when applying the Terraform plan?

Getting source from Git repository
00:29
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/d8552/ada-allwyn/.git/
Created fresh repository.
Checking out 9aab6ef0 as detached HEAD (ref is main)...
Skipping Git submodules setup
Restoring cache
00:01
Checking cache for /builds/d8552/ada-allwyn-6-protected...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted. 
Successfully extracted cache
Executing "step_script" stage of the job script
00:13
$ apk add --update --no-cache jq
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
(1/2) Installing oniguruma (6.9.8-r0)
(2/2) Installing jq (1.6-r2)
Executing busybox-1.35.0-r29.trigger
OK: 24 MiB in 34 packages
$ git config --global url."https://oauth2:${GITLAB_PAT}@gitlab.com".insteadOf https://gitlab.com
$ cd "${TF_ROOT}"
$ rm -rf .terraform
$ echo ${PLAN_BACKEND_CONFIG}
-backend-config=address=https://gitlab.com/api/v4/projects/45305328/terraform/state/ada-allwyn -backend-config=lock_address=https://gitlab.com/api/v4/projects/45305328/terraform/state/dev_state_file/lock -backend-config=unlock_address=https://gitlab.com/api/v4/projects/45305328/terraform/state/dev_state_file/lock -backend-config=username=[MASKED] -backend-config=password=[MASKED]glpat- -backend-config=lock_method=POST -backend-config=unlock_method=DELETE -backend-config=retry_wait_min=5
$ echo ${PLAN_VAR_FILE}
-var-file=/builds/d8552/ada-allwyn/env/dev.tfvars -out=plan.cache
$ terraform init ${PLAN_BACKEND_CONFIG}
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
- eks in modules/eks
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 4.49.0"...
- Finding latest version of hashicorp/tls...
- Installing hashicorp/aws v4.64.0...
- Installed hashicorp/aws v4.64.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.4...
- Installed hashicorp/tls v4.0.4 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Plan: 14 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + private_subnets_id = [
      + "subnet-",
      + "subnet-",
    ]
  + public_subnets_id  = [
      + "subnet-",
      + "subnet-",
    ]
  + vpc_id             = [
      + "vpc-",
    ]
─────────────────────────────────────────────────────────────────────────────
Saved the plan to: plan.cache
To perform exactly these actions, run the following command to apply:
    terraform apply "plan.cache"
$ JQ_PLAN=${JQ_PLAN_FUNCTION}
$ terraform show -json "${TF_ROOT}/plan.cache" | jq -r "${JQ_PLAN}" > "${TF_ROOT}/plan.json"
Saving cache for successful job
00:08
Creating cache /builds/d8552/ada-allwyn-6-protected...
/builds/d8552/ada-allwyn/.terraform/: found 15 matching artifact files and directories 
No URL provided, cache will not be uploaded to shared cache server. Cache will be stored only locally. 
Created cache
Uploading artifacts for successful job
00:03
Uploading artifacts...
/builds/d8552/ada-allwyn/plan.cache: found 1 matching artifact files and directories 
/builds/d8552/ada-allwyn/plan.json: found 1 matching artifact files and directories 
Uploading artifacts as "archive" to coordinator... 201 Created  id=4184752395 responseStatus=201 Created token=64_pn2Xx
Uploading artifacts...
/builds/d8552/ada-allwyn/plan.json: found 1 matching artifact files and directories 
Uploading artifacts as "terraform" to coordinator... 201 Created  id=4184752395 responseStatus=201 Created token=64_pn2Xx
Cleaning up project directory and file based variables
00:00
Job succeeded

At the deploy stage, the pipeline fails with the following error: "Error: Inconsistent dependency lock file". What changes do I need to make?

$ terraform apply -auto-approve -input=false ${TF_ROOT}/plan.cache
╷
│ Error: Inconsistent dependency lock file
│ 
│ The following dependency selections recorded in the lock file are
│ inconsistent with the configuration in the saved plan:
│   - provider registry.terraform.io/hashicorp/aws: required by this configuration but no version is selected
│   - provider registry.terraform.io/hashicorp/tls: required by this configuration but no version is selected
│ 
│ A saved plan can be applied only to the same configuration it was created
│ from. Create a new plan from the updated configuration.
╵
╷
│ Error: Inconsistent dependency lock file
│ 
│ The given plan file was created with a different set of external dependency
│ selections than the current configuration. A saved plan can be applied only
│ to the same configuration it was created from.
│ 
│ Create a new plan from the updated configuration.
╵
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1

Any suggestions on what I can do to fix the problem? I have used the following doc as a reference:

https://awstip.com/provisioning-infrastructure-on-aws-with-gitlab-using-gitlab-managed-terraform-state-d06c13c9efd1

2 Answers


  1. From what you’ve shown I understand that in GitLab CI each "stage" has an independent execution environment, and so if you want to share files between two stages those files either need to be in the source repository so that they will already be present when the job starts, or they need to be explicitly sent between stages using mechanisms like the artifacts setting.
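
    As a concrete illustration of that second mechanism (this fragment assumes the build_dev job from your pipeline and is not the full fix), the artifacts setting could list the generated lock file alongside the plan files, so the deploy job receives all three:

    build_dev:
      # ... rest of the job as in the question ...
      artifacts:
        paths:
          - ${TF_ROOT}/plan.cache
          - ${TF_ROOT}/plan.json
          - ${TF_ROOT}/.terraform.lock.hcl   # propagate the lock file too

    That said, committing the lock file (as described below) is the more robust option, because it also pins the provider versions between pipeline runs.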

    I think the problem here is that you haven’t committed your dependency lock file to version control and so when the "build_dev" stage runs terraform init it generates the dependency lock file only inline inside that job, as confirmed by the output you shared:

    $ terraform init ${PLAN_BACKEND_CONFIG}
    ...
    
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.
    

    Because files don't automatically propagate from build_dev to deploy_dev, the terraform apply command no longer has access to that lock file, and so it notices that the configuration is different from how it was when you created the saved plan. Terraform is correct to report that the lock file is inconsistent, although in this case it's inconsistent because it isn't present at all, which Terraform treats the same way as the lock file being totally empty and thus not tracking a version for either of the providers you are using.
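
    For reference, the lock file your build job generated would contain one entry per provider; a sketch based on the versions in your init output (hash lines elided) looks like:

    provider "registry.terraform.io/hashicorp/aws" {
      version     = "4.64.0"
      constraints = ">= 4.49.0"
      hashes = [
        # ... h1:/zh: hash lines ...
      ]
    }

    provider "registry.terraform.io/hashicorp/tls" {
      version = "4.0.4"
      hashes = [
        # ... h1:/zh: hash lines ...
      ]
    }

    With no such file present at apply time, Terraform has "no version ... selected" for either provider, which is exactly what the error message says.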

    To avoid this problem, you should run terraform init in your development environment and commit the generated .terraform.lock.hcl to your version control system, as the terraform init output suggests. If you do that then both terraform plan and terraform apply will see a consistent lock file, and so you won't see errors like these.
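
    A minimal sketch of that workflow, assuming you run it from the root of your repository on the machine where you already tested the deployment:

    # Generate (or refresh) the lock file locally
    terraform init

    # Commit it so every pipeline job sees the same provider selections
    git add .terraform.lock.hcl
    git commit -m "Add Terraform dependency lock file"
    git push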

    You can read more about how the dependency lock file works in its documentation: Dependency Lock File.

  2. Your local platform (presumably where you generated your lock file and committed it) and the GitLab runner platform don't match. To get around this, generate your lock file for all the platforms you may use.

    For example, to generate the lock file for most modern platforms:

    terraform providers lock \
      -platform=windows_amd64 \
      -platform=darwin_amd64 \
      -platform=linux_amd64 \
      -platform=darwin_arm64 \
      -platform=linux_arm64
    

    Then commit the updated lock file.

    You also want to make sure you're using remote state in your Terraform configuration, such as the GitLab-managed state over the HTTP backend, since each CI job's local workspace is thrown away when the job ends.
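
    The init log in the question already shows the http backend being configured; in the Terraform configuration this is typically just an empty backend block, a sketch of which is below, with all settings supplied at init time through the -backend-config flags the pipeline already passes:

    terraform {
      backend "http" {
        # address, lock_address, unlock_address, username, password, etc.
        # are provided via -backend-config on the terraform init command line
      }
    }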
