

I am using Terraform with an AWS S3 backend to manage my infrastructure. I have two folders (folder1 and folder2), and I am trying to deploy resources in folder2 while keeping the resources created in folder1. Both folders are configured to use the same S3 backend (same bucket and state file).

However, when I run terraform apply in folder2, Terraform is attempting to destroy resources that were created in folder1. I have verified that both folders point to the same S3 bucket and use the same key for the state file.

Here are the steps I followed:

  1. Created resources in folder1, which successfully created the resources and stored the state in an S3 bucket.
  2. Configured folder2 with the same S3 backend configuration (same bucket, same key for the state file); a sketch of this setup is shown after these steps.
  3. Ran terraform init and terraform apply in folder2, expecting Terraform to deploy the new resources without deleting the existing ones from folder1.
  4. Instead, Terraform is attempting to delete the existing resources created in folder1.
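
For reference, the backend configuration in both folders looks roughly like this (the bucket name and region are placeholders):

    terraform {
      backend "s3" {
        bucket = "my-terraform-state" # same bucket in both folders
        key    = "terraform.tfstate"  # same key in both folders -- this is the problem
        region = "us-east-1"
      }
    }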

Running terraform show in folder2 lists all of the resources created from folder1.

Are there any steps I can take to make sure Terraform in folder2 recognizes the resources in the shared state and does not attempt to destroy them?

Any guidance on this issue would be greatly appreciated!

2 Answers


  1. "Both folders are configured to use the same S3 backend (same bucket and state file)."

    What you are trying to do is not possible; that's just not how Terraform works. Each "folder" of Terraform code needs its own state file. The reason folder2 is trying to delete all the resources created by folder1 is that it reads the shared state file and sees all of those resources, but cannot find their definitions anywhere in folder2's code. Terraform therefore concludes that you removed those resources from the configuration, which is what triggers it to destroy the actual resources.

    You either need to configure them to use different state files, or use Terraform Workspaces with a different workspace for each folder, which will also create separate state files automatically.
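
    For example, with the S3 backend, each workspace's state is stored under a separate key automatically. A minimal sketch, assuming placeholder bucket and region names:

    terraform {
      backend "s3" {
        bucket = "my-terraform-state"
        key    = "terraform.tfstate"
        region = "us-east-1"
        # each non-default workspace's state is stored automatically at
        # env:/<workspace-name>/terraform.tfstate in this bucket
      }
    }

    Then run terraform workspace new folder1 in one folder and terraform workspace new folder2 in the other (or terraform workspace select if the workspace already exists), and each folder gets its own state file in the same bucket.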

  2. The issue arises because Terraform is designed to manage the complete state of the infrastructure described by a single state file. When both folder1 and folder2 use the same backend configuration (same S3 bucket and key), each folder compares that entire shared state against only its own configuration files. Resources that appear in the state but are not defined in the current folder are treated as "orphaned" and are planned for destruction.
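
    (You can confirm this by running terraform state list in folder2: it lists everything in the shared state file, including folder1's resources, even though folder2's code does not define them.)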

    To resolve this issue, simply give each folder its own state key:

    backend "s3" {
      bucket = "same bucket"
      key    = "state/folder1/terraform.tfstate" # "state/folder2/terraform.tfstate"
      region = "same region"
    }
    

    Then, if the reason you wanted a shared state was to reuse values between the two projects, you can read one project's outputs from another with the terraform_remote_state data source, as in the example below:

    data "terraform_remote_state" "folder1" {
      backend = "s3"
      config = {
        bucket = "your-s3-bucket"
        key    = "state/folder1/terraform.tfstate"
        region = "your-region"
      }
    }
    
    resource "aws_other_resource" "example" {
      depends_on = [data.terraform_remote_state.folder1]
      some_property = data.terraform_remote_state.folder1.outputs.example_output
    }
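
    Note that outputs.example_output only works if folder1 actually declares that output. A minimal sketch of what folder1 would need (the referenced resource is hypothetical):

    output "example_output" {
      # expose a value other projects may need, e.g. a VPC id
      value = aws_vpc.main.id
    }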
    

    https://developer.hashicorp.com/terraform/language/state/remote-state-data

    So, in the end, you definitely need one state file per folder, but you can still read outputs across projects to share values.
