I have two environments: ‘dev’ and ‘prod’. I want to create an AWS S3 bucket for each one with a different lifecycle configuration: prevent_destroy = false for ‘dev’ and prevent_destroy = true for ‘prod’. Usually I would use variables, but Terraform does not allow variables inside a lifecycle block, so I got a bit stuck. Please help!
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-state-vpc-${var.env}"
lifecycle {
prevent_destroy = true
}
tags = {
Name = "s3-state-file-${var.env}"
}
}
The only workaround I could come up with is this. In the same file, create:
resource "aws_s3_bucket" "terraform_state" {
count = var.env == "prod" ? 1 : 0 # Create only if environment is 'prod'
bucket = "terraform-state-vpc-${var.env}"
lifecycle {
prevent_destroy = true
}
tags = {
Name = "s3-state-file-${var.env}"
}
}
and
resource "aws_s3_bucket" "terraform_state" {
count = var.env == "dev" ? 1 : 0 # Create only if environment is 'dev'
bucket = "terraform-state-vpc-${var.env}"
lifecycle {
prevent_destroy = false
}
tags = {
Name = "s3-state-file-${var.env}"
}
}
But I can’t create this second resource for the ‘dev’ environment, because I get an error: ‘Error: Duplicate resource "aws_s3_bucket" configuration’.
Changing the resource name "terraform_state" would solve it, but I need to use the same name.
2 Answers
It seems that there is no support for variables in prevent_destroy yet, even though this feature was requested a long time ago (August 2019).

Regarding resource addressing: <RESOURCE TYPE>.<NAME> represents a managed resource of the given type and name. Names must obviously be unique; otherwise, when using aws_s3_bucket.terraform_state, how is Terraform supposed to know whether you’re referencing the dev or the prod resource? You’ll have to use different names for each resource, such as terraform_state_prod and terraform_state_dev.
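A sketch of what that could look like, carrying over the count conditions and naming from the question:

resource "aws_s3_bucket" "terraform_state_prod" {
  count  = var.env == "prod" ? 1 : 0 # Create only if environment is 'prod'
  bucket = "terraform-state-vpc-${var.env}"

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name = "s3-state-file-${var.env}"
  }
}

resource "aws_s3_bucket" "terraform_state_dev" {
  count  = var.env == "dev" ? 1 : 0 # Create only if environment is 'dev'
  bucket = "terraform-state-vpc-${var.env}"

  lifecycle {
    prevent_destroy = false
  }

  tags = {
    Name = "s3-state-file-${var.env}"
  }
}

References elsewhere in the configuration then have to use the new addresses, and because of count they are indexed, e.g. aws_s3_bucket.terraform_state_prod[0].id.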
The prevent_destroy feature in Terraform is quite dubious in that it’s only enforced inside Terraform itself, and only if the prevent_destroy = true argument is present in the configuration at the time the destroy is being planned. Terraform continues to support it for backward compatibility, but there are better ways to protect a valuable object from being destroyed, so I would not recommend using it in new Terraform modules.

For S3 buckets in particular, one option is to do nothing special at all: the Amazon S3 API won’t let you delete a bucket that has objects in it anyway, so you’d only be able to delete either of these buckets by first deleting all of the objects from them. Amazon S3 therefore has a form of bucket deletion protection built in, without you needing any special configuration in Terraform.
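In that case the resource from the question could simply omit the lifecycle block entirely. A minimal sketch (note that the AWS provider’s force_destroy argument defaults to false, so Terraform won’t empty the bucket on destroy either):

resource "aws_s3_bucket" "terraform_state" {
  # No lifecycle/prevent_destroy block: S3 itself refuses to delete
  # a bucket that still contains objects.
  bucket = "terraform-state-vpc-${var.env}"

  tags = {
    Name = "s3-state-file-${var.env}"
  }
}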
If that built-in protection is not sufficient for some reason, then by far the most robust option is to use an IAM policy that’s managed in a different Terraform configuration (so that changes to this configuration can’t accidentally destroy or disable it) and that denies access to the s3:DeleteBucket action. The S3 API will then reject any attempt to delete that bucket, regardless of whether it has any objects in it. Deletion protection enforced by the remote API is far superior to deletion protection enforced client-side, because it’s harder to accidentally disable or skip.
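As a sketch of that idea (using a resource-based bucket policy rather than an identity-based IAM policy, so the deny applies to every principal; the resource label deny_delete is hypothetical, the bucket name follows the question’s naming pattern, and this would live in a separate Terraform configuration):

resource "aws_s3_bucket_policy" "deny_delete" {
  bucket = "terraform-state-vpc-prod" # assumed name, per the question's pattern

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Deny bucket deletion for every principal; the S3 API will
        # reject DeleteBucket calls regardless of who makes them.
        Sid       = "DenyBucketDeletion"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:DeleteBucket"
        Resource  = "arn:aws:s3:::terraform-state-vpc-prod"
      }
    ]
  })
}

With this in place, even terraform destroy run against the bucket’s own configuration fails at the API until the deny statement is removed from the other configuration first.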