
I have code that I have been using for quite some time that calls a custom S3 module. Today I tried to run the same code and started getting an error regarding the provider.

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/s3: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see
which modules are currently depending on hashicorp/s3, run the following
command:
    terraform providers

Doing some digging, it seems that Terraform is looking for a provider named registry.terraform.io/hashicorp/s3, which doesn't exist.

So far, I have tried the following things:

  • Validated that the S3 resource code meets the standards of HashiCorp's 4.x provider upgrade this year. Plus, I have been using it for a couple of months with no issues.
  • Deleted the .terraform directory and reran terraform init (no success, same error).
  • Deleted the .terraform directory and the .terraform.lock.hcl file and ran terraform init -upgrade (no success).
  • Updated my providers file to try to force an upgrade (no success).
  • Changed the provider version constraint to >= the current version to pull the latest release (no success).

Reading further, the issue seemed to point to a caching problem with the Terraform modules. I tried running terraform providers lock and received this error:

Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in
order to calculate their checksums: some providers could not be
installed:
- registry.terraform.io/hashicorp/s3: provider registry
  registry.terraform.io does not have a provider named
  registry.terraform.io/hashicorp/s3.

I am kind of at my wits' end with what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:

version.tf

# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }

    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }

  required_version = ">= 1.2.0" # required Terraform version
}

S3 Module
I did not include locals, outputs, or variables, but let me know if you think you need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need for the provider issue; let me know if other files are needed.

resource "aws_s3_bucket" "buckets" {
  count         = length(var.bucket_names)
  bucket        = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
  force_destroy = var.bucket_destroy
  tags          = local.all_tags
}

# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
  count                   = length(var.bucket_names)
  bucket                  = aws_s3_bucket.buckets[count.index].id
  block_public_acls       = var.bucket_block_public_acls
  ignore_public_acls      = var.bucket_ignore_public_acls
  block_public_policy     = var.bucket_block_public_policy
  restrict_public_buckets = var.bucket_restrict_public_buckets
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  count  = length(var.bucket_names)
  bucket = aws_s3_bucket.buckets[count.index].id
  acl    = var.bucket_acl
}

resource "aws_s3_bucket_versioning" "bucket_versioning" {
  count  = length(var.bucket_names)
  bucket = aws_s3_bucket.buckets[count.index].id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
  count  = length(var.bucket_names)
  bucket = aws_s3_bucket.buckets[count.index].id
  rule {
    id     = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
    status = "Enabled"
    expiration {
      days = var.bucket_backup_expiration_days
    }

    transition {
      days          = var.bucket_backup_days
      storage_class = "GLACIER"
    }
  }
}

# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
  count  = length(var.bucket_names)
  bucket = aws_s3_bucket.buckets[count.index].id
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
      sse_algorithm     = var.bucket_sse
    }
  }
}
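
For context, a minimal root-module call to a module like this might look like the following. The source path and variable values here are placeholders, not my actual configuration:

module "s3_buckets" {
  source = "./modules/s3" # placeholder path

  bucket_names   = ["example-one", "example-two"] # placeholder names
  bucket_destroy = false
  bucket_acl     = "private"
  # ...remaining bucket_* variables omitted
}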

Looking for any other ideas I can use to fix this issue. Thank you!

2 Answers


  1. Chosen as BEST ANSWER

    My colleague and I finally found the problem. It turns out we had a data call to the S3 bucket. Nothing was wrong with the module itself, but the place where I was calling the module had a local.tf file where I was referencing S3 in a legacy format. See the change below:

    WAS

    data "s3_bucket" "MyResource" {} 
    

    TO

    data "aws_s3_bucket" "MyResource" {}
    

    Appreciate the responses from everyone. A resource was the root of the problem, but I forgot that a data source is also a resource to check.
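
    One note for anyone applying the same fix: the aws_s3_bucket data source also needs a bucket argument to identify which bucket to read, so the working block looks more like this (the bucket name is a placeholder):

    data "aws_s3_bucket" "MyResource" {
      bucket = "my-existing-bucket" # placeholder; this argument is required
    }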


  2. Although you haven’t included it in your question, I’m guessing that somewhere else in this Terraform module you have a block like this:

    resource "s3_bucket" "example" {
    
    }
    

    For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn’t belong to one of the providers in the module’s required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name — s3 in this case — to the local names chosen in the required_providers block.

    Given a resource block like the above, terraform init would notice that required_providers doesn’t have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).

    The correct name for this resource type is aws_s3_bucket, and so it’s important to include the aws_ prefix when you declare a resource of this type:

    resource "aws_s3_bucket" "example" {
    
    }
    

    This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
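
    To make that mapping concrete, here is a minimal sketch showing how a resource type's prefix resolves against the local names declared in required_providers (the bucket name is a placeholder):

    terraform {
      required_providers {
        # The key "aws" is the local name; resource types whose names
        # start with "aws_" are associated with this entry.
        aws = {
          source = "hashicorp/aws"
        }
      }
    }

    # "aws_s3_bucket" -> prefix "aws" -> local name "aws" -> hashicorp/aws
    resource "aws_s3_bucket" "example" {
      bucket = "example-bucket" # placeholder
    }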
