
I have the following situation: I want to have a Terraform configuration in which I:

  1. Create an EKS cluster
  2. Install some manifests into it using the kubectl_manifest resource.

So in essence, I have the following configuration:

... all the IAM stuff needed for the cluster

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.50"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}

provider "aws" { }

provider "kubectl" {
    config_path = "~/.kube/config"
    config_context = aws_eks_cluster.this.arn # tried this also, with no luck
}


resource "aws_eks_cluster" "my-cluster" {
... the cluster configuration
}
... node group yada yada..

resource "null_resource" "update_kubeconfig" {
   provisioner "local-exec" {
      command = "aws eks update-kubeconfig --name my-cluster --region us-east-1
   }
}

resource "kubectl_manifest" "some_manifest" {
   yaml_body = <some_valid_yaml>

   depends_on = [null_resource.update_kubeconfig]
}

So my hope is that after null_resource.update_kubeconfig runs and updates ~/.kube/config (which it does; I checked), kubectl_manifest.some_manifest will pick up the newly updated configuration. But it doesn't.

I don't currently have the error message at hand, but essentially what happens is: it tries to communicate with a previously created cluster (I have old, no-longer-existing clusters in the kubeconfig) and then throws a "can't resolve DNS name" error for the old cluster.

So it seems that the kubeconfig file is loaded at the beginning of the run and isn't refreshed by the time the kubectl_manifest resource is created.

What is the right way to handle this?!

2 Answers


  1. The aws eks update-kubeconfig command will add the cluster context to your kubeconfig file, but it doesn't switch to that context for kubectl commands. You can solve this in a few different ways.

    1. Update your local-exec command to include ; kubectl config use-context my-cluster. This will update the global context after the file has been updated.
    2. Add a --kubeconfig ~/.kube/my-cluster option to the local-exec command. Any kubectl commands should then use this option, which keeps your cluster config in a separate file (sketched at the end of this answer).
    3. Update your kubectl commands to use --context my-cluster which will set the cluster context only for that command.
    4. Update your kubectl provider to use config_context with the value aws_eks_cluster.my-cluster.name. Contexts in the kubeconfig file are based on cluster names and not AWS ARNs (see the sketch below).

    Any of those options should work; it depends on how you like to manage your config.
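
    A rough sketch combining options 1 and 4, assuming the kubeconfig context is named after the cluster (for example via the --alias flag of aws eks update-kubeconfig); the resource and context names here just mirror the question:

    resource "null_resource" "update_kubeconfig" {
      provisioner "local-exec" {
        # Write the context under the cluster name and switch to it (options 1 and 4)
        command = "aws eks update-kubeconfig --name my-cluster --region us-east-1 --alias my-cluster; kubectl config use-context my-cluster"
      }

      depends_on = [aws_eks_cluster.my-cluster]
    }

    provider "kubectl" {
      config_path    = "~/.kube/config"
      config_context = aws_eks_cluster.my-cluster.name # the aliased context name
    }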

    Personally, I prefer to keep my clusters in separate config files because they’re easier to share and I export KUBECONFIG in different shells instead of changing contexts back and forth.
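
    A minimal sketch of that separate-file approach (option 2), assuming a dedicated path such as ~/.kube/my-cluster:

    resource "null_resource" "update_kubeconfig" {
      provisioner "local-exec" {
        # Write this cluster's credentials to its own kubeconfig file
        command = "aws eks update-kubeconfig --name my-cluster --region us-east-1 --kubeconfig ~/.kube/my-cluster"
      }

      depends_on = [aws_eks_cluster.my-cluster]
    }

    provider "kubectl" {
      # Point the provider at the dedicated file instead of the shared ~/.kube/config
      config_path = "~/.kube/my-cluster"
    }

    Running export KUBECONFIG=~/.kube/my-cluster in a shell gives plain kubectl the same view of that cluster.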

  2. Since your Terraform code itself creates the cluster, you can reference it directly in the provider configuration:

    data "aws_eks_cluster_auth" "main" {
      name = aws_eks_cluster.my-cluster.name
    }
    
    provider "kubernetes" {
      host = aws_eks_cluster.my-cluster.endpoint
    
      token                  = data.aws_eks_cluster_auth.main.token
      cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority.0.data)
    }
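
    The question uses the gavinbunney/kubectl provider rather than the kubernetes provider; as far as I know it accepts the same connection arguments, so a sketch of the equivalent configuration would be:

    provider "kubectl" {
      host                   = aws_eks_cluster.my-cluster.endpoint
      token                  = data.aws_eks_cluster_auth.main.token
      cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority[0].data)
      load_config_file       = false # don't read ~/.kube/config at all
    }

    This removes the dependency on the kubeconfig file (and on the null_resource) entirely.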
    