I have the following situation: I want to have a Terraform configuration in which I:
- Create an EKS cluster
- Install some manifests into it using the `kubectl_manifest` resource.

So in essence, I have the following configuration:
```hcl
# ... all the IAM stuff needed for the cluster

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.50"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}

provider "aws" {}

provider "kubectl" {
  config_path    = "~/.kube/config"
  config_context = aws_eks_cluster.my-cluster.arn # tried this also, with no luck
}

resource "aws_eks_cluster" "my-cluster" {
  # ... the cluster configuration
}

# ... node group yada yada ...

resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name my-cluster --region us-east-1"
  }
}

resource "kubectl_manifest" "some_manifest" {
  yaml_body  = <some_valid_yaml>
  depends_on = [null_resource.update_kubeconfig]
}
```
So my hope is that after `null_resource.update_kubeconfig` runs and updates the `.kube/config` file (which it does; I checked), `kubectl_manifest.some_manifest` will pick up on it and use the newly updated configuration. But it doesn't.

I don't currently have the error message in hand, but essentially what happens is: it tries to communicate with a previously created cluster (I have previous, now non-existing, clusters in the kubeconfig). Then it throws a "can't resolve DNS name" error for the old cluster.

So it seems that the kubeconfig file is loaded somewhere at the beginning of the run, and it isn't refreshed when the `kubectl_manifest` resource is being created.

What is the right way to handle this?!
2 Answers
The `aws eks update-kubeconfig` command will add the cluster context to your kubeconfig file, but it doesn't switch to that context for `kubectl` commands. You can solve this a few different ways:

- Run `kubectl config use-context my-cluster`. This will update the global context after the file has been updated.
- Add the `--kubeconfig ~/.kube/my-cluster` option to the local-exec command. Any `kubectl` commands should then use this option, which keeps your cluster config in a separate file.
- Have `kubectl` commands use `--context my-cluster`, which will set the cluster context only for that command.

Any of those options should work, and it depends on how you like to manage your config.
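As a sketch of the separate-kubeconfig approach applied to the configuration in the question (the `~/.kube/my-cluster` path is just an example name, not something prescribed by the tooling):

```hcl
resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    # Write this cluster's context to its own file instead of the
    # shared ~/.kube/config, so stale contexts can't be picked up.
    command = "aws eks update-kubeconfig --name my-cluster --region us-east-1 --kubeconfig ~/.kube/my-cluster"
  }
}

provider "kubectl" {
  # Point the provider at the dedicated file for this cluster.
  config_path = "~/.kube/my-cluster"
}
```

Note that this still relies on the file existing before the provider reads it, so it only sidesteps the stale-context symptom, not the ordering problem itself.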
Personally, I prefer to keep my clusters in separate config files because they're easier to share, and I export `KUBECONFIG` in different shells instead of changing contexts back and forth.

Since your Terraform code itself is creating the cluster, you can refer to that in the provider configuration,
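The second answer is cut off, but it presumably leads into configuring the provider directly from the cluster resource's outputs, avoiding the kubeconfig file entirely. A minimal sketch, assuming the resource names from the question (the attribute names come from the `gavinbunney/kubectl` provider and the AWS provider's `aws_eks_cluster_auth` data source):

```hcl
# Obtain a short-lived authentication token for the cluster created above.
data "aws_eks_cluster_auth" "my-cluster" {
  name = aws_eks_cluster.my-cluster.name
}

provider "kubectl" {
  host                   = aws_eks_cluster.my-cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.my-cluster.token
  load_config_file       = false # never read ~/.kube/config, so stale contexts are irrelevant
}
```

Because the provider now depends on the cluster's own attributes rather than a file written as a side effect, the `null_resource`/`local-exec` step and its ordering problem go away.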