
I created an EKS cluster named “prod”. After working on this “prod” cluster I deleted it, along with all of its associated VPC, network interfaces, security groups, and so on. But when I try to create an EKS cluster with the same name “prod”, I get the error below. Can you please help me with this issue?

[centos@ip-172-31-23-128 ~]$ eksctl create cluster --name prod --region us-east-2
[ℹ] eksctl version 0.13.0
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
[ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-1902b9c1" will use "ami-080fbb09ee2d4d3fa" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "prod" in "us-east-2" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=prod'
[ℹ] CloudWatch logging will not be enabled for cluster "prod" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=prod'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-east-2"
[ℹ] 2 sequential tasks: { create cluster control plane "prod", create nodegroup "ng-1902b9c1" }
[ℹ] building cluster stack "eksctl-prod-cluster"
[ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-2 --name=prod'
[✖] creating CloudFormation stack "eksctl-prod-cluster": AlreadyExistsException: Stack [eksctl-prod-cluster] already exists
    status code: 400, request id: 49258141-e03a-42af-ba8a-3fef9176063e
Error: failed to create cluster "prod"

2 Answers


  1. There are two things to consider here.

    1. The delete command does not wait for all the resources to actually be gone. You should add the --wait flag in order to let it finish. It usually takes around 10-15 minutes.
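
    For example, a minimal sketch using the cluster name and region from the question above:

      eksctl delete cluster --name prod --region us-east-2 --wait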

    2. If that is still not enough, you should make sure that you delete the leftover CloudFormation stacks. It would look something like this (adjust the naming):

      # delete the cluster:
      # first find and delete the leftover CloudFormation stacks
      aws cloudformation list-stacks --query "StackSummaries[].StackName"
      aws cloudformation delete-stack --stack-name worker-node-stack
      # then delete the EKS cluster itself
      aws eks delete-cluster --name EKStestcluster
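
    If you want the shell to block until a stack is actually gone before recreating the cluster, the CLI's built-in CloudFormation waiter can help – a sketch, assuming the stack name from the error above:

      aws cloudformation wait stack-delete-complete --stack-name eksctl-prod-cluster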

    Please let me know if that helped.

  2. I was struggling with this error while running EKS via Terraform – I’ll share my solution; hopefully it will save others some valuable time.

    I tried to follow the references below, but got the same result.

    I also tried to set different timeouts for delete and create – that still didn’t help.

    Finally, I was able to resolve this when I changed the create_before_destroy value inside the lifecycle block to false:

      lifecycle {
        # destroy the old resource before creating its replacement
        create_before_destroy = false
      }
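
    For context, a minimal sketch of where that lifecycle block can sit – the resource names, ARNs, and subnet IDs here are illustrative placeholders, not taken from the original setup:

      resource "aws_eks_node_group" "example" {
        cluster_name    = "prod"                                      # cluster from the question
        node_group_name = "example"                                   # hypothetical name
        node_role_arn   = "arn:aws:iam::123456789012:role/example"    # placeholder
        subnet_ids      = ["subnet-aaaa1111", "subnet-bbbb2222"]      # placeholders

        scaling_config {
          desired_size = 2
          max_size     = 3
          min_size     = 1
        }

        lifecycle {
          # destroy the old node group before creating the replacement,
          # avoiding the name collision that create_before_destroy = true causes
          create_before_destroy = false
        }
      }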
    

    (*) Note – pods keep running on the cluster during the update.


    References:

    Non-default node_group name breaks node group version upgrade

    Changing tags causes node groups to be replaced
