
I am facing an error while deploying a deployment in CircleCI. Please find the configuration file below.

When running the kubectl CLI, we get an error caused by an incompatibility between kubectl and the EKS tooling of the aws-cli.

version: 2.1
orbs:
  aws-ecr: circleci/[email protected]
  docker: circleci/[email protected]
  rollbar: rollbar/[email protected]
  kubernetes: circleci/[email protected]
  deploy:
    version: 2.1
    orbs:
      aws-eks: circleci/[email protected]
      kubernetes: circleci/[email protected]
    executors:
      default:
        description: |
          The version of the circleci/buildpack-deps Docker container to use
          when running commands.
        parameters:
          buildpack-tag:
            type: string
            default: buster
        docker:
          - image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
    description: |
      A collection of tools to deploy changes to AWS EKS in a declarative
      manner where all changes to templates are checked into version control
      before applying them to an EKS cluster.
    commands:
      setup:
        description: |
          Install the gettext-base package into the executor to be able to run
          envsubst for replacing values in template files.
          This command is a prerequisite for all other commands and should not
          have to be run manually.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          git-user-email:
            default: "[email protected]"
            description: Email of the git user to use for making commits
            type: string
          git-user-name:
            default: "CircleCI Deploy Orb"
            description:  Name of the git user to use for making commits
            type: string
        steps:
          - run:
              name: install gettext-base
              command: |
                if which envsubst > /dev/null; then
                  echo "envsubst is already installed"
                  exit 0
                fi
                sudo apt-get update
                sudo apt-get install -y gettext-base
          - run:
              name: Setup GitHub access
              command: |
                mkdir -p ~/.ssh
                echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
                git config --global user.email "<< parameters.git-user-email >>"
                git config --global user.name "<< parameters.git-user-name >>"
          - aws-eks/update-kubeconfig-with-authenticator:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              install-kubectl: true
              authenticator-release-tag: v0.5.1
      update-image:
        description: |
          Generates template files with the specified version tag for the image
          to be updated and subsequently applies that template after checking it
          back into version control.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          image-tag:
            default: ''
            description: |
              The tag of the image, defaults to the  value of `CIRCLE_SHA1`
              if not provided.
            type: string
          replicas:
            default: 3
            description: |
              The replica count for the deployment.
            type: integer
          environment:
            default: 'production'
            description: |
              The environment/stage where the template will be applied. Defaults
              to `production`.
            type: string
          template-file-path:
            default: ''
            description: |
              The path to the source template which contains the placeholders
              for the image-tag.
            type: string
          resource-name:
            default: ''
            description: |
              Resource name in the format TYPE/NAME e.g. deployment/nginx.
            type: string
          template-repository:
            default: ''
            description: |
              The fullpath to the repository where templates reside. Write
              access is required to commit generated templates.
            type: string
          template-folder:
            default: 'templates'
            description: |
              The name of the folder where the template-repository is cloned to.
            type: string
          placeholder-name:
            default: IMAGE_TAG
            description: |
              The name of the placeholder environment variable that is to be
              substituted with the image-tag parameter.
            type: string
          cluster-namespace:
            default: sayway
            description: |
              Namespace within the EKS Cluster.
            type: string
        steps:
          - setup:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              git-user-email: [email protected]
              git-user-name: deploy
          - run:
              name: pull template repository
              command: |
                [ "$(ls -A << parameters.template-folder >>)" ] && 
                  cd << parameters.template-folder >> && git pull --force && cd ..
                [ "$(ls -A << parameters.template-folder >>)" ] || 
                  git clone << parameters.template-repository >> << parameters.template-folder >>
          - run:
              name: generate and commit template files
              command: |
                cd << parameters.template-folder >>
                IMAGE_TAG="<< parameters.image-tag >>"
                ./bin/generate.sh --file << parameters.template-file-path >> \
                  --stage << parameters.environment >> \
                  --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  REPLICAS=<< parameters.replicas >>
          - kubernetes/create-or-update-resource:
              get-rollout-status: true
              namespace: << parameters.cluster-namespace >>
              resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
              resource-name: << parameters.resource-name >>
jobs:
  test:
    working_directory: ~/say-way/core
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      KONFIG_CITUS__HOST: localhost
      KONFIG_CITUS__USER: postgres
      KONFIG_CITUS__DATABASE: sayway_test
      KONFIG_CITUS__PASSWORD: ""
      KONFIG_SPEC_REPORTER: true
    docker:
    - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
      aws_auth:
        aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
        aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
    - image: circleci/redis
    - image: rabbitmq:3.7.7
    - image: circleci/mongo:4.2
    - image: circleci/postgres:10.5-alpine
    steps:
    - checkout
    - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
    # This is based on your 1.0 configuration file or project settings
    - restore_cache:
        keys:
        - v1-dep-{{ checksum "Gemfile.lock" }}-
        # any recent Gemfile.lock
        - v1-dep-
    - run:
        name: install correct bundler version
        command: |
          export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
          echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
          gem install bundler --version $BUNDLER_VERSION
    - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
    - run:
        name: copy test.yml.sample to test.yml
        command: cp config/test.yml.sample config/test.yml
    - run:
        name: Precompile and clean assets
        command: bundle exec rake assets:precompile assets:clean
    # Save dependency cache
    - save_cache:
        key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
        paths:
        - vendor/bundle
        - public/assets
    - run:
        name: Audit bundle for known security vulnerabilities
        command: bundle exec bundle-audit check --update
    - run:
        name: Setup Database
        command: bundle exec ruby ~/sayway/setup_test_db.rb
    - run:
        name: Migrate Database
        command: bundle exec rake db:citus:migrate
    - run:
        name: Run tests
        command: bundle exec rails test -f
    # By default, running "rails test" won't run system tests.
    - run:
        name: Run system tests
        command: bundle exec rails test:system
    # Save test results
    - store_test_results:
        path: /tmp/circleci-test-results
    # Save artifacts
    - store_artifacts:
        path: /tmp/circleci-artifacts
    - store_artifacts:
        path: /tmp/circleci-test-results
  build-and-push-image:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: aws-ecr/default
    steps:
      - checkout
      - run:
          name: Pull latest core images for cache
          command: |
            $(aws ecr get-login --no-include-email --region $AWS_REGION)
            docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - docker/build:
          image: core
          registry: "${AWS_ECR_ACCOUNT_URL}"
          tag: "latest,${CIRCLE_SHA1}"
          cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - aws-ecr/push-image:
          repo: core
          tag: "latest,${CIRCLE_SHA1}"
  deploy-production:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: report
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 3
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 4
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
  deploy-demo:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: demo
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 2
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
workflows:
  version: 2.1
  build-n-test:
    jobs:
      - test:
          filters:
            branches:
              ignore: master
  build-approve-deploy:
    jobs:
      - build-and-push-image:
          context: Core
          filters:
            branches:
              only: master
      - approve-report-deploy:
          type: approval
          requires:
            - build-and-push-image
      - approve-demo-deploy:
          type: approval
          requires:
            - build-and-push-image
      - deploy-production:
          context: Core
          requires:
            - approve-report-deploy
      - deploy-demo:
          context: Core
          requires:
            - approve-demo-deploy

Answers


  1. There is a problem with the latest kubectl and the aws-cli:
    https://github.com/aws/aws-cli/issues/6920
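
    For context, the incompatibility shows up in the exec credential block that aws eks update-kubeconfig writes into ~/.kube/config: older aws-cli releases emit client.authentication.k8s.io/v1alpha1 there, and kubectl 1.24+ no longer accepts that version. A typical affected stanza looks roughly like this (a sketch only; the cluster ARN and arguments are placeholders):

    users:
    - name: arn:aws:eks:eu-central-1:<ACCOUNT_ID>:cluster/<CLUSTER_NAME>
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1   # rejected by kubectl 1.24+
          command: aws
          args:
            - eks
            - get-token
            - --cluster-name
            - <CLUSTER_NAME>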

  2. We have a fix here: https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885

    Update the aws-cli (aws cli v1) to the version with the fix:

    pip3 install awscli --upgrade --user
    

    For aws cli v2 see this.
    After that, don’t forget to rewrite the kube-config with:

    aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
    

    This command should update the kube apiVersion to v1beta1.
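
    A quick way to confirm that the upgrade and the regenerated config took effect (a sketch; the grep simply inspects the generated file):

    aws --version
    grep "client.authentication.k8s.io" ~/.kube/config   # should now show .../v1beta1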

  3. On Windows, first delete the configuration file in the $HOME/.kube folder.

    Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.

  4. An alternative is to update the AWS cli. It worked for me.

    The rest of the instructions are from the answer provided by bigLucas.

    Update the aws-cli (aws cli v2) to the latest version:

    winget install Amazon.AWSCLI
    

    After that, don’t forget to rewrite the kube-config with:

    aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
    

    This command should update the kube apiVersion to v1beta1.

  5. There is an issue in the aws-cli. It has already been fixed.


    Option 1:

    In my case, updating aws-cli + updating the ~/.kube/config helped.

    1. Update aws-cli (following the documentation)
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install --update
    
    2. Update the kube configuration
    mv ~/.kube/config ~/.kube/config.bk
    aws eks update-kubeconfig --region ${AWS_REGION}  --name ${EKS_CLUSTER_NAME}
    

    Option 2:

    Change v1alpha1 to v1beta1:

    diff ~/.kube/config ~/.kube/config-backup
    691c691
    <             apiVersion: client.authentication.k8s.io/v1beta1
    ---
    >             apiVersion: client.authentication.k8s.io/v1alpha1
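
    If you prefer making the replacement from the command line, a one-liner along these lines should work (a sketch; it edits ~/.kube/config in place and keeps a .bak copy):

    sed -i.bak 's#client\.authentication\.k8s\.io/v1alpha1#client.authentication.k8s.io/v1beta1#g' ~/.kube/config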
    
  6. I just simplified the workaround by updating awscli to awscli v2, but that also requires Python and pip to be upgraded; it requires a minimum of Python 3.6 and pip3.

    apt install python3-pip -y && pip3 install awscli --upgrade --user
    

    Then update the cluster configuration with awscli:

    aws eks update-kubeconfig --region <regionname> --name <ClusterName>
    

    Output

    Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config
    

    Then check connectivity with the cluster:

    dev@ip-10-100-100-6:~$ kubectl get node
    NAME                             STATUS   ROLES    AGE    VERSION
    ip-X-XX-XX-XXX.ec2.internal   Ready    <none>   148m   v1.21.5-eks-9017834
    
  7. Using kubectl 1.21.9 fixed it for me, with asdf:

    asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
    asdf install kubectl 1.21.9
    

    And I would recommend having a .tool-versions file with:

    kubectl 1.21.9
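
    In a CircleCI setup like the one in the question, the same idea can be applied by pinning the version passed to the kubernetes orb's install-kubectl step (a sketch based on the deploy jobs shown above):

    - kubernetes/install-kubectl:
        kubectl-version: v1.21.9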
    
  8. There is a glitch with the very latest version of kubectl.
    For now, you can follow these steps to work around the issue:

    1. curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
    2. chmod +x ./kubectl
    3. sudo mv ./kubectl /usr/local/bin/kubectl
    4. sudo kubectl version
  9. Try updating your awscli (AWS Command Line Interface) version.

    For Mac, it’s brew upgrade awscli (Homebrew).

  10. Try upgrading the AWS Command Line Interface:

    Steps

    1. curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    2. sudo installer -pkg ./AWSCLIV2.pkg -target /

    You can use other ways from the AWS documentation: Installing or updating the latest version of the AWS CLI

  11. In my case, changing apiVersion to v1beta1 in the kube configuration file helped:

    apiVersion: client.authentication.k8s.io/v1beta1
    
  12. I was able to fix this on a MacBook Pro (M1 chip) by running the following Homebrew command:

    brew upgrade awscli
    
  13. Fixed for me by only changing v1alpha1 to v1beta1 in the kubeconfig.

  14. I changed the v1alpha1 value to v1beta1 in the configuration file, and it's working for me:

    1. Open ~/.kube/config
    2. Search for the user within the cluster you have a problem with and replace the client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1
  15. You can run the command below on the host machine where kubectl and the aws-cli exist:

    export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'

    If using sudo while running kubectl commands, then export this as the root user.
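
    Alternatively, the variable can be carried through sudo without switching to a root shell (a sketch, assuming your sudo policy allows preserving environment variables):

    export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'
    sudo --preserve-env=KUBERNETES_EXEC_INFO kubectl get nodes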

  16. apt install python3-pip -y
    pip3 install awscli --upgrade --user
    
  17. The simplest solution (it already appears above, in more complicated words):

    Open your kube config file and replace all alpha instances with beta.
    (Editors with find & replace are recommended: Atom, Sublime, etc.)

    Example with Nano:

    nano  ~/.kube/config
    

    Or with Atom:

    atom ~/.kube/config
    

    Then search for the alpha instances, replace them with beta, and save the file.

  18. I got the same problem:
    EKS version: 1.22
    kubectl works, and its version is v1.22.15-eks-fb459a0
    helm version is 3.9+; when I execute helm ls -n $namespace I get the error

    Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
    

    From here, it is a helm version issue, so I used the command

    curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2

    to downgrade the helm version. Helm works now.
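
    To confirm the downgrade before retrying (a sketch, reusing the same namespace variable as above):

    helm version --short
    helm ls -n $namespace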

  19. Try a different version of kubectl.
    If the Kubernetes version is 1.23, then use a nearby kubectl version: 1.22, 1.23, or 1.24.
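
    To see which server version you need to match, one option is to query EKS directly (a sketch; cluster name and region are placeholders):

    aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION> --query 'cluster.version' --output text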

  20. I was facing the same issue. For the solution, please follow the steps below:

    1. Take a backup of the existing config file: mv ~/.kube/config ~/.kube/config.bk

    2. Run the command below:

    aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
    
    3. Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and try again.