
I would like my EKS nodes to be able to host new pods only after a particular DaemonSet pod is up and running on them.
If a better approach is to schedule pods only after ALL DaemonSets are up, I am okay with that.

How should I approach it?

2 Answers


  1. Option 1 – Using an initContainer

    You can use an initContainer in your Deployment to check the status of the DaemonSet; until it is up, the Deployment's main container will not start.

    spec:
      initContainers:
      - name: wait-for-daemonset
        # busybox ships sh and wget, but not bash or curl
        image: busybox
        command:
        - sh
        - -c
        - >
          set -x;
          until wget -q --spider http://service:8080/; do
            echo 'waiting for the DaemonSet endpoint...';
            sleep 15;
          done
      containers:

    If the DaemonSet does not respond to an HTTP request you can probe, you can instead run kubectl inside the init container to check the DaemonSet's status directly, as sketched below.
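
    A minimal sketch of that kubectl-based check, which waits until at least one pod of the DaemonSet reports ready. It assumes a DaemonSet named my-daemonset in kube-system, the bitnami/kubectl image, and a ServiceAccount bound to a Role that allows get on DaemonSets; all of these names are placeholders for your own:

    spec:
      serviceAccountName: daemonset-reader   # placeholder; needs "get" on apps/daemonsets
      initContainers:
      - name: wait-for-daemonset
        image: bitnami/kubectl:latest
        command:
        - sh
        - -c
        - >
          until [ "$(kubectl -n kube-system get daemonset my-daemonset
          -o jsonpath='{.status.numberReady}')" -ge 1 ]; do
            echo 'waiting for DaemonSet pods to become ready...';
            sleep 15;
          done
      containers: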

    Option 2 – Using a Job

    You can also use a Job that checks the status of the DaemonSet and, once it is up and running, scales up the Deployment's pods or runs the command that creates the Deployment; see the sketch below.
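
    A minimal sketch of such a Job, assuming a DaemonSet named my-daemonset in kube-system, a Deployment named my-app that starts with zero replicas, and a ServiceAccount allowed to read DaemonSets and scale Deployments; all of these names are placeholders:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: wait-then-scale
    spec:
      template:
        spec:
          serviceAccountName: deploy-scaler   # placeholder; needs RBAC for the two kubectl calls
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - sh
            - -c
            - >
              kubectl -n kube-system rollout status daemonset/my-daemonset --timeout=10m &&
              kubectl scale deployment/my-app --replicas=3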

  2. You could use a taint on your nodes to prevent scheduling of any pods except the DaemonSet, which carries the corresponding toleration. A container in your DaemonSet (a sidecar or init container) could then call the Kubernetes API to remove the taint from the node, so that from that point on any pod can be scheduled on it. A sketch of the pieces follows below the link.

    https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
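
    A minimal sketch of the pieces involved, assuming a taint key example.com/node-not-ready applied to new nodes (for example via the node group configuration or the kubelet --register-with-taints flag) and a DaemonSet container with RBAC permission to patch nodes; the key and names are placeholders:

    # Toleration on the DaemonSet's pod spec so it can still land on the tainted node:
    tolerations:
    - key: example.com/node-not-ready
      operator: Exists
      effect: NoSchedule

    # Command the DaemonSet's sidecar or init container runs once its main work is done,
    # with NODE_NAME injected via the downward API (spec.nodeName); removing the taint
    # opens the node up for all other pods:
    kubectl taint nodes "$NODE_NAME" example.com/node-not-ready:NoSchedule-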
