I would like my EKS nodes to be able to host new pods only after a particular DaemonSet pod is up and running on them.
If it is easier to schedule pods only after ALL DaemonSets are up, I am okay with that too.
How should I approach this?
2 Answers
Option 1: Using an initContainer
You can add an initContainer to your Deployment that checks whether the DaemonSet pod is up; until that check succeeds, the main container of your Deployment won't start.
The initContainer can use kubectl to query the DaemonSet's status, or it can keep hitting an HTTP endpoint exposed by the DaemonSet pod and wait until it responds. A sketch follows below.
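A minimal sketch of the HTTP-probe variant, assuming the DaemonSet pod runs with host networking and exposes a health endpoint on port 8080 of the node; the names, port, and path are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical Deployment name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        # Blocks the main container until the DaemonSet pod on this node
        # answers the probe; the endpoint and port are assumptions.
        - name: wait-for-daemonset
          image: curlimages/curl:8.8.0
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP   # node IP via the downward API
          command:
            - sh
            - -c
            - |
              until curl -sf "http://${HOST_IP}:8080/healthz"; do
                echo "waiting for the DaemonSet pod on this node..."
                sleep 5
              done
      containers:
        - name: my-app
          image: nginx:1.27
```

With this in place every replica of the Deployment waits for the DaemonSet pod on its own node, which matches the per-node requirement in the question.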
Option 2: Using a Job
You can use a Job to check the status of the DaemonSet; once its pods are up and running, the Job can scale up the Deployment's pods or run a command to create the Deployment, as in the sketch below.
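A rough sketch of this approach; the DaemonSet and Deployment names, namespaces, and the ServiceAccount with RBAC permissions to read DaemonSet status and scale Deployments are all assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wait-then-scale             # hypothetical Job name
spec:
  template:
    spec:
      serviceAccountName: deploy-controller   # assumed SA with the needed RBAC
      restartPolicy: OnFailure
      containers:
        - name: kubectl
          image: bitnami/kubectl:1.30
          command:
            - sh
            - -c
            - |
              # Wait until every DaemonSet pod is ready, then scale up the app.
              kubectl rollout status daemonset/my-daemonset -n kube-system --timeout=10m
              kubectl scale deployment/my-app -n default --replicas=3
```

Note that this gates the Deployment on the DaemonSet being ready cluster-wide, not per node, so it is closer to the "schedule pods only after ALL daemonsets are up" variant of the question.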
You could use a taint on your nodes to prevent scheduling of any pods except the DaemonSet, which has to carry the corresponding toleration. A container in your DaemonSet (a sidecar or initContainer) can then call the Kubernetes API to remove the taint from the node, so that from that point on any pod can be scheduled on it. A sketch follows after the link.
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
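A rough sketch of the taint approach; the taint key, the DaemonSet name, the image, and the ServiceAccount allowed to patch Node objects are all assumptions:

```yaml
# Taint applied to each node at registration, e.g. via kubelet
# --register-with-taints or the taint settings of an EKS managed node group:
#   node.example.com/daemonset-not-ready=true:NoSchedule

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup                      # hypothetical DaemonSet name
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      serviceAccountName: node-setup    # assumed SA allowed to patch Node objects
      tolerations:
        # Only this DaemonSet tolerates the startup taint,
        # so nothing else is scheduled on the node yet.
        - key: node.example.com/daemonset-not-ready
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: my-registry/node-agent:latest   # placeholder image
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      # Once the agent is ready, it (or a sidecar) removes the taint, e.g.:
      #   kubectl taint nodes "$NODE_NAME" node.example.com/daemonset-not-ready:NoSchedule-
```

Of the three options, this is the only one that blocks scheduling at the node level for all workloads, rather than making each Deployment opt in with its own check.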