I’m trying to understand how nodeSelector and taints work together in Kubernetes. Specifically, I want to know whether using nodeSelector in a pod specification can override the taints on a node, allowing the pod to be scheduled on a tainted node. I’m not able to find anything in the official docs about this.
Here’s an example to illustrate my setup:
Node Configuration:
I have a node with a taint applied:
kubectl taint nodes example-node key1=value1:NoSchedule
kubectl label nodes example-node environment=production
Pod Configuration:
And I have a pod configuration with a nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    environment: production
2 Answers
A node selector affects a single pod template, instructing the scheduler to place it on specific nodes. Conversely, a NoSchedule taint affects all pods, directing the scheduler to keep them off that node unless they tolerate the taint.
A node selector is beneficial when the pod requires certain resources from the node, such as a GPU. In contrast, a node taint is advantageous when the node needs to be reserved for particular workloads, ensuring that only pods utilizing specific resources, like a GPU, are scheduled on that node.
Sometimes, using both a node selector and a taint together is advantageous. For instance, if you want only GPU-using pods on a node and need the pod requiring a GPU to be scheduled on a GPU node, you could taint the node with dedicated=gpu:NoSchedule and include both a taint toleration and a node selector in the pod template.
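As a sketch of what that combination looks like in a pod spec (the node label `hardware: gpu` is an assumed example; only the taint `dedicated=gpu:NoSchedule` comes from the text above):

```yaml
# Assumes the node was prepared with (hypothetical label):
#   kubectl taint nodes gpu-node dedicated=gpu:NoSchedule
#   kubectl label nodes gpu-node hardware=gpu
spec:
  # Toleration lets the pod past the taint...
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  # ...and the nodeSelector steers it onto the GPU node.
  nodeSelector:
    hardware: gpu
```

The toleration alone would only *allow* scheduling on the GPU node; the nodeSelector is what *requires* it.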
Finally, if the intersection of the node-selector and taint constraints is empty, the pod will not be scheduled. There is no precedence: scheduling requires satisfying the intersection of all criteria.
You have to keep the concepts of nodeSelector and taints separate.

Let’s think about what each one means:
Taint: a property on a node that repels pods; the scheduler will not place a pod on a tainted node unless the pod tolerates the taint.
Toleration: a property on a pod that allows (but does not require) the scheduler to place it on a node with a matching taint.
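A toleration matches a taint when the key, value (with the `Equal` operator), and effect line up. A minimal fragment, assuming a hypothetical taint `role=batch:NoSchedule` on the node:

```yaml
# Matches a node tainted with: role=batch:NoSchedule (hypothetical example)
tolerations:
- key: "role"
  operator: "Equal"   # "Exists" would match any value for this key
  value: "batch"
  effect: "NoSchedule"
```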
Scenarios:

1. The pod has a nodeSelector matching the tainted node, but no toleration for the taint.
Result: The pod cannot be scheduled.
Reason: The pod cannot ‘tolerate’ the node’s ‘taint’.

2. The pod has a toleration for the taint and a nodeSelector matching several nodes, including the tainted one.
Result: The pod will be scheduled on one of the nodes you selected with the nodeSelector, tolerating the taint.
Reason: The pod can ‘tolerate’ the taint and also meets the nodeSelector condition.

3. The pod has a toleration for the taint and a nodeSelector matching only the tainted node.
Result: The pod can be scheduled on the tainted node.
Reason: The pod can ‘tolerate’ the node’s taint and also satisfies the nodeSelector condition.
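Applied to the question’s setup (taint `key1=value1:NoSchedule`, label `environment=production` on `example-node`), the third scenario would mean adding a toleration to the original pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: nginx
    image: nginx
  # nodeSelector alone does NOT override the taint; without the
  # toleration below, this pod stays Pending.
  nodeSelector:
    environment: production
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```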
Also, read about the kube-scheduler:
https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
In any case, a pod can be scheduled only if all conditions (filtering, scoring) are satisfied.