Upon submitting a few jobs (say, 50) targeted at a single node, I am getting the pod status "OutOfpods" for some of them. I have reduced the maximum number of pods on this worker node to 10, but I still observe the issue above.
Apart from the pod limit, the kubelet configuration is the default, with no changes.
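For reference, the per-node pod limit is typically lowered via the maxPods field of the KubeletConfiguration; a minimal sketch, assuming the kubelet reads its config from the common default path /var/lib/kubelet/config.yaml (only the relevant field is shown):

```yaml
# /var/lib/kubelet/config.yaml (path is an assumption; check the kubelet's --config flag)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10   # per-node pod limit; all other fields left at their defaults
```

After editing the file, the kubelet has to be restarted (e.g. systemctl restart kubelet) for the new limit to take effect.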
Kubernetes version: v1.22.1
- Worker node
  - OS: CentOS 7.9
  - Memory: 528 GB
  - CPU: 40 cores
kubectl describe pod shows the following event:
Warning  OutOfpods  72s  kubelet  Node didn't have enough resource: pods, requested: 1, used: 10, capacity: 10
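For anyone debugging the same thing, a couple of hedged diagnostic commands (the node name is a placeholder) to compare the node's pod capacity against what is actually assigned to it:

```shell
# Show the node's "pods" capacity and allocatable (placeholder node name).
kubectl describe node <worker-node-name> | grep -A 6 -E "Capacity|Allocatable"

# List the pods bound to that node across all namespaces, to compare
# against the "used" count reported in the OutOfpods event.
kubectl get pods --all-namespaces --field-selector spec.nodeName=<worker-node-name>
```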
2 Answers
I have realized this is a known issue in kubelet v1.22, as confirmed here. The fix will be included in an upcoming release.
A simple workaround for now is to downgrade Kubernetes to v1.21.
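In case it helps, a rough sketch of pinning the node packages back to a v1.21 patch release on CentOS via yum; the exact patch version shown (1.21.5-0) is an assumption, and the control plane should be handled first per the official kubeadm upgrade/downgrade docs:

```shell
# Assumes the standard Kubernetes yum repo is configured on the node.
# Replace 1.21.5-0 with whichever v1.21.x patch release you target.
yum install -y kubelet-1.21.5-0 kubeadm-1.21.5-0 kubectl-1.21.5-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet
```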
I'm seeing this problem as well with K8s v1.22. I'm scheduling around 100 containers on one node that advertises an extended resource called "executors" with a capacity of 300 per node; each container requests 10 of them. The pods stay Pending for a long time, but as soon as the scheduler assigns them, the kubelet on the node reports that it is out of the resource. It's just a warning, I suppose, but it actually leads to a "Failed" status on the pod, at least briefly. I still have to check whether the pod is re-created as Pending or not.
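For anyone trying to reproduce this, the containers request the extended resource roughly as in the sketch below; the resource name prefix (example.com/) and the pod/container names are placeholders rather than the real ones from my cluster:

```yaml
# Sketch of a pod requesting an extended resource. The resource must already
# be advertised in the node's status (capacity/allocatable) for the scheduler
# to place the pod; requests and limits for extended resources must match.
apiVersion: v1
kind: Pod
metadata:
  name: executor-job-1
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        example.com/executors: "10"
      limits:
        example.com/executors: "10"
```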