
After submitting a few jobs (say, 50) targeted at a single node, I am getting pod status "OutOfpods" for a few of them. I have reduced the maximum number of pods on this worker node to 10, but I still observe the issue.
The kubelet configuration is otherwise the default, with no changes.
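For reference, the per-node pod limit mentioned above can be lowered via the kubelet's `maxPods` setting. A minimal sketch of a KubeletConfiguration with the limit of 10 described here (the file path is an assumption; it is whatever is passed to the kubelet's `--config` flag):

```yaml
# /var/lib/kubelet/config.yaml (path assumed; passed to kubelet via --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10   # cap this node at 10 pods, matching the capacity in the event below
```

After changing this file the kubelet must be restarted for the new limit to take effect.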

Kubernetes version: v1.22.1

  • Worker Node

OS: CentOS 7.9
Memory: 528 GB
CPU: 40 cores

kubectl describe pod:

Warning OutOfpods 72s kubelet Node didn’t have enough resource: pods, requested: 1, used: 10, capacity: 10

2 Answers


  1. Chosen as BEST ANSWER

    I have realized this is a known issue in kubelet v1.22, as confirmed here. The fix will land in the next release.

    A simple workaround in the meantime is to downgrade Kubernetes to v1.21.


  2. I’m seeing this problem as well with K8s v1.22. I’m scheduling around 100 containers on one node with an extended resource called "executors" and a capacity of 300 per node; each container requests 10. The pods stay Pending for a long time, but as soon as the scheduler assigns them, the kubelet on the node reports that it is out of the resource. It is just a warning, I suppose, but it actually puts the pod into a "Failed" status, at least briefly. I still have to check whether it is re-created as Pending or not.

      Normal   Scheduled               40m   default-scheduler  Successfully assigned ci-smoke/userbench-4a306d7-l1all-8zv7n-3803535768 to sb-bld-srv-39
      Warning  OutOfwdc.com/executors  40m   kubelet            Node didn't have enough resource:wdc.com/executors, requested: 10, used: 300, capacity: 300
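    For context, extended resources such as `wdc.com/executors` above are requested through a container's resource limits. A minimal sketch of a pod spec making the request of 10 described here (the pod name and image are hypothetical, not from the original post):

    ```yaml
    # Hypothetical pod requesting 10 units of the node's extended resource.
    apiVersion: v1
    kind: Pod
    metadata:
      name: executor-demo        # name is an assumption for illustration
    spec:
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sleep", "3600"]
          resources:
            limits:
              wdc.com/executors: 10   # for extended resources, the limit implies the request
    ```

    The scheduler only admits such a pod while the node's remaining extended-resource capacity covers the limit, so 30 of these pods would saturate the 300-unit capacity shown in the event above.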
    