
I’m trying to install a Kubernetes cluster using this tutorial:

https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/

But when I set up the master node and run kubectl get pods -n kube-system, I get:

kubernetes@kubernetes1:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-555bc4b957-kv6zz   0/1     Pending   0          5m38s
calico-node-kzfqn                          1/1     Running   0          5m38s
coredns-6d4b75cb6d-lwdgx                   1/1     Running   0          6m44s
coredns-6d4b75cb6d-mrkqj                   1/1     Running   0          6m45s
etcd-kubernetes1                           1/1     Running   0          6m50s
kube-apiserver-kubernetes1                 1/1     Running   0          6m50s
kube-controller-manager-kubernetes1        1/1     Running   0          6m52s
kube-proxy-hqgxj                           1/1     Running   0          6m45s
kube-scheduler-kubernetes1                 1/1     Running   0          6m50s

events:

kubernetes@kubernetes1:~$ kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT             MESSAGE
7m17s       Normal    NodeHasSufficientMemory   node/kubernetes1   Node kubernetes1 status is now: NodeHasSufficientMemory
7m17s       Normal    NodeHasNoDiskPressure     node/kubernetes1   Node kubernetes1 status is now: NodeHasNoDiskPressure
7m17s       Normal    NodeHasSufficientPID      node/kubernetes1   Node kubernetes1 status is now: NodeHasSufficientPID
7m7s        Normal    Starting                  node/kubernetes1   Starting kubelet.
7m7s        Warning   InvalidDiskCapacity       node/kubernetes1   invalid capacity 0 on image filesystem
7m7s        Normal    NodeAllocatableEnforced   node/kubernetes1   Updated Node Allocatable limit across pods
7m7s        Normal    NodeHasSufficientMemory   node/kubernetes1   Node kubernetes1 status is now: NodeHasSufficientMemory
7m7s        Normal    NodeHasNoDiskPressure     node/kubernetes1   Node kubernetes1 status is now: NodeHasNoDiskPressure
7m7s        Normal    NodeHasSufficientPID      node/kubernetes1   Node kubernetes1 status is now: NodeHasSufficientPID
7m4s        Normal    RegisteredNode            node/kubernetes1   Node kubernetes1 event: Registered Node kubernetes1 in Controller
6m58s       Normal    Starting                  node/kubernetes1
5m15s       Normal    NodeReady                 node/kubernetes1   Node kubernetes1 status is now: NodeReady
kubernetes@kubernetes1:~$

Do you know how I can get calico-kube-controllers-555bc4b957-kv6zz into the Running state?

kubernetes@kubernetes1:~$ kubectl describe pod --namespace kube-system calico-kube-controllers-555bc4b957-kv6zz
Name:                 calico-kube-controllers-555bc4b957-kv6zz
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 <none>
Labels:               k8s-app=calico-kube-controllers
                      pod-template-hash=555bc4b957
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/calico-kube-controllers-555bc4b957
Containers:
  calico-kube-controllers:
    Image:      docker.io/calico/kube-controllers:v3.23.3
    Port:       <none>
    Host Port:  <none>
    Liveness:   exec [/usr/bin/check-status -l] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ENABLED_CONTROLLERS:  node
      DATASTORE_TYPE:       kubernetes
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hn7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-j2hn7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  4m10s (x3 over 14m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kubernetes@kubernetes1:~$

2 Answers


  1. From the pod's events you can clearly see that the scheduler was not able to schedule the pod onto the control plane, due to an untolerated taint on that node.

    Think of taints and tolerations as a bug spray (the taint) and a bug that tolerates a specific spray (the toleration): some bugs will tolerate sprays designed to keep off other species. In your case, the control plane is tainted with node-role.kubernetes.io/control-plane, but your pod only has a toleration for node-role.kubernetes.io/master. To get the pod scheduled onto the control plane, make sure it tolerates the same taints as the target node (the control plane).
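
    For example, to confirm which taint is actually on the node (a quick check, assuming the control-plane node is named kubernetes1 as in your output):

    # show the node's taints in the describe output
    kubectl describe node kubernetes1 | grep -A2 Taints
    # or print them as JSON
    kubectl get node kubernetes1 -o jsonpath='{.spec.taints}'
    # based on the scheduling error, you should see node-role.kubernetes.io/control-plane:NoSchedule listed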

  2. You can fix it by adding a toleration to your pod spec:

    kind: Pod
    ...
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
    ...
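
    Note that calico-kube-controllers is managed by a Deployment (the pod is controlled by a ReplicaSet), so add the toleration to the Deployment's pod template rather than to the pod itself; the pod will then be recreated with it. One way to do that is a JSON patch (a sketch; it assumes the Deployment's template already defines a tolerations list, which the pod output above suggests it does):

    # append the control-plane toleration to the calico-kube-controllers Deployment template
    kubectl -n kube-system patch deployment calico-kube-controllers --type=json -p='[
      {"op": "add", "path": "/spec/template/spec/tolerations/-",
       "value": {"key": "node-role.kubernetes.io/control-plane", "operator": "Exists", "effect": "NoSchedule"}}
    ]'

    Alternatively, run kubectl edit deployment -n kube-system calico-kube-controllers and add the toleration shown above under spec.template.spec.tolerations.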
    