
I'm trying to provision a k8s cluster on 3 Debian 10 VMs with kubeadm.

All VMs have two network interfaces: eth0 is the public interface with a static IP, and eth1 is the local interface with static IPs in 192.168.0.0/16:

  • Master: 192.168.1.1
  • Node1: 192.168.2.1
  • Node2: 192.168.2.2

All nodes have connectivity to each other.

ip a output from the master host:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:52:70:53:d5:12 brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.244.240/24 brd XXX.XXX.244.255 scope global dynamic eth0
       valid_lft 257951sec preferred_lft 257951sec
    inet6 2a01:367:c1f2::112/48 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::252:70ff:fe53:d512/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:95:af:b0:8c:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/16 brd 192.168.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::295:afff:feb0:8cc4/64 scope link 
       valid_lft forever preferred_lft forever

Master node is initialized fine with:

kubeadm init --upload-certs --apiserver-advertise-address=192.168.1.1 --apiserver-cert-extra-sans=192.168.1.1,XXX.XXX.244.240 --pod-network-cidr=10.40.0.0/16 -v=5

Output

But when I try to join the worker nodes, the kube-apiserver is not reachable:

kubeadm join 192.168.1.1:6443 --token 7bl0in.s6o5kyqg27utklcl --discovery-token-ca-cert-hash sha256:7829b6c7580c0c0f66aa378c9f7e12433eb2d3b67858dd3900f7174ec99cda0e -v=5

Output

netstat output from the master:

# netstat -tupn | grep :6443
tcp        0      0 192.168.1.1:43332       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41774       192.168.1.1:6443        ESTABLISHED 5362/kube-proxy     
tcp        0      0 192.168.1.1:41744       192.168.1.1:6443        ESTABLISHED 5236/kubelet        
tcp        0      0 192.168.1.1:43376       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43398       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41652       192.168.1.1:6443        ESTABLISHED 4914/kube-scheduler 
tcp        0      0 192.168.1.1:43448       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43328       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43452       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43386       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43350       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41758       192.168.1.1:6443        ESTABLISHED 5182/kube-controlle 
tcp        0      0 192.168.1.1:43306       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43354       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43296       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43408       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41730       192.168.1.1:6443        ESTABLISHED 5182/kube-controlle 
tcp        0      0 192.168.1.1:41738       192.168.1.1:6443        ESTABLISHED 4914/kube-scheduler 
tcp        0      0 192.168.1.1:43444       192.168.1.1:6443        TIME_WAIT   -                   
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41730       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41744       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41738       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41652       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 ::1:6443                ::1:42862               ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41758       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 ::1:42862               ::1:6443                ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41774       ESTABLISHED 5094/kube-apiserver 

Pods from master:

# kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP                   NODE                      NOMINATED NODE   READINESS GATES
coredns-558bd4d5db-8qhhl                          0/1     Pending   0          12m   <none>               <none>                    <none>           <none>
coredns-558bd4d5db-9hj7z                          0/1     Pending   0          12m   <none>               <none>                    <none>           <none>
etcd-cloud604486.fastpipe.io                      1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-apiserver-cloud604486.fastpipe.io            1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-controller-manager-cloud604486.fastpipe.io   1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-proxy-dzd42                                  1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-scheduler-cloud604486.fastpipe.io            1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>

All VMs have these kernel parameters set:

  • { name: 'vm.swappiness', value: '0' }
  • { name: 'net.bridge.bridge-nf-call-iptables', value: '1' }
  • { name: 'net.bridge.bridge-nf-call-ip6tables', value: '1'}
  • { name: 'net.ipv4.ip_forward', value: 1 }
  • { name: 'net.ipv6.conf.all.forwarding', value: 1}

The br_netfilter kernel module is loaded, and iptables is set to legacy mode (via update-alternatives).
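For reference, here is a sketch of how these parameters might be persisted in a sysctl drop-in. The file is written to a temp path for illustration only; on a real host it would live under /etc/sysctl.d/ (the filename is an arbitrary assumption) and be applied with sysctl --system after modprobe br_netfilter:

```shell
# Sketch: persist the kernel parameters from the question in a sysctl drop-in.
# Written to a temp file here; a real host would use /etc/sysctl.d/99-kubernetes.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
vm.swappiness                       = 0
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.ipv6.conf.all.forwarding        = 1
EOF
grep -c '= 1' "$CONF"   # prints 4: four forwarding/bridging parameters enabled
# On a live host (not run here):
#   sudo modprobe br_netfilter
#   sudo sysctl --system
```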

Am I missing something?

2 Answers


  1. Chosen as BEST ANSWER

    After a week of tinkering, the issue boiled down to a network misconfiguration on the service provider's side.

    For anyone hitting this same issue: check the MTU of your network. In my case it defaulted to 1500 instead of the recommended 1450.
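    A mismatch like this can be probed with don't-fragment pings: a ping whose payload is sized to the suspected path MTU (minus 28 bytes of IP and ICMP headers) should pass, while one sized to the interface MTU fails. A sketch, assuming the 1500/1450 figures from this answer and the node IPs from the question:

    ```shell
    # Assumption: interface reports MTU 1500 but the provider path only carries 1450.
    IFACE_MTU=1500        # what `ip link show eth1` reports
    PATH_MTU=1450         # what the provider network actually carries
    PAYLOAD=$((PATH_MTU - 28))   # subtract 20-byte IP + 8-byte ICMP headers
    echo "largest DF ping payload that should still pass: $PAYLOAD bytes"
    # On a live node (not run here):
    #   ping -M do -s $PAYLOAD 192.168.2.1              # should succeed
    #   ping -M do -s $((IFACE_MTU - 28)) 192.168.2.1   # fails on a 1450-byte path
    #   ip link set dev eth1 mtu 1450                   # hypothetical workaround
    ```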


  2. The reason for your issues is that the TLS connection between the components has to be secured. From the kubelet's point of view, the connection is only safe if the API server certificate contains, among its subject alternative names (SANs), the IP of the server it wants to connect to. You can see for yourself that you only add one IP address to the SANs.

    How can you fix this? There are two ways:

    1. Use the --discovery-token-unsafe-skip-ca-verification flag with your kubeadm join command from your node.

    2. Add the IP address of the second NIC to the API server certificate's SANs at the cluster initialization phase (kubeadm init)

    For more reading, check the directly related PR #93264, which was introduced in Kubernetes 1.19.
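    You can verify which SANs actually ended up in a certificate with openssl. A self-contained sketch using a throwaway certificate (203.0.113.240 is a documentation-range placeholder standing in for the masked public IP, and the /tmp paths are arbitrary):

    ```shell
    # Generate a throwaway cert carrying two IP SANs, the way
    # kubeadm init --apiserver-cert-extra-sans=... would, then inspect them.
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
      -days 1 -subj "/CN=kube-apiserver" \
      -addext "subjectAltName=IP:192.168.1.1,IP:203.0.113.240"
    # Print only the Subject Alternative Name extension (OpenSSL 1.1.1+):
    openssl x509 -in /tmp/apiserver-demo.crt -noout -ext subjectAltName
    ```

    On a real control-plane node, the same inspection works against /etc/kubernetes/pki/apiserver.crt, so you can confirm whether the IP the worker joins through is covered.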
