
I have set up a Kubernetes cluster of three CentOS 8 VMs and deployed a pod running nginx.

IP addresses of the VMs:

kubemaster 192.168.56.20
kubenode1 192.168.56.21
kubenode2 192.168.56.22

On each VM the interfaces and routes are defined as follows (output below is from kubenode2):

ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:d2:1b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
       valid_lft 75806sec preferred_lft 75806sec
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:df:77:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.22/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:19:52:19:b1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 22:b8:b4:5a:5a:26 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.0/32 brd 10.244.2.0 scope global flannel.1
       valid_lft forever preferred_lft forever

ip route:
default via 10.0.2.2 dev enp0s3 proto dhcp metric 100
default via 192.168.56.1 dev enp0s8 proto static metric 101
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.22 metric 101
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

On each VM I have two network adapters: a NAT adapter for internet access (enp0s3) and a host-only network adapter (enp0s8) so the three VMs can communicate with each other (this works; I verified it with ping).
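For example, the connectivity check from kubenode2 boils down to:

ping -c 3 192.168.56.20   # kubemaster reachable over the host-only network
ping -c 3 192.168.56.21   # kubenode1 reachable over the host-only network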

On each VM I applied the following firewall rules:

firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
firewall-cmd --permanent --add-port=8285/udp # Flannel udp backend
firewall-cmd --permanent --add-port=8472/udp # Flannel vxlan backend
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
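For reference, the applied rules can be double-checked after the reload with:

firewall-cmd --list-ports        # should list 6443, 2379-2380, 10250-10252, 8285 and 8472
firewall-cmd --query-masquerade  # should print "yes"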

Finally, I deployed the cluster and nginx with the following commands:

sudo kubeadm init --apiserver-advertise-address=192.168.56.20 --pod-network-cidr=10.244.0.0/16   # pod CIDR expected by the Flannel CNI
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
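To double-check which NodePort was assigned (30086 here), the service can be queried directly:

kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'   # prints the assigned NodePort, e.g. 30086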

More general information about my cluster:

kubectl get nodes -o wide

NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                 CONTAINER-RUNTIME
kubemaster   Ready    master   3h8m   v1.19.2   192.168.56.20   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13
kubenode1    Ready    <none>   3h6m   v1.19.2   192.168.56.21   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13
kubenode2    Ready    <none>   165m   v1.19.2   192.168.56.22   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13

kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
default       nginx-6799fc88d8-mrvsg               1/1     Running   0          3h     10.244.1.3      kubenode1    <none>           <none>
kube-system   coredns-f9fd979d6-6qxk9              1/1     Running   0          3h9m   10.244.1.2      kubenode1    <none>           <none>
kube-system   coredns-f9fd979d6-bj2fd              1/1     Running   0          3h9m   10.244.0.2      kubemaster   <none>           <none>
kube-system   etcd-kubemaster                      1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-apiserver-kubemaster            1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-controller-manager-kubemaster   1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-flannel-ds-fdv4p                1/1     Running   0          166m   192.168.56.22   kubenode2    <none>           <none>
kube-system   kube-flannel-ds-vvhsz                1/1     Running   0          3h6m   192.168.56.21   kubenode1    <none>           <none>
kube-system   kube-flannel-ds-vznl5                1/1     Running   0          3h6m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-proxy-45tmz                     1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-proxy-nb7jt                     1/1     Running   0          3h7m   192.168.56.21   kubenode1    <none>           <none>
kube-system   kube-proxy-tl9n5                     1/1     Running   0          166m   192.168.56.22   kubenode2    <none>           <none>
kube-system   kube-scheduler-kubemaster            1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>

kubectl get service -o wide

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h10m   <none>
nginx        NodePort    10.102.152.25   <none>        80:30086/TCP   179m    app=nginx
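To confirm the service is actually backed by the nginx pod, its endpoints can be checked; the pod IP (10.244.1.3) should show up there:

kubectl get endpoints nginx   # should list 10.244.1.3:80 as the only endpoint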

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

iptables version:

iptables v1.8.4 (nf_tables)

Results and issue:

  • If I do curl 192.168.56.21:30086 from any VM -> OK, I get the nginx welcome page HTML.
  • If I try the other node IPs (e.g., curl 192.168.56.22:30086), it fails with curl: (7) Failed to connect to 192.168.56.22 port 30086: Connection timed out (a small loop to reproduce this is shown below).
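A small loop to reproduce this from any of the VMs (5-second timeout per request; only kubenode1 answers):

# probe the NodePort on every node; only the node hosting the pod responds
for ip in 192.168.56.20 192.168.56.21 192.168.56.22; do
  printf '%s: ' "$ip"
  curl -s -m 5 -o /dev/null -w '%{http_code}\n' "http://$ip:30086" || echo 'connection failed'
done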

What I tried to debug:

sudo netstat -antup | grep kube-proxy
tcp        0      0 0.0.0.0:30086           0.0.0.0:*               LISTEN      4116/kube-proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      4116/kube-proxy
tcp        0      0 192.168.56.20:49812     192.168.56.20:6443      ESTABLISHED 4116/kube-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      4116/kube-proxy

Thus, on each VM, kube-proxy appears to be listening on port 30086, which is expected.

I tried to apply this rule on each node (found in another question), without success:

iptables -A FORWARD -j ACCEPT
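Since Docker usually sets the default policy of the FORWARD chain to DROP, it can help to look at the chain policy and packet counters before and after adding that rule (a diagnostic sketch; output varies per node):

iptables -L FORWARD -n -v --line-numbers   # shows the chain policy (ACCEPT/DROP) and per-rule packet counters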

Do you have any idea why I cannot reach the service from the master node and from node 2?

First update:

  • It seems CentOS 8 is not well supported by kubeadm. I switched to CentOS 7, but the issue persisted;
  • The Flannel pods were using the wrong interface (enp0s3) instead of enp0s8. I modified the kube-flannel.yml file and added the argument --iface=enp0s8 (a rough sketch of the change is shown after the log below). Now my pods use the correct interface.
kubectl logs kube-flannel-ds-nn6v4 -n kube-system:
I0929 06:19:36.842149       1 main.go:531] Using interface with name enp0s8 and address 192.168.56.22
I0929 06:19:36.842243       1 main.go:548] Defaulting external address to interface address (192.168.56.22)
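For reference, the change boils down to editing the downloaded manifest and re-applying it (a rough sketch; the exact args and indentation in kube-flannel.yml may differ):

# grab the manifest, add --iface=enp0s8 to the flanneld container args, then re-apply
curl -sO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# edit kube-flannel.yml so the flanneld container args read roughly:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=enp0s8
kubectl apply -f kube-flannel.yml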

Even after fixing these two things I still had the same issue…

Second update:

The final solution was to flush iptables on each VM with the following commands:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
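After the restart, kube-proxy rebuilds its NAT chains from scratch; one way to confirm the NodePort rule is back (assuming the default iptables proxy mode):

iptables -t nat -L KUBE-NODEPORTS -n | grep 30086   # the rule for the nginx NodePort should reappear here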

Now it is working correctly 🙂

2 Answers


  1. Chosen as BEST ANSWER

    I finally found the solution after switching to CentOS 7 and fixing the Flannel configuration (see the updates in the question). I also noticed issues in the coredns pods. Here is an example of what was happening inside one of them:

    kubectl logs coredns-f9fd979d6-8gtlp -n kube-system:
    E0929 07:09:40.200413       1 reflector.go:178] pkg/mod/k8s.io/client-go@…/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
    [INFO] plugin/ready: Still waiting on: "kubernetes"
    
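    The "no route to host" towards the cluster IP 10.96.0.1 usually points to host-level firewall/iptables REJECT rules (firewalld inserts rules with --reject-with icmp-host-prohibited), rather than a CNI routing problem. A quick way to spot such rules before flushing (diagnostic sketch; chain contents differ per setup):

    # REJECT rules with icmp-host-prohibited are the usual cause of "no route to host"
    iptables -L INPUT -n --line-numbers | grep -i reject
    iptables -L FORWARD -n --line-numbers | grep -i reject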

    The final solution was to flush iptables on each VM with the following commands:

    systemctl stop kubelet
    systemctl stop docker
    iptables --flush
    iptables -t nat --flush
    systemctl start kubelet
    systemctl start docker
    

    Then I can access the service deployed from each VM :)

    I am still not sure I fully understand what the root cause was. I will keep investigating and will post more information here.


  2. This is because you are running k8s on CentOS 8.

    According to the Kubernetes documentation, the list of supported host operating systems is as follows:

    • Ubuntu 16.04+
    • Debian 9+
    • CentOS 7
    • Red Hat Enterprise Linux (RHEL) 7
    • Fedora 25+
    • HypriotOS v1.0.1+
    • Flatcar Container Linux (tested with 2512.3.0)

    This article mentions that there are network issues on RHEL 8:

    (2020/02/11 update: After installation, I kept facing pod network issues: deployed pods were unable to reach the external network, and pods deployed on different workers were unable to ping each other, even though all nodes (master, worker1 and worker2) showed as Ready via kubectl get nodes. After checking the kubernetes.io official website, I observed that the nftables backend is not compatible with the current kubeadm packages. Please refer to the section "Ensure iptables tooling does not use the nftables backend".)

    The simplest solution here is to reinstall the nodes on a supported operating system.
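    Alternatively, the workaround described in that "Ensure iptables tooling does not use the nftables backend" section of the kubeadm install docs is to switch the iptables tooling to the legacy backend. A sketch of that (it only works where the *-legacy binaries are packaged, which is typically not the case on CentOS 8, hence the reinstall advice):

    # point the iptables tooling at the legacy (non-nftables) backend
    update-alternatives --set iptables /usr/sbin/iptables-legacy
    update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
    update-alternatives --set arptables /usr/sbin/arptables-legacy
    update-alternatives --set ebtables /usr/sbin/ebtables-legacy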
