
I’ve read dozens of posts about similar problems over the last two days, but I couldn’t resolve this DNS issue.

Basically, pods on the worker nodes can’t resolve any hostnames because they can’t reach the kube-dns Service address 10.96.0.10 (connection timed out).
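
To take name resolution itself out of the picture, the Service address can also be queried directly from the dnsutils pod shown further down (a sketch of the probe, not part of my original transcript; 172.17.0.2 is one of the CoreDNS pod IPs listed below):

# ask the kube-dns Service address directly
kubectl exec -ti dnsutils -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10
# ask one CoreDNS pod directly, to separate Service routing from pod-to-pod networking
kubectl exec -ti dnsutils -- nslookup kubernetes.default.svc.cluster.local 172.17.0.2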

I am providing the results of some commands I used while trying to debug this issue. If anything else would help, please ask in the comments and I’ll add it quickly.

Here is my setup:

  1. 3 instances of Ubuntu 22.04
  2. 1 of them is the control-plane node; the other two are workers
  3. I initialized the cluster with this command (see the note just after this list): kubeadm init --control-plane-endpoint=94.250.248.250 --cri-socket=unix:///var/run/cri-dockerd.sock
  4. I use Weave as the CNI (I tried Flannel first and had the same issue, so I switched to Weave to see whether it would help; it didn’t)
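
Since the init command above does not set --service-cidr, the cluster is on kubeadm’s default Service CIDR (10.96.0.0/12), which is where the 10.96.0.10 kube-dns address comes from. It can be double-checked against the API server’s flags (again a sketch, not part of my original transcript):

# the kube-apiserver static pod carries the effective Service CIDR as a flag
kubectl -n kube-system get pod kube-apiserver-feedgerald.com -o yaml | grep -- --service-cluster-ip-range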

Nodes

NAME                STATUS   ROLES           AGE   VERSION
feedgerald.com      Ready    control-plane   92m   v1.27.3
n1.feedgerald.com   Ready    <none>          90m   v1.27.3
n2.feedgerald.com   Ready    <none>          90m   v1.27.3

Pods

beluc@feedgerald:~/workspace/feedgerald/worker/kubernetes$ kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                                     READY   STATUS             RESTARTS         AGE   IP               NODE                NOMINATED NODE   READINESS GATES
default       dnsutils                                 1/1     Running            0                75m   10.40.0.3        n2.feedgerald.com   <none>           <none>
default       scraper-deployment-56f5fbb68b-67cqq      0/1     Completed          21 (5m24s ago)   86m   10.32.0.3        n1.feedgerald.com   <none>           <none>
default       scraper-deployment-56f5fbb68b-hcrmj      0/1     Completed          21 (5m24s ago)   86m   10.32.0.2        n1.feedgerald.com   <none>           <none>
default       scraper-deployment-56f5fbb68b-m6ltp      0/1     CrashLoopBackOff   21 (67s ago)     86m   10.40.0.2        n2.feedgerald.com   <none>           <none>
default       scraper-deployment-56f5fbb68b-pfvlx      0/1     CrashLoopBackOff   21 (18s ago)     86m   10.40.0.1        n2.feedgerald.com   <none>           <none>
kube-system   coredns-5d78c9869d-g4zzk                 1/1     Running            0                93m   172.17.0.2       feedgerald.com      <none>           <none>
kube-system   coredns-5d78c9869d-xg5fk                 1/1     Running            0                93m   172.17.0.4       feedgerald.com      <none>           <none>
kube-system   etcd-feedgerald.com                      1/1     Running            0                93m   94.250.248.250   feedgerald.com      <none>           <none>
kube-system   kube-apiserver-feedgerald.com            1/1     Running            0                93m   94.250.248.250   feedgerald.com      <none>           <none>
kube-system   kube-controller-manager-feedgerald.com   1/1     Running            0                93m   94.250.248.250   feedgerald.com      <none>           <none>
kube-system   kube-proxy-7f4w2                         1/1     Running            0                92m   92.63.105.188    n2.feedgerald.com   <none>           <none>
kube-system   kube-proxy-jh959                         1/1     Running            0                91m   82.146.44.93     n1.feedgerald.com   <none>           <none>
kube-system   kube-proxy-jwwkt                         1/1     Running            0                93m   94.250.248.250   feedgerald.com      <none>           <none>
kube-system   kube-scheduler-feedgerald.com            1/1     Running            0                93m   94.250.248.250   feedgerald.com      <none>           <none>
kube-system   weave-net-fllvh                          2/2     Running            1 (89m ago)      89m   92.63.105.188    n2.feedgerald.com   <none>           <none>
kube-system   weave-net-kdd9p                          2/2     Running            1 (89m ago)      89m   82.146.44.93     n1.feedgerald.com   <none>           <none>
kube-system   weave-net-x5ksv                          2/2     Running            1 (89m ago)      89m   94.250.248.250   feedgerald.com      <none>           <none>

CoreDNS Logs (just in case)

beluc@feedgerald:~/workspace/feedgerald/worker/kubernetes$ kubectl logs -n kube-system coredns-5d78c9869d-g4zzk
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:43929->185.60.132.11:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:40076->82.146.59.250:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:36699->185.60.132.11:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:57545->82.146.59.250:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:36760->185.60.132.11:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:53409->188.120.247.2:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:60134->188.120.247.2:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:54812->82.146.59.250:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:44563->188.120.247.2:53: i/o timeout
[ERROR] plugin/errors: 2 2971729299988687576.7504631273068998690. HINFO: read udp 172.17.0.2:36629->188.120.247.2:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.weave.works.domains. A: read udp 172.17.0.2:35531->188.120.247.2:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.weave.works. AAAA: read udp 172.17.0.2:33150->82.146.59.250:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.weave.works. A: read udp 172.17.0.2:42371->185.60.132.11:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.weave.works. A: read udp 172.17.0.2:44653->185.60.132.11:53: i/o timeout

nslookup from the dnsutils pod

beluc@feedgerald:~/workspace/feedgerald/worker/kubernetes$ kubectl exec -ti dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1

Contents of /etc/resolv.conf in that pod

beluc@feedgerald:~/workspace/feedgerald$ kubectl exec -ti dnsutils -- cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local DOMAINS
options ndots:5

This is to show that the kube-dns Service exists and has the expected ClusterIP

beluc@feedgerald:~/workspace/feedgerald$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  97m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   97m
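
A closely related check (again a sketch, not something I captured at the time) is whether the Service actually has the CoreDNS pods behind it as endpoints:

kubectl get endpoints kube-dns -n kube-system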

Here is the iptables config (Stack Overflow did not allow such a large paste in the question, hence the pastebin): https://pastebin.com/raw/XTpWaeCb
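
For anyone going through that dump, the parts most relevant here are the NAT chains kube-proxy programs for the kube-dns Service and the filter-table policies; a sketch of how to pull those out on a node:

# kube-proxy’s DNAT rules for the kube-dns ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10
# filter-table chains whose policies or rules could drop pod traffic
sudo iptables -L FORWARD -v -n
sudo iptables -L INPUT -v -n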

2 Answers


  1. Chosen as BEST ANSWER

    This solved the issue, but I still can't understand why there was a problem in the first place.

    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -P OUTPUT ACCEPT
    iptables -F
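
    If anyone wants to narrow down the culprit before flushing everything, the filter-table policies and packet counters usually show where the traffic is dropped (a sketch, assuming the drop happens in the filter table):

    # look for a DROP policy or rules whose packet counters keep rising
    sudo iptables -L INPUT -v -n
    sudo iptables -L FORWARD -v -n
    sudo iptables -S | grep -iE 'drop|reject'

    Also note that iptables -F is not persistent: whatever installed the offending rules (ufw, firewalld, a provider image default, etc.) can put them back after a reboot.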
    

  2. I encountered a similar problem, and the following solution helped me:

    1. SSH into the node where you are experiencing the issue.

    2. Edit the files /run/systemd/resolve/resolv.conf and /etc/resolv.conf using a text editor.

    3. Replace the value of the search field with . (a dot).

    4. Save the changes and close the files.

    After making these changes, restart the affected pods and re-run the DNS-resolution commands to check whether the issue is resolved; a minimal command sketch follows.
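
    A minimal sketch of steps 2-4 plus the pod restart (assuming the paths above and the pod names used earlier in this thread):

    # on the affected node
    sudo sed -i 's/^search .*/search ./' /run/systemd/resolve/resolv.conf /etc/resolv.conf

    # back on the control plane: recreate the CoreDNS pods and the test pod so they
    # pick up the changed resolv.conf
    kubectl -n kube-system rollout restart deployment coredns
    kubectl delete pod dnsutils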
