
I built a simple Kubernetes setup on bare metal, with one master and two worker nodes:

[root@kubemaster helm-chart]$ kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
kubemaster   Ready    control-plane   53d   v1.26.1
kubenode-1   Ready    <none>          53d   v1.26.1
kubenode-2   Ready    <none>          17d   v1.26.2

I installed a simple echo server:

[root@kubemaster helm-chart]$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
echo-67456bbd77-ttgx7   1/1     Running   0          50m   X.X.X.X   kubenode-2   <none>           <none>

I’ve also installed the Nginx ingress controller with 2 replicas, one running on each worker node:

[root@kubemaster helm-chart]$ kubectl get pods -o wide -n nginx
NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
bkk-ingress-5c56c5868-lhd98   1/1     Running   0          19m   Y.Y.Y.Y   kubenode-1   <none>           <none>
bkk-ingress-5c56c5868-xj8jh   1/1     Running   0          60m   X.X.X.X    kubenode-2   <none>           <none>

And I added this ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
spec:
  ingressClassName: nginx
  rules:
  - host: kong.example
    http:
      paths:
      - path: /echo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo
            port:
              number: 80

Here is the echo service:

kind: Service
apiVersion: v1
metadata:
  name: echo
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: low
      protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: echo

When I test this scenario by calling the Nginx controller on KUBENODE_2, where the echo app is also running:

curl -i http://KUBENODE_2_IP:MY_PORT/echo -H 'Host: kong.example'

everything works fine, just as I expected. But if I replace KUBENODE_2_IP with KUBENODE_1_IP, the call times out. (An ingress controller pod runs on that node as well.) Does anybody know what else I need to configure to make this work?

Both boxes have MY_PORT open.

Everything is running on CentOS 8 Linux.

If you need any more config to answer this question, please let me know, I can provide everything.

UPDATE:

As requested in the comments…

[root@kubemaster helm-chart]$ kubectl get svc --all-namespaces
NAMESPACE              NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
calico-apiserver       calico-api                        ClusterIP   10.102.232.124   <none>        443/TCP                        3d19h
calico-system          calico-kube-controllers-metrics   ClusterIP   None             <none>        9094/TCP                       3d19h
calico-system          calico-typha                      ClusterIP   10.107.28.169    <none>        5473/TCP                       3d19h
default                kubernetes                        ClusterIP   10.96.0.1        <none>        443/TCP                        3d20h
devops-tools           jenkins-service                   NodePort    10.102.169.34    <none>        8080:32000/TCP                 2d20h
echo                   echo                              ClusterIP   10.103.180.199   <none>        8080/TCP                         3d10h
kube-system            kube-dns                          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP         3d20h
kubernetes-dashboard   dashboard-metrics-scraper         ClusterIP   10.97.51.241     <none>        8000/TCP                       3d8h
kubernetes-dashboard   kubernetes-dashboard              NodePort    10.102.144.46    <none>        443:32321/TCP                  3d8h
nginx                  bkk-nginx-ingress                 NodePort    10.106.141.233   <none>        801:31902/TCP,4431:31903/TCP   2m30s

Here is the echo pod:

[root@kubemaster helm-chart]$ kubectl get pod echo-6b68cbf67d-xspjh -n echo -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 06aa62993255edd5f3c9c4e6633c21f20d5ead2f2b7836bf69d6c8b12b4b6b26
    cni.projectcalico.org/podIP: 192.168.29.131/32
    cni.projectcalico.org/podIPs: 192.168.29.131/32
  creationTimestamp: "2023-03-30T09:17:42Z"
  generateName: echo-6b68cbf67d-
  labels:
    app: echo
    pod-template-hash: 6b68cbf67d
  name: echo-6b68cbf67d-xspjh
  namespace: echo
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: echo-6b68cbf67d
    uid: 74c82a2f-40ca-4b84-aa15-13f437c9c1fe
  resourceVersion: "71410"
  uid: f6a859b1-ae4b-4ae6-b7b5-cc19cb752b22
spec:
  containers:
  - image: jmalloc/echo-server
    imagePullPolicy: Always
    name: echo-server
    ports:
    - containerPort: 8080
      name: http-port
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rhx5f
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kubenode-1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-rhx5f
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T09:17:42Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T09:17:46Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T09:17:46Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T09:17:42Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://d6a2d64e322d50a9dc5d4dcec015eb928d6b2a859a73291c2d4e122756503bda
    image: docker.io/jmalloc/echo-server:latest
    imageID: docker.io/jmalloc/echo-server@sha256:57110914108448e6692cd28fc602332357f91951d74ca12217a347b1f7df599c
    lastState: {}
    name: echo-server
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-03-30T09:17:46Z"
  hostIP: XXX.XXX.XXX.XXX
  phase: Running
  podIP: 192.168.29.131
  podIPs:
  - ip: 192.168.29.131
  qosClass: BestEffort
  startTime: "2023-03-30T09:17:42Z"

Update for @AagonP:

NGINX Ingress Controller Version=3.0.2 Commit=40e3a2bc24a0b158938c1e1c5e5f8db5feec624e Date=2023-02-14T11:27:05Z DirtyState=false Arch=linux/amd64 Go=go1.19.6
I0405 21:29:29.728053       1 flags.go:294] Starting with flags: ["-nginx-plus=false" "-nginx-reload-timeout=60000" "-enable-app-protect=false" "-enable-app-protect-dos=false" "-nginx-configmaps=nginx/bkk-ingress-config" "-default-server-tls-secret=nginx/bkk-nginx-ingress-default-server-tls" "-ingress-class=nginx" "-health-status=false" "-health-status-uri=/nginx-health" "-nginx-debug=false" "-v=1" "-nginx-status=true" "-nginx-status-port=8080" "-nginx-status-allow-cidrs=127.0.0.1" "-report-ingress-status" "-enable-leader-election=true" "-leader-election-lock-name=bkk-nginx-ingress-leader-election" "-enable-prometheus-metrics=true" "-prometheus-metrics-listen-port=9113" "-prometheus-tls-secret=" "-enable-service-insight=false" "-service-insight-listen-port=9114" "-service-insight-tls-secret=" "-enable-custom-resources=true" "-enable-snippets=false" "-include-year=false" "-disable-ipv6=false" "-enable-tls-passthrough=false" "-enable-preview-policies=false" "-enable-cert-manager=false" "-enable-oidc=false" "-enable-external-dns=false" "-ready-status=true" "-ready-status-port=8081" "-enable-latency-metrics=false"]
I0405 21:29:29.784607       1 main.go:227] Kubernetes version: 1.26.3
I0405 21:29:29.811361       1 main.go:373] Using nginx version: nginx/1.23.3
2023/04/05 21:29:29 [notice] 13#13: using the "epoll" event method
2023/04/05 21:29:29 [notice] 13#13: nginx/1.23.3
2023/04/05 21:29:29 [notice] 13#13: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/04/05 21:29:29 [notice] 13#13: OS: Linux 4.18.0-348.2.1.el8_5.x86_64
2023/04/05 21:29:29 [notice] 13#13: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/04/05 21:29:29 [notice] 13#13: start worker processes
2023/04/05 21:29:29 [notice] 13#13: start worker process 14
2023/04/05 21:29:29 [notice] 13#13: start worker process 15
2023/04/05 21:29:29 [notice] 13#13: start worker process 16
2023/04/05 21:29:29 [notice] 13#13: start worker process 17
I0405 21:29:29.876948       1 listener.go:54] Starting Prometheus listener on: :9113/metrics
I0405 21:29:29.877797       1 leaderelection.go:248] attempting to acquire leader lease nginx/bkk-nginx-ingress-leader-election...
W0405 21:29:29.977996       1 controller.go:3877] Using the DEPRECATED annotation 'kubernetes.io/ingress.class'. The 'ingressClassName' field will be ignored.
I0405 21:29:29.979566       1 event.go:285] Event(v1.ObjectReference{Kind:"Secret", Namespace:"nginx", Name:"bkk-nginx-ingress-default-server-tls", UID:"45245406-83f4-46f7-9504-98e63e954686", APIVersion:"v1", ResourceVersion:"1224178", FieldPath:""}): type: 'Normal' reason: 'Updated' the special Secret nginx/bkk-nginx-ingress-default-server-tls was updated
I0405 21:29:29.979604       1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"echoserver", Name:"echoserver", UID:"90866d19-63b4-49a9-9097-663d70c6327a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1221186", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for echoserver/echoserver was added or updated 
I0405 21:29:29.979622       1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"echoserver", Name:"echoserver", UID:"90866d19-63b4-49a9-9097-663d70c6327a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1221186", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for echoserver/echoserver was added or updated 
2023/04/05 21:29:29 [notice] 13#13: signal 1 (SIGHUP) received from 21, reconfiguring
2023/04/05 21:29:29 [notice] 13#13: reconfiguring
2023/04/05 21:29:29 [notice] 13#13: using the "epoll" event method
2023/04/05 21:29:29 [notice] 13#13: start worker processes
2023/04/05 21:29:29 [notice] 13#13: start worker process 22
2023/04/05 21:29:29 [notice] 13#13: start worker process 23
2023/04/05 21:29:29 [notice] 13#13: start worker process 24
2023/04/05 21:29:29 [notice] 13#13: start worker process 25
2023/04/05 21:29:30 [notice] 14#14: gracefully shutting down
2023/04/05 21:29:30 [notice] 16#16: gracefully shutting down
2023/04/05 21:29:30 [notice] 16#16: exiting
2023/04/05 21:29:30 [notice] 14#14: exiting
2023/04/05 21:29:30 [notice] 14#14: exit
2023/04/05 21:29:30 [notice] 16#16: exit
2023/04/05 21:29:30 [notice] 15#15: gracefully shutting down
2023/04/05 21:29:30 [notice] 15#15: exiting
2023/04/05 21:29:30 [notice] 15#15: exit
2023/04/05 21:29:30 [notice] 17#17: gracefully shutting down
2023/04/05 21:29:30 [notice] 17#17: exiting
2023/04/05 21:29:30 [notice] 17#17: exit
I0405 21:29:30.101165       1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"nginx", Name:"bkk-ingress-config", UID:"2ccfee01-9782-4c26-97c1-36acffda896f", APIVersion:"v1", ResourceVersion:"1224180", FieldPath:""}): type: 'Normal' reason: 'Updated' Configuration from nginx/bkk-ingress-config was updated 
I0405 21:29:30.101198       1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"echoserver", Name:"echoserver", UID:"90866d19-63b4-49a9-9097-663d70c6327a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1221186", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for echoserver/echoserver was added or updated 
2023/04/05 21:29:30 [notice] 13#13: signal 17 (SIGCHLD) received from 14
2023/04/05 21:29:30 [notice] 13#13: worker process 14 exited with code 0
2023/04/05 21:29:30 [notice] 13#13: signal 29 (SIGIO) received
2023/04/05 21:29:30 [notice] 13#13: signal 17 (SIGCHLD) received from 15
2023/04/05 21:29:30 [notice] 13#13: worker process 15 exited with code 0
2023/04/05 21:29:30 [notice] 13#13: signal 29 (SIGIO) received
2023/04/05 21:29:30 [notice] 13#13: signal 17 (SIGCHLD) received from 16
2023/04/05 21:29:30 [notice] 13#13: worker process 16 exited with code 0
2023/04/05 21:29:30 [notice] 13#13: signal 29 (SIGIO) received
2023/04/05 21:29:30 [notice] 13#13: signal 17 (SIGCHLD) received from 17
2023/04/05 21:29:30 [notice] 13#13: worker process 17 exited with code 0
2023/04/05 21:29:30 [notice] 13#13: signal 29 (SIGIO) received
I0407 08:39:21.655133       1 leaderelection.go:258] successfully acquired lease nginx/bkk-nginx-ingress-leader-election

After this, only the following segment is repeated, numerous times. (I tried the request multiple times…)

2023/04/08 09:19:05 [error] 22#22: *24 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX, request: "GET / HTTP/1.1", upstream: "http://192.168.29.143:80/", host: "XXX.XXX.XXX"
XXX.XXX.XXX.XXX - - [08/Apr/2023:09:19:05 +0000] "GET / HTTP/1.1" 504 167 "-" "curl/7.87.0" "-"
XXX.XXX.XXX.XXX - - [08/Apr/2023:10:42:01 +0000] "GET / HTTP/1.1" 499 0 "-" "curl/7.87.0" "-"
2023/04/13 14:20:57 [error] 22#22: *28 upstream timed out (110: Connection timed out) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX, request: "GET / HTTP/1.1", upstream: "http://192.168.29.143:80/", host: "XXX.XXX.XXX.XXX"
XXX.XXX.XXX.XXX - - [13/Apr/2023:14:20:57 +0000] "GET / HTTP/1.1" 504 167 "-" "curl/7.87.0" "-"
[root@kubemaster ~]$ kubectl describe ingress echoserver -n echoserver
Name:             echoserver
Labels:           <none>
Namespace:        echoserver
Address:
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  apps.besztercekk.hu
                       /   echoserver:80 (192.168.29.143:80)
Annotations:           kubernetes.io/ingress.class: nginx
Events:
  Type    Reason          Age               From                      Message
  ----    ------          ----              ----                      -------
  Normal  AddedOrUpdated  15s (x4 over 8d)  nginx-ingress-controller  Configuration for echoserver/echoserver was added or updated
  Normal  AddedOrUpdated  15s (x4 over 8d)  nginx-ingress-controller  Configuration for echoserver/echoserver was added or updated
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: apps.besztercekk.hu
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 80
[root@kubemaster ~]$ kubectl describe ingressClass nginx
Name:         nginx
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: bkk
              meta.helm.sh/release-namespace: nginx
Controller:   nginx.org/ingress-controller
Events:       <none>
[root@kubemaster ~]$ kubectl describe pod bkk-ingress-8mkf4 -n nginx
Name:             bkk-ingress-8mkf4
Namespace:        nginx
Priority:         0
Service Account:  bkk-ingress-sa
Node:             kubenode-2/XXX.XXX.XXX.XXX
Start Time:       Wed, 05 Apr 2023 23:29:28 +0200
Labels:           app=bkk-ingress
                  controller-revision-hash=5c56c5868
                  pod-template-generation=2
Annotations:      cni.projectcalico.org/containerID: 99b8a649d5edbbe68d3077ff1a688693495d7eb5ad33a4678b8274eb644ac6a2
                  cni.projectcalico.org/podIP: 192.168.77.82/32
                  cni.projectcalico.org/podIPs: 192.168.77.82/32
                  prometheus.io/port: 9113
                  prometheus.io/scheme: http
                  prometheus.io/scrape: true
Status:           Running
IP:               192.168.77.82
IPs:
  IP:           192.168.77.82
Controlled By:  DaemonSet/bkk-ingress
Containers:
  bkk-nginx-ingress:
    Container ID:  containerd://bab4abc4cc5120c24cbec492397667932d934a73ffa82b08710242a7ef90f0d0
    Image:         nginx/nginx-ingress:3.0.2
    Image ID:      docker.io/nginx/nginx-ingress@sha256:218eec3226b3a130b18090f3f7f244874dd082536e3de53468d0fbd5d357e039
    Ports:         80/TCP, 443/TCP, 9113/TCP, 8081/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      -nginx-plus=false
      -nginx-reload-timeout=60000
      -enable-app-protect=false
      -enable-app-protect-dos=false
      -nginx-configmaps=$(POD_NAMESPACE)/bkk-ingress-config
      -default-server-tls-secret=$(POD_NAMESPACE)/bkk-nginx-ingress-default-server-tls
      -ingress-class=nginx
      -health-status=false
      -health-status-uri=/nginx-health
      -nginx-debug=false
      -v=1
      -nginx-status=true
      -nginx-status-port=8080
      -nginx-status-allow-cidrs=127.0.0.1
      -report-ingress-status
      -enable-leader-election=true
      -leader-election-lock-name=bkk-nginx-ingress-leader-election
      -enable-prometheus-metrics=true
      -prometheus-metrics-listen-port=9113
      -prometheus-tls-secret=
      -enable-service-insight=false
      -service-insight-listen-port=9114
      -service-insight-tls-secret=
      -enable-custom-resources=true
      -enable-snippets=false
      -include-year=false
      -disable-ipv6=false
      -enable-tls-passthrough=false
      -enable-preview-policies=false
      -enable-cert-manager=false
      -enable-oidc=false
      -enable-external-dns=false
      -ready-status=true
      -ready-status-port=8081
      -enable-latency-metrics=false
    State:          Running
      Started:      Wed, 05 Apr 2023 23:29:29 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:readiness-port/nginx-ready delay=0s timeout=1s period=1s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  nginx (v1:metadata.namespace)
      POD_NAME:       bkk-ingress-8mkf4 (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2mpq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-q2mpq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>

2 Answers


  1. On the KUBENODE_1 node you don’t have any running "echo" pod.

    Try scaling "echo" to 2 replicas, make sure the second pod lands on the KUBENODE_1 node, and you will see that it works.

    Either way, you’re bypassing the Service. If you point at the Service instead, replacing the node IP with the Service’s full DNS name, you will always reach your application.
    Example:
    SERVICE-NAME.NAMESPACE-NAME.svc.cluster.local

    Here are some useful links:

    https://kubernetes.io/docs/concepts/services-networking/service/

    https://medium.com/the-programmer/working-with-clusterip-service-type-in-kubernetes-45f2c01a89c8

    https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/
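
The replica-scaling suggestion above can be sketched as a Deployment fragment. This is hypothetical: the Deployment name, namespace, and labels are assumed to match the echo setup shown in the question, and the podAntiAffinity section is an addition that asks the scheduler to prefer spreading the two replicas across different nodes:

```yaml
# Hypothetical sketch: run two echo replicas and prefer spreading them
# across nodes, so each worker node hosting an ingress controller also
# has a local echo pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: avoid co-locating echo pods on the same node.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: echo
              topologyKey: kubernetes.io/hostname
      containers:
      - name: echo-server
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
```

With one replica on each worker node, the ingress controller on either node has a backend pod available locally, though this only works around the cross-node routing problem rather than fixing it.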

  2. Did you forget to install kube-proxy, the Kubernetes network proxy that runs on each node?

    Without it, the Nginx ingress controller running on KUBENODE_1 has no way to forward your request to the echo pod.

    Refer to Install the kube-proxy addon with kubeadm
