
I have a web application running in my cluster on a worker node on port 5001, behind a ClusterIP service. I'm also using k3s for the cluster deployment. I checked the cluster connection and it's running fine:

[screenshot: cluster connection check output]

the deployment has the container port set to 5001:

ports:
  - containerPort: 5001
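
(For context, a trimmed-down sketch of how the deployment lines up with the service below; the image name and replica count here are placeholders, not my real values:)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-ms
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: user-ms
  template:
    metadata:
      labels:
        io.kompose.service: user-ms    # must match the Service selector
    spec:
      containers:
        - name: user-ms
          image: user-ms:latest        # placeholder image
          ports:
            - containerPort: 5001      # the Service's targetPort points here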

Here is the service file:

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}

and here is the ingress file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: user-ms
            port:
              number: 80

I'm getting a 502 Bad Gateway error whenever I enter my worker or master node's IP address.

Expected: it should return the web application page.

I looked online and most answers mention a wrong port in the service or ingress, but my ports are correct; yes, I triple-checked:

- calling the user-ms service on port 80 from another pod -> worked
- calling the cluster IP on the worker node on port 5001 -> worked
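
(Roughly the kind of checks I mean; the throwaway curl pod, image, and default namespace below are just for illustration:)

# from another pod: hit the user-ms service on port 80
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://user-ms.default.svc.cluster.local:80/

# from the worker node: hit the cluster IP on port 5001
curl -v http://<cluster-ip>:5001/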

The ports are correct, so why is the ingress returning a 502?

here is the ingress describe:

[screenshot: kubectl describe output for the ingress]

and here is the describe output of the nginx ingress controller pod:

[screenshots: kubectl describe output for the nginx ingress controller pod]

the nginx ingress pod is running normally:

[screenshot: nginx ingress pod status]

here are the logs of the nginx ingress pod:

[screenshot: nginx ingress pod logs]

Sorry for the images, but I'm using a streaming machine to access the terminal, so I can't copy and paste.

How should I go about debugging this error?

2 Answers


  1. Chosen as BEST ANSWER

    OK, I managed to figure this out. In its default setup, k3s uses Traefik as the default ingress controller, which is why my nginx ingress logs didn't show anything related to the 502 Bad Gateway.

    I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
    

    Now when I run kubectl get pods --all-namespaces I no longer see a Traefik pod running; previously there were Traefik pods running.
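
    (The exact check, for reference:)

    kubectl get pods --all-namespaces | grep -i traefik    # returns nothing after the reinstall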

    Once all of that was done, I ran apply on the ingress again -> got a 404 error. I checked the nginx ingress pod logs and they now showed a new error about a missing ingress class, so I added the following to my ingress configuration file under metadata:

    metadata:
      name: user-ms-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
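
    (On networking.k8s.io/v1 the same thing can also be expressed with the spec.ingressClassName field instead of the deprecated annotation; a minimal sketch:)

    spec:
      ingressClassName: nginx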
    

    Now I went to the worker node's IP once more -> the 404 error was gone, but I got a 502 Bad Gateway error. I checked the logs and saw connection refused errors:

    [screenshot: connection refused errors in the nginx ingress logs]

    I figured out that I had set a network policy for all of my microservices. I deleted the network policy and removed its settings from all my deployment files.
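
    (If you need to keep a policy instead of deleting it, a rough sketch of allowing only the ingress controller through; the ingress-nginx namespace label here is an assumption and depends on where your controller runs:)

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ingress-nginx-to-user-ms
    spec:
      podSelector:
        matchLabels:
          io.kompose.service: user-ms
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace
          ports:
            - protocol: TCP
              port: 5001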

    Finally, I checked once more and I can now access my API and Swagger page normally.

    TLDR:

    1. If you are using the nginx ingress controller on k3s, remember to disable Traefik when creating the cluster (a quick check for this is sketched below).
    2. Don't forget to set the ingress class inside your ingress configuration.
    3. Watch out for network policies: mine blocked the nginx ingress pods from reaching the other pods, which is what caused the 502.
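
    (A quick way to see which ingress controllers and classes are actually present in the cluster, which would have surfaced the Traefik conflict earlier:)

    kubectl get ingressclass
    kubectl get pods -n kube-system    # on a stock k3s install, Traefik runs here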

  2. You can turn on access logging on nginx, which will give you more logs on the ingress controller and let you trace every request routed through the ingress. If you are trying to load a UI or hit a particular endpoint, the calls will show up in the nginx controller logs, so you can confirm whether the incoming requests are actually being routed to the proper service. From there you can start debugging the service itself (for example, check whether you can curl the endpoint from any pod within the cluster).

    I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you installed it using Helm, there should be a kubernetes-ingress ConfigMap for the ingress controller; by default disable-access-log will be true. Change it to false and you should start seeing more logs on the ingress controller. You might want to bounce the ingress controller pods if you still don't see detailed logs.

    kubectl edit cm -n <namespace> kubernetes-ingress

    apiVersion: v1
    data:
      disable-access-log: "false"  # set this to false
      map-hash-bucket-size: "128"
      ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
    kind: ConfigMap
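
    (To bounce the controller pods after editing the ConfigMap, something like this works; the deployment name and namespace are placeholders and depend on how the controller was installed:)

    kubectl rollout restart deployment <ingress-controller-deployment> -n <namespace>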
    