
We have a working Azure Kubernetes Service cluster running a .NET 6.0 web app. The pods listen on port 80, but the public URL is served over HTTPS, which is handled by an nginx ingress controller with a certificate secret. All of this is working well.

We are adding some new functionality (an integration with an external service). When signing into our app with this new functionality, there's a brief redirect to the external service's page. Once the user's requests have completed, the external service redirects back to our site using a preconfigured redirect URL, to which it posts some data (a custom header and a query string). At this point, our site errors with 502 Bad Gateway.

When I review the logs on the nginx ingress controller pod, I can see some additional errors:

[error] 664#664: *17279861 upstream prematurely closed connection while reading response header from upstream, client: 10.240.0.5, server: www-dev.application.com, request: "GET /details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU HTTP/2.0", upstream: "http://10.244.1.66:80/details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU", host: "www-dev.application.com", referrer: "https://www.external.service.com/"

10.244.1.66 is the internal IP of one of the application pods.

At first I thought this was an error related to this annotation:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

because the referrer is an https:// site making the request. However, adding that annotation makes the site unusable (probably because the .NET app pods are listening on plain HTTP on port 80).

Here's the application ingress YAML:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-web
  namespace: application
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - www-dev.application.com
    secretName: application-ingress-tls
  rules:
  - host: www-dev.application.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: applicationwebsvc
            port:
              number: 80

Anyway, does anyone have any idea what the problem could be here? Thanks!

2 Answers


  1. Chosen as BEST ANSWER

    This ended up being a resource limits issue. One particular request was causing memory usage to spike, which caused the container to be OOMKilled. That is what was leading to the 502 Bad Gateway error message: when the container was killed, it was no longer there to service the request.
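
    For anyone hitting the same thing, the fix is to raise the memory limit (or fix the spike) on the app's Deployment. A minimal sketch, assuming the Deployment, label, and image names match the service above (they are placeholders, not from the original post):

    ```yaml
    # Hypothetical Deployment for the application pods; names and image are assumed.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: application-web
      namespace: application
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: application-web
      template:
        metadata:
          labels:
            app: application-web
        spec:
          containers:
          - name: application-web
            image: applicationregistry.azurecr.io/application-web:1.0  # assumed image
            ports:
            - containerPort: 80
            resources:
              requests:          # what the scheduler reserves for the pod
                memory: "256Mi"
                cpu: "250m"
              limits:            # exceeding the memory limit gets the container OOMKilled
                memory: "1Gi"
                cpu: "500m"
    ```

    You can confirm this diagnosis with kubectl describe pod on the affected pod: a container killed for exceeding its memory limit shows Last State: Terminated with Reason: OOMKilled.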


  2. The ingress class has moved from an annotation to the ingressClassName field, and you should not need the HTTPS backend-protocol annotation, since the pods listen on HTTP. Can you please try this:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: application-web
      namespace: application
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
        nginx.ingress.kubernetes.io/ssl-passthrough: "true" 
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - www-dev.application.com
        secretName: application-ingress-tls
      rules:
      - host: www-dev.application.com
        http:
          paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: applicationwebsvc
                port: 
                  number: 80
    

    Please also check the Ingress documentation.
