
When I replicate the application on more than one pod, the web-app returns an HTTP 504 when accessed through an NGINX load-balancer.

The NGINX load-balancer sits outside of the Kubernetes cluster and acts as a reverse proxy + load-balancer, so NGINX forwards the requests to one node hosting the web-app container. Important: I don't want the NGINX host to be part of the cluster (as long as that can be avoided).

upstream website {
    ip_hash;
    server 1.1.1.1:30300;
    #server 2.2.2.2:30300;
}

server {
    listen                          443 ssl http2;
    server_name                     example.com;

    location / {
            proxy_pass http://website;

            proxy_cache off;
            proxy_buffering off;

            proxy_read_timeout 1d;
            proxy_connect_timeout 4;
            proxy_send_timeout 1d;

            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;

            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This config works if, and only if, the app has been deployed to the 1.1.1.1 node only. If I replicate the web-app to 2.2.2.2 as well, the snippet above already leads to a 504, even though 2.2.2.2 is still commented out. Uncommenting 2.2.2.2 doesn't change anything.

As far as I understand, a NodePort is a publicly available port that maps to an internal port (called port). Hence, NodePort 30300 is forwarded to 2000, which is also the targetPort the web-app listens on. Upon replication, the second pod also hosts the web-app (+ microservices) and exposes itself on NodePort 30300. So we would have two NodePorts 30300 within our Kubernetes network, and I guess this might lead to confusion and routing issues.
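
One way to sanity-check that assumption is to inspect the Service (the swiper-web-service defined below) and its endpoints with standard kubectl commands:

kubectl get svc swiper-web-service -o wide
kubectl get endpoints swiper-web-service

The first command should show a single 2000:30300 mapping for the whole cluster, and the second should list one pod IP per replica behind it, rather than two separate NodePorts.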

apiVersion: apps/v1
kind: Deployment
metadata:
  name: swiper-web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: swiper

  template:
    metadata:
      labels:
        app: swiper
    spec:
      containers:
      - name: swiper-web-app-example
        image: docker.example.com/swiper.web.app.webapp:$(Build.BuildId)
        ports:
        - containerPort: 2000
        resources:
          limits:
            memory: "2.2G"
            cpu: "0.6"


      - name: swiper-web-api-oauth
        image: docker.example.com/swiper.web.api.oauth:$(Build.BuildId)
        ports:
        - containerPort: 2010
        resources:
          limits:
            memory: "100M"
            cpu: "0.1"

      imagePullSecrets:
      - name: regcred

      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8

---

apiVersion: v1
kind: Service
metadata:
  name: swiper-web-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: swiper
  ports:
  - name: swiper-web-app-example
    port: 2000
    nodePort: 30300

  - name: swiper-web-api-oauth
    port: 2010


Edit:

Adding externalTrafficPolicy: Local to the swiper-web-service solves the issue. Both endpoints are now reachable. But the load-balancing of the other microservices is now disabled.
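
For reference, this is roughly where that field sits in the Service spec (only a sketch of the change described above, second port omitted):

apiVersion: v1
kind: Service
metadata:
  name: swiper-web-service
  namespace: default
spec:
  type: NodePort
  # Local: only route external traffic to pods on the node that received it,
  # which also stops kube-proxy from balancing across nodes.
  externalTrafficPolicy: Local
  selector:
    app: swiper
  ports:
  - name: swiper-web-app-example
    port: 2000
    nodePort: 30300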

2 Answers


  1. Chosen as BEST ANSWER

    The issue was quite simple. The application uses SignalR to fetch data on demand. Each data request could end up on a different node, leading to a broken connection state (HTTP 504/502). The swiper-web-service was missing the sessionAffinity config. Adjusting the swiper-web-service to the following fixes the issue:

    apiVersion: v1
    kind: Service
    metadata:
      name: swiper-web-service
      namespace: default
    spec:
      type: NodePort
      selector:
        app: swiper
      ports:
      - name: swiper-web-app-example
        port: 2000
        nodePort: 30300
    
      - name: swiper-web-api-oauth
        port: 2010
    
      sessionAffinity: ClientIP
      externalTrafficPolicy: Cluster
    

  2. No, there will be only one nodePort 30300 exposed on all k8s nodes for your service. How are you replicating your second pod? Are you setting replicas to 2 in the deployment or some other way?

    Once you set replicas to 2 in the deployment, it will provision another pod. Make sure that pod is running on a separate node and not all pods are running on the same k8s node.
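
    A quick way to verify this (assuming the app=swiper label from the deployment above) is to check which node each replica was scheduled on:

    kubectl get pods -l app=swiper -o wide

    The NODE column should show a different node for each replica.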
