We have enabled horizontal Pod autoscaling in GKE. Our Pods sit behind a ClusterIP-type Service, and we route public traffic to that Service through the NGINX Ingress controller.
While monitoring usage we noticed that traffic is not distributed equally between the Pods: it all goes to one single Pod. Whenever we manually delete that particular Pod, traffic is routed to another available Pod.
Is there any way we can configure the Ingress rules to distribute traffic equally?
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/load-balance: round_robin
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.org/proxy-connect-timeout: 30s
    nginx.org/proxy-read-timeout: 20s
  generation: 11
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: gateway.example.com
    http:
      paths:
      - backend:
          serviceName: gateway-443
          servicePort: 443
        path: /
      - backend:
          serviceName: gateway-80
          servicePort: 80
        path: /
Service manifest
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    serviceloadbalancer/lb.cookie-sticky-session: "false"
    serviceloadbalancer/lb.host: gateway.example.com
    serviceloadbalancer/lb.sslTerm: "true"
  labels:
    name: gateway-default
    port: gateway-default-8243
  name: gateway-8243
  namespace: default
spec:
  clusterIP: 10.20.215.122
  ports:
  - name: pass-through-https
    port: 443
    protocol: TCP
    targetPort: 8243
  selector:
    name: gatway-default
  sessionAffinity: ClientIP
  type: ClusterIP
2 Answers
Finally, I have figured out the issue: the cause is the setting sessionAffinity: ClientIP.
You should set the session affinity to ClientIP only if you want to make sure that connections from a particular client are passed to the same Pod each time; it routes traffic based on the client's IP address. To distribute traffic equally between Pods, set the value to None or remove the sessionAffinity field entirely, because the default value is None.
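As a sketch, the relevant part of the Service from the question with session affinity disabled might look like this (the selector value is assumed to match the Pod labels; the field can also simply be omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-8243
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: pass-through-https
    port: 443
    protocol: TCP
    targetPort: 8243
  selector:
    name: gateway-default
  # None is the default; omitting the field has the same effect,
  # and kube-proxy will then spread connections across all matching Pods.
  sessionAffinity: None
```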
References:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace
How to use Session Affinity on requests to Kubernetes service?
Your Ingress refers to serviceName values "gateway-443" and "gateway-80", but the actual name specified in the Service's metadata.name is "gateway-8243".
(If this is on purpose, please post the YAML of the other resources so I can take a look at the whole setup.)
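For example, assuming gateway-8243 is the Service this Ingress is meant to target, the backend reference would need to use that exact name:

```yaml
spec:
  rules:
  - host: gateway.example.com
    http:
      paths:
      - backend:
          # Must match the Service's metadata.name in the same namespace.
          serviceName: gateway-8243
          servicePort: 443
        path: /
```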
Also, please take a look at this page, which has lots of good examples of how to achieve what you are looking to do.