
I’m currently trying to deploy the NGINX Ingress Controller on my AWS EKS cluster.
I have 4 nodes running:

NAME                            STATUS   ROLES    AGE     VERSION
ip-10-230-35-48.ec2.internal    Ready    <none>   7h44m   v1.19.6-eks-49a6c0
ip-10-230-39-9.ec2.internal     Ready    <none>   7h42m   v1.19.6-eks-49a6c0
ip-10-230-42-241.ec2.internal   Ready    <none>   7h49m   v1.19.6-eks-49a6c0
ip-10-230-49-228.ec2.internal   Ready    <none>   7h46m   v1.19.6-eks-49a6c0

I’m deploying my ingress-nginx-controller by following the NGINX Ingress Controller installation guide, using the deploy-tls-termination.yaml manifest.

For some reason, the AWS ELB is not marking all the nodes as healthy and reports the following error:

Instance has failed at least the UnhealthyThreshold number of health checks consecutively.

The only node marked as healthy is the node where the ingress-nginx-controller is deployed.

Am I missing something in my YAML configuration file? Or should I deploy one ingress-nginx-controller per Availability Zone? If so, how?

Thank you

2 Answers


  1. Actually, this comes down to how the ingress controller and the ELB interact.
    The ELB recognizes only the node where the ingress controller pod is running; the rest of the nodes are OutOfService. If the ingress-controller pod is rescheduled to another node, the ELB marks that node as InService. You can verify this by deleting the controller pod.

    The recommendation is to use an NLB or ALB load balancer with the ingress controller.
    From Kubernetes version 1.18, NLB will be the default load balancer for ingress.
    Try this tutorial to change the load balancer type.
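
    To illustrate, the ingress-nginx Service can request an NLB through the AWS load balancer type annotation. This is a minimal sketch, assuming the metadata names from the standard ingress-nginx install; adjust them to match your deploy-tls-termination.yaml:

    ```yaml
    # Service fronting the ingress controller. The annotation asks AWS to
    # provision an NLB instead of a Classic ELB.
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    spec:
      type: LoadBalancer
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/component: controller
    ```

    Note that the annotation is only read when the load balancer is first created, so you may need to delete and recreate the Service for the change to take effect.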

  2. This is expected behavior when externalTrafficPolicy is set to Local in the Service (which is what you have). With externalTrafficPolicy: Local, you don’t get any extra hops: once the traffic arrives at a node, it doesn’t leave that node.
    The load balancer will send traffic only to the nodes where the Ingress Controller pods are running. On the other nodes, the health check returns 503 and the node is treated as unhealthy.

    Change the externalTrafficPolicy to Cluster if you want all nodes to be healthy.
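
    A minimal sketch of that change in the Service spec, assuming the standard ingress-nginx metadata names:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      # Cluster: traffic may be forwarded to any node, so every node passes
      # the load balancer health check; the trade-off is that the original
      # client IP is not preserved on the way to the pod.
      externalTrafficPolicy: Cluster
    ```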

    This is generally not recommended, though, as the client’s IP address is then not propagated to the backend Pods. But this is only true for NLBs, not Classic Elastic Load Balancers. So the best option is to use an NLB with the NGINX ingress controller. If you still want all nodes to be healthy, stick with the Local policy and run the controller as a DaemonSet.
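
    With the DaemonSet approach, one controller pod runs on every node, so each node passes the health check even under externalTrafficPolicy: Local. A heavily trimmed sketch; the image tag and labels are illustrative, not taken from the question’s manifest:

    ```yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
        spec:
          containers:
            - name: controller
              # Tag is illustrative; pin to the version your manifest ships.
              image: k8s.gcr.io/ingress-nginx/controller:v0.44.0
              ports:
                - name: http
                  containerPort: 80
                - name: https
                  containerPort: 443
    ```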

    Official documentation around this.
