
I have installed the AWS Load Balancer Controller on the cluster, but when I create a load-balanced Service it provisions a Network Load Balancer that does not work.

apiVersion: v1
kind: Service
metadata:
  name: ######
  namespace: mb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ########
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
spec:
  ports:
    - port: 443
      targetPort: 4546
  selector:
    app: ######
  type: LoadBalancer

When I try to reach the endpoint I get the following response:

curl: (52) Empty reply from server

I have tried disabling the node ports on the service, using an Application Load Balancer, creating an Ingress to route traffic to my service, and updating the target group's route to the node port.

However, the problem persists and I still cannot work out how to create a load balancer that reaches the workload in the EKS Fargate cluster.

Any help is appreciated.

2 Answers


  1. To configure a target group that reaches a service in your EKS Fargate cluster using the AWS Load Balancer Controller, you need the correct annotations and configuration in your Service manifest.

    Here’s an example of a modified Service manifest that should work for your scenario:

    apiVersion: v1
    kind: Service
    metadata:
      name: ######
      namespace: mb
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    spec:
      selector:
        app: YOUR_APP_LABEL
      ports:
        - port: 443
          targetPort: 4546
      type: LoadBalancer
    

    Here are the changes made to the original manifest:

    1. The annotation service.beta.kubernetes.io/aws-load-balancer-type is set to nlb, indicating that a Network Load Balancer (NLB) should be created. This is the correct annotation for using an NLB.

    2. The annotation service.beta.kubernetes.io/aws-load-balancer-backend-protocol is set to http. This is the protocol the Load Balancer will use to communicate with your service.

    3. The annotation service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout is set to 3600 (1 hour). You can adjust this value as per your requirements.

    4. The type field is set to LoadBalancer. This ensures that the service is exposed externally using the Load Balancer.

    Make sure to replace YOUR_APP_LABEL with the appropriate label that matches your application deployment.
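
    For reference, the Service's spec.selector must match the labels on the pod template in your Deployment. A minimal sketch of a matching Deployment (the name, image, and registry here are placeholders for illustration, not taken from your setup):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app              # placeholder name
      namespace: mb
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: YOUR_APP_LABEL   # must match the Service's spec.selector
      template:
        metadata:
          labels:
            app: YOUR_APP_LABEL
        spec:
          containers:
            - name: my-app
              image: my-registry/my-app:latest  # placeholder image
              ports:
                - containerPort: 4546           # must match the Service's targetPort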

    Once you apply this manifest, the AWS Load Balancer controller should create an NLB and configure the target group correctly. It may take some time for the NLB to become fully active and for DNS resolution to occur. After that, you should be able to access your workload through the NLB’s DNS name or IP address.
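
    One way to verify this after applying the manifest (the filename and the placeholders in angle brackets are examples, not values from your cluster; the NLB's DNS name appears in the EXTERNAL-IP column once provisioning finishes):

    # Apply the Service manifest
    kubectl apply -f service.yaml

    # Watch for the controller to assign the NLB's DNS name
    kubectl get svc -n mb -w

    # Check controller-side events if no address appears
    kubectl describe svc <service-name> -n mb

    # Once an external DNS name shows up, test the endpoint
    # (it can take a few minutes to resolve and pass health checks)
    curl -kv https://<nlb-dns-name>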

  2. You need to deploy an NGINX ingress controller to enable traffic to reach your service. Follow this guide: https://github.com/nginxinc/helm-charts
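
    If you go this route, the chart from that repository is typically installed with Helm, roughly as follows (the release name and namespace here are placeholders; check the linked repository for the current chart name and values):

    # Add the NGINX Helm repository and install the ingress controller
    helm repo add nginx-stable https://helm.nginx.com/stable
    helm repo update
    helm install nginx-ingress nginx-stable/nginx-ingress --namespace mb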
