
I have a simple use case: I have four microservices, let's say service-a, service-b, service-c, and service-d.

For the purpose of testing, I want to split traffic based on weight like

  • 40% to service-a
  • 20% to service-b
  • 10% to service-c
  • 30% to service-d

All of them will be accessed over the same path: example.com/
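To illustrate the split I'm after, here's a quick simulation of how requests should be distributed (illustrative only; the real routing would be done by the ingress or mesh, not by application code):

```python
# Simulate the desired weighted split across the four services.
import random
from collections import Counter

services = ["service-a", "service-b", "service-c", "service-d"]
weights = [40, 20, 10, 30]  # percentages, must sum to 100

random.seed(42)  # deterministic for the example
requests = random.choices(services, weights=weights, k=100_000)
counts = Counter(requests)

for svc, weight in zip(services, weights):
    share = 100 * counts[svc] / len(requests)
    print(f"{svc}: {share:.1f}% (target {weight}%)")
```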

I am planning to go with the NGINX Ingress Controller, but I saw a limitation in the documentation: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary

Currently a maximum of one canary ingress can be applied per Ingress rule.

There is a GitHub issue too related to the same: https://github.com/kubernetes/ingress-nginx/issues/5848

I am not able to understand what this actually means, and whether this limitation will prevent me from implementing it with 4 services. Does it mean I have to create 4 canary Ingresses with a single Ingress rule for all 4 services? All the examples of traffic splitting using an ingress controller involve only 2 services. Should I consider Istio instead, since it does not have this limitation?

Can someone please explain this limitation to me with a simple YAML example?

2 Answers


  1. If you need to implement traffic splitting across multiple microservices in AKS using a load balancer like NGINX Ingress Controller, and are facing limitations with the canary configuration, a better approach could be using Istio, a service mesh that provides advanced traffic management capabilities.
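    For context, here is roughly what the canary limitation looks like with the community ingress-nginx controller: a main Ingress is paired with at most one canary Ingress per rule, so a two-way split is the most you can express this way. A sketch (host, paths, and service names below are illustrative):

    ```yaml
    # Main ingress: receives all traffic not diverted to the canary
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: main-ingress
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
    ---
    # Canary ingress: diverts 20% of traffic to service-b.
    # Only ONE canary ingress is honored per rule, so there is no
    # way to add service-c and service-d as further canaries.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: canary-ingress
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "20"
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
    ```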

    Install and set up Istio first.


    Create a Deployment and Service for each microservice (service-a, service-b, service-c, and service-d) using your Docker image (I'll use the nginxdemos/hello image here):

    # deployment-service-a.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: service-a
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: service-a
      template:
        metadata:
          labels:
            app: service-a
        spec:
          containers:
          - name: service-a
            image: nginxdemos/hello
            ports:
            - containerPort: 80
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: service-a
    spec:
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: service-a
    

    Repeat the same for service-b, service-c, and service-d, then apply each manifest with kubectl apply -f <filename>.yaml.


    The next steps involve setting up the Istio Gateway and VirtualService to distribute traffic according to your specified weights. This will route incoming traffic to service-a, service-b, service-c, and service-d at the specified ratios.

    Create Istio Gateway

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: example-gateway
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    

    Create Istio VirtualService

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: my-virtualservice
      namespace: default  # Ensure this is the correct namespace for your services
    spec:
      hosts:
      - "*"  # This can be specific to your domain if needed
      gateways:
      - example-gateway  # Must match the name of the Gateway created above
      http:
      - route:
        - destination:
            host: service-a.default.svc.cluster.local  # Adjust the FQDN as necessary
          weight: 40
        - destination:
            host: service-b.default.svc.cluster.local
          weight: 20
        - destination:
            host: service-c.default.svc.cluster.local
          weight: 10
        - destination:
            host: service-d.default.svc.cluster.local
          weight: 30
    
    
    

    Adjust the namespace, host, and port parameters accordingly, then apply both manifests:

    kubectl apply -f istio-gateway.yaml
    kubectl apply -f istio-virtualservice.yaml
    

    kubectl get svc istio-ingressgateway -n istio-system

    You can now verify that the VirtualService configuration matches your split:

    kubectl describe virtualservice my-virtualservice -n default
    


  2. The docs you mention are from the community ingress controller; however, this is possible using the NGINX Ingress Controller's VirtualServer CRD. See the NGINX Ingress Controller documentation.

    There is a traffic splitting example in the examples folder in the repo, which I have modified slightly to be similar to your example.

    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: example
    spec:
      host: example.com
      upstreams:
      - name: service-a
        service: service-a-svc
        port: 80
      - name: service-b
        service: service-b-svc
        port: 80
      - name: service-c
        service: service-c-svc
        port: 80
      - name: service-d
        service: service-d-svc
        port: 80
      routes:
      - path: /
        splits:
        - weight: 40
          action:
            pass: service-a
        - weight: 20
          action:
            pass: service-b
        - weight: 10
          action:
            pass: service-c
        - weight: 30
          action:
            pass: service-d
    

    In the NGINX Ingress Controller the number of splits is limited to 100, since the CRD currently only allows whole-number weights and the weights must add up to 100.
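    As a sanity check before applying the manifest, you can validate the weight rules (positive whole numbers summing to exactly 100) yourself; a small sketch, where `validate_splits` is just an illustrative helper mirroring the `splits` list above, not part of the controller:

    ```python
    # Illustrative helper: validate VirtualServer-style split weights.
    # Each weight must be a positive whole number, and together they
    # must sum to exactly 100.
    def validate_splits(splits):
        weights = [s["weight"] for s in splits]
        if not all(isinstance(w, int) and w > 0 for w in weights):
            raise ValueError("weights must be positive whole numbers")
        if sum(weights) != 100:
            raise ValueError(f"weights sum to {sum(weights)}, expected 100")
        return True

    splits = [
        {"weight": 40, "action": {"pass": "service-a"}},
        {"weight": 20, "action": {"pass": "service-b"}},
        {"weight": 10, "action": {"pass": "service-c"}},
        {"weight": 30, "action": {"pass": "service-d"}},
    ]
    print(validate_splits(splits))  # True
    ```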
