
I have an AKS cluster with Istio and its external ingress gateway installed. When I create an Ingress resource, I cannot access it through the load balancer IP; the request eventually times out. Everything works fine if I create a VirtualService and Gateway resource instead. Any ideas why that is?

curl -vvvv -H "Host:demo-app-deployment.localdev.me" $INGRESS_HOST:$INGRESS_PORT

Ingress and App Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo-nginx
  name: demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-nginx
  strategy: {}
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: web
        resources: 
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 10m
            memory: 200Mi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-nginx
  name: demo-nginx
spec:
  ports:
  - port: 80
    name: "web"
    protocol: TCP
    targetPort: "web"
  selector:
    app: demo-nginx
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-nginx
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  ingressClassName: istio
  rules:
  - host: demo-app-deployment.localdev.me
    http:
      paths:
      - backend:
          service:
            name: demo-nginx
            port:
              number: 80
        path: /
        pathType: Prefix

Logs of ingressgateway pod:

kubectl logs aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq  -n aks-istio-ingress 
2024-04-28T14:21:39.798546Z     info    FLAG: --concurrency="0"
2024-04-28T14:21:39.798569Z     info    FLAG: --domain="aks-istio-ingress.svc.cluster.local"
2024-04-28T14:21:39.798574Z     info    FLAG: --help="false"
2024-04-28T14:21:39.798576Z     info    FLAG: --log_as_json="false"
2024-04-28T14:21:39.798579Z     info    FLAG: --log_caller=""
2024-04-28T14:21:39.798581Z     info    FLAG: --log_output_level="default:info"
2024-04-28T14:21:39.798583Z     info    FLAG: --log_rotate=""
2024-04-28T14:21:39.798585Z     info    FLAG: --log_rotate_max_age="30"
2024-04-28T14:21:39.798588Z     info    FLAG: --log_rotate_max_backups="1000"
2024-04-28T14:21:39.798590Z     info    FLAG: --log_rotate_max_size="104857600"
2024-04-28T14:21:39.798592Z     info    FLAG: --log_stacktrace_level="default:none"
2024-04-28T14:21:39.798597Z     info    FLAG: --log_target="[stdout]"
2024-04-28T14:21:39.798599Z     info    FLAG: --meshConfig="./etc/istio/config/mesh"
2024-04-28T14:21:39.798613Z     info    FLAG: --outlierLogPath=""
2024-04-28T14:21:39.798615Z     info    FLAG: --profiling="true"
2024-04-28T14:21:39.798618Z     info    FLAG: --proxyComponentLogLevel="misc:error"
2024-04-28T14:21:39.798620Z     info    FLAG: --proxyLogLevel="warning"
2024-04-28T14:21:39.798623Z     info    FLAG: --serviceCluster="istio-proxy"
2024-04-28T14:21:39.798625Z     info    FLAG: --stsPort="0"
2024-04-28T14:21:39.798628Z     info    FLAG: --templateFile=""
2024-04-28T14:21:39.798630Z     info    FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2024-04-28T14:21:39.798634Z     info    FLAG: --vklog="0"
2024-04-28T14:21:39.798637Z     info    Version 1.21-dev-ee6e9f6a314224fd9bcb808ca1d74d1dc66adba8-Clean
2024-04-28T14:21:39.798848Z     warn    failed running ulimit command: 
2024-04-28T14:21:39.799033Z     info    Proxy role      ips=[10.244.1.13] type=router id=aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress domain=aks-istio-ingress.svc.cluster.local
2024-04-28T14:21:39.799097Z     info    Apply proxy config from env {"discoveryAddress":"istiod-asm-1-21.aks-istio-system.svc:15012","tracing":{"zipkin":{"address":"zipkin.aks-istio-system:9411"}},"gatewayTopology":{"numTrustedProxies":1},"image":{"imageType":"distroless"}}

2024-04-28T14:21:39.800779Z     info    cpu limit detected as 2, setting concurrency
2024-04-28T14:21:39.801054Z     info    Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod-asm-1-21.aks-istio-system.svc:15012
drainDuration: 45s
gatewayTopology:
  numTrustedProxies: 1
image:
  imageType: distroless
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.aks-istio-system:9411

2024-04-28T14:21:39.801079Z     info    JWT policy is third-party-jwt
2024-04-28T14:21:39.801086Z     info    using credential fetcher of JWT type in cluster.local trust domain
2024-04-28T14:21:39.801289Z     info    platform detected is Azure
2024-04-28T14:21:39.808311Z     warn    HTTP request unsuccessful with status: 400 Bad Request
2024-04-28T14:21:39.818317Z     info    Workload SDS socket not found. Starting Istio SDS Server
2024-04-28T14:21:39.818345Z     info    CA Endpoint istiod-asm-1-21.aks-istio-system.svc:15012, provider Citadel
2024-04-28T14:21:39.818369Z     info    Using CA istiod-asm-1-21.aks-istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2024-04-28T14:21:39.818890Z     info    Opening status port 15020
2024-04-28T14:21:39.826844Z     info    ads     All caches have been synced up in 28.703074ms, marking server ready
2024-04-28T14:21:39.827082Z     info    xdsproxy        Initializing with upstream address "istiod-asm-1-21.aks-istio-system.svc:15012" and cluster "Kubernetes"
2024-04-28T14:21:39.829173Z     info    Pilot SAN: [istiod-asm-1-21.aks-istio-system.svc]
2024-04-28T14:21:39.830220Z     info    sds     Starting SDS grpc server
2024-04-28T14:21:39.831508Z     info    starting Http service at 127.0.0.1:15004
2024-04-28T14:21:39.831793Z     info    Starting proxy agent
2024-04-28T14:21:39.832055Z     info    Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields --log-format %Y-%m-%dT%T.%fZ        %l      envoy %n %g:%#  %v      thread=%t -l warning --component-log-level misc:error --concurrency 2]
2024-04-28T14:21:39.913967Z     info    xdsproxy        connected to upstream XDS server[1]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T14:21:39.933163Z     info    ads     ADS: new connection for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress-1
2024-04-28T14:21:39.934344Z     info    ads     ADS: new connection for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress-2
2024-04-28T14:21:39.987458Z     info    cache   generated new workload certificate      latency=158.869015ms ttl=23h59m59.012546407s
2024-04-28T14:21:39.987650Z     info    cache   Root cert has changed, start rotating root cert
2024-04-28T14:21:39.987740Z     info    ads     XDS: Incremental Pushing ConnectedEndpoints:2 Version:
2024-04-28T14:21:39.987861Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.012140303s
2024-04-28T14:21:39.988000Z     info    cache   returned workload certificate from cache        ttl=23h59m59.012001502s
2024-04-28T14:21:39.988384Z     info    ads     SDS: PUSH request for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress resources:1 size:4.0kB resource:default
2024-04-28T14:21:39.988477Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.011525197s
2024-04-28T14:21:39.988781Z     info    ads     SDS: PUSH request for node:aks-istio-ingressgateway-external-asm-1-21-848779b5b7-2jjdq.aks-istio-ingress resources:1 size:1.1kB resource:ROOTCA
2024-04-28T14:21:39.988932Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.011069593s
2024-04-28T14:21:40.839485Z     info    Readiness succeeded in 1.049696312s
2024-04-28T14:21:40.839824Z     info    Envoy proxy is ready
2024-04-28T14:49:45.370889Z     info    xdsproxy        connected to upstream XDS server[2]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T15:21:34.608757Z     info    xdsproxy        connected to upstream XDS server[3]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T15:49:47.486824Z     info    xdsproxy        connected to upstream XDS server[4]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T16:22:08.624015Z     info    xdsproxy        connected to upstream XDS server[5]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T16:49:38.978403Z     info    xdsproxy        connected to upstream XDS server[6]: istiod-asm-1-21.aks-istio-system.svc:15012
2024-04-28T17:16:57.062446Z     info    xdsproxy        connected to upstream XDS server[7]: istiod-asm-1-21.aks-istio-system.svc:15012

2 Answers


  1. Istio primarily relies on its own custom resources, Gateway and VirtualService, to manage ingress traffic, so it may not process some Kubernetes Ingress configurations, which leads to timeouts when you try to reach the application through the load balancer IP. For your AKS cluster with Istio to handle ingress at all, first make sure that Istio is installed and that the ingress gateway is deployed in your cluster:

    kubectl get pods -n <istio-system-namespace>
    kubectl get svc -n <istio-system-namespace>
    


    You should see the istio-ingressgateway service, which is typically of type LoadBalancer.

    As an example, deploy a sample nginx application.

    Example Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-demo
      namespace: yournamespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-demo
      template:
        metadata:
          labels:
            app: nginx-demo
        spec:
          containers:
          - name: nginx
            image: nginx:stable
            ports:
            - containerPort: 80
    


    Next, create a Service that points to the nginx Deployment.
    Example Service:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-demo-service
      namespace: yournamespace
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
        name: http
      selector:
        app: nginx-demo
    

    Then create an Istio Gateway and VirtualService to expose the nginx service.
    Example Gateway:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: nginx-gateway
      namespace: yournamespace
    spec:
      selector:
        istio: ingressgateway-external-asm-1-20 # This should match the label of your external gateway.
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    

    Then create the VirtualService.
    Example VirtualService:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: nginx-virtualservice
      namespace: yournamespace
    spec:
      hosts:
      - "*"
      gateways:
      - nginx-gateway
      http:
      - match:
        - uri:
            prefix: /
        route:
        - destination:
            host: nginx-demo-service
            port:
              number: 80
    

    Apply them with kubectl apply -f <filename.yaml>.

    To access your service, you need the external IP of the Istio Ingress Gateway.

    kubectl get svc -n <youristionamespace>
    


    You can now either curl http://<EXTERNAL_IP> directly, or ensure that the hostname (demo-app-deployment.localdev.me in your case) resolves to the external IP of the Istio ingress gateway. You can temporarily add it to your /etc/hosts file:

    57.151.36.51 demo-app-deployment.localdev.me
    

    and test it: curl http://demo-app-deployment.localdev.me
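    As an alternative to editing /etc/hosts, curl's --resolve flag pins a hostname to an IP for a single request. The sketch below demonstrates the flag against a throwaway local Python server rather than the real gateway; against the real gateway you would use its external IP and port 80 (for example: curl --resolve demo-app-deployment.localdev.me:80:57.151.36.51 http://demo-app-deployment.localdev.me).

```shell
# Sketch: pin the hostname to 127.0.0.1 with --resolve instead of
# editing /etc/hosts. A throwaway local server stands in for the gateway.
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1
# --resolve maps host:port to an address without any DNS lookup
status=$(curl -s -o /dev/null -w '%{http_code}' \
  --resolve demo-app-deployment.localdev.me:18080:127.0.0.1 \
  http://demo-app-deployment.localdev.me:18080/)
kill "$server_pid"
echo "$status"
```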

    Other things to check:
    Verify the Service endpoints are ready and that the pod IPs match:

    kubectl get endpoints nginx-demo-service -n <yournamespace>
    


    Check the logs of the ingress gateway pod for errors: kubectl logs <ingress-gateway-pod-name> -n <namespace>. Additionally, make sure the hostname and port you are requesting match what your Gateway and VirtualService define.
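    For comparison, here is a sketch of a Gateway/VirtualService pair scoped to the demo-nginx Service and hostname from the question. The selector label istio: aks-istio-ingressgateway-external is an assumption based on the AKS Istio add-on's defaults for the external gateway; confirm the actual label with kubectl get pods -n aks-istio-ingress --show-labels before applying.

```yaml
# Sketch only -- the gateway selector label below is an assumption based
# on AKS Istio add-on defaults; verify it against your own gateway pods.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: demo-nginx-gateway
spec:
  selector:
    istio: aks-istio-ingressgateway-external
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "demo-app-deployment.localdev.me"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-nginx
spec:
  hosts:
  - "demo-app-deployment.localdev.me"
  gateways:
  - demo-nginx-gateway
  http:
  - route:
    - destination:
        host: demo-nginx   # the Service from the question
        port:
          number: 80
```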

  2. Check this, please: https://medium.com/microsoftazure/cert-manager-and-istio-choosing-ingress-options-for-the-istio-based-service-mesh-add-on-for-aks-c633c97fa4f2. It might give you the answer and a possible solution. Many thanks to Saverio Proto.

    I ran into the identical problem with the Azure Istio add-on. And yes, we need Ingress support for Let's Encrypt certificate generation, because cert-manager creates an Ingress to resolve the Let's Encrypt challenge each time it generates a certificate.
