
I have created an AKS cluster using the following Terraform code:

resource "azurerm_virtual_network" "test" {
  name                = var.virtual_network_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = [var.virtual_network_address_prefix]

  subnet {
    name           = var.aks_subnet_name
    address_prefix = var.aks_subnet_address_prefix
  }

  subnet {
    name           = "appgwsubnet"
    address_prefix = var.app_gateway_subnet_address_prefix
  }

  tags = var.tags
}

data "azurerm_subnet" "kubesubnet" {
  name                 = var.aks_subnet_name
  virtual_network_name = azurerm_virtual_network.test.name
  resource_group_name  = azurerm_resource_group.rg.name
  depends_on           = [azurerm_virtual_network.test]
}

resource "azurerm_kubernetes_cluster" "k8s" {
  name       = var.aks_name
  location   = azurerm_resource_group.rg.location
  dns_prefix = var.aks_dns_prefix

  resource_group_name = azurerm_resource_group.rg.name

  http_application_routing_enabled = false

  linux_profile {
    admin_username = var.vm_user_name

    ssh_key {
      key_data = file(var.public_ssh_key_path)
    }
  }

  default_node_pool {
    name            = "agentpool"
    node_count      = var.aks_agent_count
    vm_size         = var.aks_agent_vm_size
    os_disk_size_gb = var.aks_agent_os_disk_size
    vnet_subnet_id  = data.azurerm_subnet.kubesubnet.id
  }

  service_principal {
    client_id     = local.client_id
    client_secret = local.client_secret
  }

  network_profile {
    network_plugin     = "azure"
    dns_service_ip     = var.aks_dns_service_ip
    docker_bridge_cidr = var.aks_docker_bridge_cidr
    service_cidr       = var.aks_service_cidr
  }

  # Enable Azure AD integration with Kubernetes RBAC for the cluster
  azure_active_directory_role_based_access_control {
    managed                = var.azure_active_directory_role_based_access_control_managed
    admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
    azure_rbac_enabled     = var.azure_rbac_enabled
  }

  oms_agent {
    log_analytics_workspace_id  = module.log_analytics_workspace[0].id
  }

  timeouts {
    create = "20m"
    delete = "20m"
  }  

  depends_on = [data.azurerm_subnet.kubesubnet, module.log_analytics_workspace]
  tags       = var.tags
}

resource "azurerm_role_assignment" "ra1" {
  scope                = data.azurerm_subnet.kubesubnet.id
  role_definition_name = "Network Contributor"
  principal_id         = local.client_objectid
  depends_on           = [data.azurerm_subnet.kubesubnet]
}

and followed the steps below to install Istio, as per the Istio documentation.
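
Before running these, credentials for the new cluster have to be pulled into kubeconfig; a minimal sketch, assuming the resource group and cluster names come from the Terraform variables above:

# Fetch kubeconfig for the new AKS cluster (placeholder names)
az aks get-credentials --resource-group <resource-group-name> --name <aks-name>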

#Prerequisites
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

#create namespace
kubectl create namespace istio-system

# helm install istio-base and istiod
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait

# Check the installation status
helm status istiod -n istio-system

#create namespace and enable istio-injection for envoy proxy containers
kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled

## helm install istio-ingress for traffic management
helm install istio-ingress istio/gateway -n istio-ingress --wait

## Mark the default namespace as istio-injection=enabled
kubectl label namespace default istio-injection=enabled
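
# (Optional sketch) verify the injection labels were applied; -L prints the label value as a column
kubectl get namespace default istio-ingress -L istio-injection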

## Install the App and Gateway
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml

# Check the Services, Pods and Gateway
kubectl get services
kubectl get pods
kubectl get gateway

# Ensure the app is running
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"

and it is responding as shown below

[screenshots: command output confirming the app is responding]

# Check the ingress gateway service
INGRESS_NAME="istio-ingress"
INGRESS_NS="istio-ingress"
kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS"

it returns the external IP as shown below

[screenshot: the istio-ingress service with its external IP]

However, I am not able to access the application:

[screenshot: the application is not reachable in the browser]

Also, I am getting an error when I try to run the following commands to find the ports:

kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'

3 Answers


  1. I tried to reproduce the same in my environment by creating the sample Bookinfo application, and I got the same error.

    [screenshot: the same error when accessing the application]

    To resolve the application issue, follow the steps below.

    Install Istio using the commands below.

    #Prerequisites
    helm repo add istio https://istio-release.storage.googleapis.com/charts
    helm repo update
    
    #create namespace
    kubectl create namespace istio-system
    
    # helm install istio-base and istiod
    
    helm install istio-base istio/base -n istio-system
    helm install istiod istio/istiod -n istio-system --wait
    
    # Check the installation status
    helm status istiod -n istio-system
    
    #create namespace and enable istio-injection for envoy proxy containers
    
    kubectl create namespace istio-ingress
    kubectl label namespace istio-ingress istio-injection=enabled
    
    ## helm install istio-ingress for traffic management
    helm install istio-ingress istio/gateway -n istio-ingress --wait
    
    ## Mark the default namespace as istio-injection=enabled
    kubectl label namespace default istio-injection=enabled
    
    #Install the Application 
    kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml
    

    Check the services and pods using the commands below.

    kubectl get services
    kubectl get pods
    

    [screenshot: the Bookinfo services and pods running]

    Expose the application to the internet using the following command.

    kubectl expose svc productpage --type=LoadBalancer --name=productpage-external --port=9080 --target-port=9080
    

    Check the external IP of the service using the command below.

    kubectl get svc productpage-external
    

    [screenshot: the productpage-external service with its external IP]

    Access the application in the browser using the external IP and port.

    Example URL: http://20.121.165.179:9080/productpage
    

    [screenshot: the Bookinfo product page loading via the external IP and port]
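
    A quick command-line check of the same thing (the IP is this environment's external IP; substitute your own):

    curl -s http://20.121.165.179:9080/productpage | grep -o "<title>.*</title>"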

  2. This is because the ingress gateway's selector label is istio: ingress when Istio is installed with Helm, instead of istio: ingressgateway as when it is installed with istioctl.

    If you modify the Gateway to reflect this, then it should work:

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: bookinfo-gateway
      namespace: default
    spec:
      selector:
        istio: ingress
    ...
    

    One way to surface this (without knowing about the issue beforehand) is istioctl analyze:

    $ istioctl analyze
    Error [IST0101] (Gateway default/bookinfo-gateway) Referenced selector not found: "istio=ingressgateway"
    Error: Analyzers found issues when analyzing namespace: default.
    See https://istio.io/v1.16/docs/reference/config/analysis for more information about causes and resolutions.
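
    As an alternative to editing the YAML, the selector can also be patched in place; a sketch, using the gateway name and namespace from the Bookinfo sample (the fully qualified resource name avoids ambiguity with the Kubernetes Gateway API kind):

    kubectl -n default patch gateway.networking.istio.io bookinfo-gateway --type merge -p '{"spec":{"selector":{"istio":"ingress"}}}'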
    
  3. This is because you have hit the general behavior where the istio- prefix is stripped: with the step-by-step Helm installation, the release name istio-ingress produces the label istio: ingress. So either install the gateway under the release name istio-ingressgateway so that the label matches the sample Gateway's selector, or change the Gateway's selector to match the stripped label.
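
    A quick way to confirm which label the Helm release actually produced (a sketch, using the release and namespace names from the question):

    kubectl -n istio-ingress get svc istio-ingress -o jsonpath='{.spec.selector}'
    kubectl -n istio-ingress get pods --show-labels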
