I am running Tanzu Community Edition (tce-linux-amd64-v0.12.1) on an Ubuntu virtual machine. I have set up an unmanaged cluster on this single node, and it is running fine. I have also deployed a demo application and installed the MetalLB load balancer to expose it. However, when I access the application through the load balancer IP, I get a connection failure:

$ curl http://172.17.60.122/hello
curl: (7) Failed to connect to 172.17.60.122 port 80: No route to host

Below are my pod and service details:

$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-7c896cddcb-tscj8   1/1     Running   0          26h

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
service/demo-service   LoadBalancer   10.96.152.201   172.17.60.122   80:30471/TCP   26h
service/kubernetes     ClusterIP      10.96.0.1       <none>          443/TCP        33d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   1/1     1            1           26h

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-7c896cddcb   1         1         1       26h

Below are my node details:

$ kubectl get nodes -o wide
NAME                     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
beepboop-control-plane   Ready    control-plane,master   33d   v1.22.7   172.19.0.2    <none>        Ubuntu 21.10   5.4.0-115-generic   containerd://1.5.10

My virtual machine’s IP address is 172.17.60.103:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:4c:5a:fe brd ff:ff:ff:ff:ff:ff
    inet 172.17.60.103/24 brd 172.17.60.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe4c:5afe/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:93:f7:dc:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:93ff:fef7:dcf2/64 scope link
       valid_lft forever preferred_lft forever
4: br-908920d3a805: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ff:a9:9e:49 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-908920d3a805
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ffff:fea9:9e49/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever
6: vetha369654@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-908920d3a805 state UP group default
    link/ether 56:6f:69:00:93:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::546f:69ff:fe00:93a4/64 scope link
       valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0

It looks like a network-related issue: the node's internal IP is 172.19.0.2 instead of 172.17.60.103, so the node is taking an IP from the bridge network created by Tanzu rather than my virtual machine's IP.
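
If I understand correctly, the node beepboop-control-plane is actually a Docker container attached to that br-908920d3a805 bridge (kind normally creates a Docker network named kind, and I assume unmanaged-cluster uses it too). Something like the following should confirm the container and its subnet:

$ docker ps --format '{{.Names}}\t{{.Networks}}'
$ docker network inspect kind --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'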

I do not understand how the Tanzu network is working here. Any help is appreciated.

2 Answers


  1. Note that there are some networking limitations when deploying unmanaged-cluster to a Windows host: https://tanzucommunityedition.io/docs/v0.12/ref-unmanaged-cluster/#deploy-to-windows

    The underlying technology is kind, and unmanaged-cluster also allows kind configurations to be dropped into ProviderConfiguration:

    ProviderConfiguration:
      rawKindConfig: |
        nodes:
        - role: control-plane
          extraPortMappings:
          - containerPort: 888
            hostPort: 888
            listenAddress: "127.0.0.1"
            protocol: TCP

    This will deploy the unmanaged-cluster with the specified kind networking configuration.
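
    For reference, a complete cluster config carrying this block might look like the sketch below. The surrounding field names (ClusterName, Provider) and the -f flag are from memory of the unmanaged-cluster config format, so verify them against the docs and `tanzu unmanaged-cluster create --help`:

    ClusterName: beepboop
    Provider: kind
    ProviderConfiguration:
      rawKindConfig: |
        nodes:
        - role: control-plane
          extraPortMappings:
          - containerPort: 888
            hostPort: 888
            listenAddress: "127.0.0.1"
            protocol: TCP

    $ tanzu unmanaged-cluster create -f beepboop.yaml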

    Be sure to familiarize yourself with our docs and also with the kind load balancing docs: https://kind.sigs.k8s.io/docs/user/loadbalancer/
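
    In your case the symptom matches what those kind docs describe: the MetalLB address pool needs to come from the kind node network (172.19.0.0/16 in your ip a output), not from the host LAN (172.17.60.0/24), because MetalLB announces the address on the node's network. Below is a sketch of such a pool, assuming the legacy ConfigMap-based MetalLB configuration shown in the kind docs (MetalLB v0.12 and earlier); the exact range is just an arbitrary slice of that subnet:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 172.19.255.200-172.19.255.250

    With a pool from that subnet, the LoadBalancer IP should be reachable from the VM itself, since the host can route to the Docker bridge, though not from other machines on the LAN without additional routing or port mappings.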

    What configuration did you use to deploy your unmanaged-cluster?

  2. I would also recommend asking this question in the TCE Community Channel on the Kubernetes Slack – https://kubernetes.slack.com/archives/C02GY94A8KT
