
I am having problems trying to get communication between two services in a kubernetes cluster. We are using a kong ingress object as an ‘api gateway’ to route http calls from a simple Angular frontend to a .NET Core 3.1 API controller backend.

In front of these two ClusterIP services sits an ingress controller that accepts external http(s) calls to our kubernetes cluster and routes them to the frontend service. This ingress is shown here:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.***.*******.com     <<  Obfuscated
      http:
        paths:
            - path: /
              backend:
                serviceName: frontend-service
                servicePort: 80
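
If it helps, the rule above can be sanity-checked from outside the cluster with a plain curl against the (obfuscated) host, which should return the Angular app's index page:

curl -v http://app.***.*******.com/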

The first service is called ‘frontend-service’, a simple Angular 9 frontend that allows me to type in http strings and submit those strings to the backend.
The manifest yaml file for this is shown below. Note that the image name is obfuscated for various reasons.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: kong
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: frontend
        image: ***********/*******************:****  << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: kong
  name: frontend-service
spec:
  type: ClusterIP  
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

The second service is a simple .NET Core 3.1 API interface that prints back some text when the controller is reached. The backend service is called ‘dataapi’ and has one simple Controller in it called ValuesController.

The manifest yaml file for this is shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dataapi
  template:
    metadata:
      labels:
        app: dataapi
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: dataapi
        image: ***********/*******************:****  << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  selector:
    app: dataapi
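
To take Kong out of the picture while debugging, the controller can also be hit directly through a port-forward to this service (a sketch; the /api/values path is assumed from the ValuesController naming):

kubectl port-forward -n kong svc/dataapi 8080:80
curl http://localhost:8080/api/values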

We are using a kong ingress as a proxy to redirect incoming http calls to the dataapi service. This manifest file is shown below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80
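
Because kong-proxy is exposed as a LoadBalancer (see the ‘kubectl get all’ output below), the same route can also be exercised from outside the cluster against its EXTERNAL-IP (sketch; IP obfuscated as in the output below):

curl -v http://XX.XX.XX.XX/dataapi/api/values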

Performing a ‘kubectl get all’ produces the following output:

kubectl get all

NAME                                READY   STATUS    RESTARTS   AGE
pod/dataapi-dbc8bbb69-mzmdc         1/1     Running   0          2d2h
pod/frontend-5d5ffcdfb7-kqxq9       1/1     Running   0          65m
pod/ingress-kong-56f8f44fd5-rwr9j   2/2     Running   0          6d

NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
service/dataapi                   ClusterIP      10.128.72.137    <none>         80/TCP,443/TCP               2d2h
service/frontend-service          ClusterIP      10.128.44.109    <none>         80/TCP                       2d
service/kong-proxy                LoadBalancer   10.128.246.165   XX.XX.XX.XX    80:31289/TCP,443:31202/TCP   6d
service/kong-validation-webhook   ClusterIP      10.128.138.44    <none>         443/TCP                      6d

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dataapi        1/1     1            1           2d2h
deployment.apps/frontend       1/1     1            1           2d
deployment.apps/ingress-kong   1/1     1            1           6d

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/dataapi-dbc8bbb69         1         1         1       2d2h
replicaset.apps/frontend-59bf9c75dc       0         0         0       25h
replicaset.apps/ingress-kong-56f8f44fd5   1         1         1       6d

and ‘kubectl get ingresses’ gives:

NAME            CLASS    HOSTS (Obfuscated)                                                       ADDRESS        PORTS   AGE
ingress-nginx   <none>   ***.******.com,**.********.com,**.****.com,**.******.com + 1 more...     xx.xx.xxx.xx   80      6d
kong-gateway    kong     *                                                                        xx.xx.xxx.xx   80      2d2h

From the frontend, the expectation is that constructing the http string:

http://kong-proxy/dataapi/api/values

will enter our ‘values’ controller in the backend and return the text string from that controller.

Both services are running on the same kubernetes cluster, here on Linode. Our thinking is that this is ‘within cluster’ communication between two services, both of type ClusterIP.

The error reported in the Chrome console is:

zone-evergreen.js:2828 GET http://kong-proxy/dataapi/api/values net::ERR_NAME_NOT_RESOLVED

Note that we found a StackOverflow issue similar to ours, and the suggestion there was to add ‘default.svc.cluster.local’ to the http string as follows:

http://kong-proxy.default.svc.cluster.local/dataapi/api/values

This did not work. We also substituted kong, which is the namespace of the service, for default like this:

http://kong-proxy.kong.svc.cluster.local/dataapi/api/values

yielding the same errors as above.
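
A quick way to check whether these names resolve at all from inside the cluster (as opposed to from the browser) is something like the following (a sketch, assuming the frontend image ships a shell and curl):

kubectl exec -n kong -it frontend-5d5ffcdfb7-kqxq9 -- \
  curl -v http://kong-proxy.kong.svc.cluster.local/dataapi/api/values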

Is there a critical step I am missing? Any advice is greatly appreciated!

*************** UPDATE From Eric Gagnon’s Response(s) **************

Again, thank you Eric for responding. Here is what my colleague and I have tried per your suggestions:

  1. Pod dns misconfiguration: check if the pod’s first nameserver equals the ‘kube-dns’ svc ip and if its search path starts with kong.svc.cluster.local:
kubectl exec -i -t -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- cat /etc/resolv.conf

nameserver 10.128.0.10
search kong.svc.cluster.local svc.cluster.local cluster.local members.linode.com
options ndots:5

kubectl get -n kube-system svc 

NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.128.0.10   <none>        53/UDP,53/TCP,9153/TCP   55d

kubectl describe -n kube-system svc kube-dns

Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       lke.linode.com/caplke-version: v1.19.9-001
                   prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.128.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.2.4.10:53,10.2.4.14:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.2.4.10:53,10.2.4.14:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.2.4.10:9153,10.2.4.14:9153
Session Affinity:  None
Events:            <none>    
  2. App not using pod dns: in Node, output dns.getServers() to the console.
I do not understand where and how to do this. We tried to set DNS directly inside our Angular frontend app, but found that this is not possible.
  3. Kong-proxy doesn’t like something: set logging to debug, hit the app a bunch of times, and grep the logs.

We have tried two tests here. First, is our kong-proxy service reachable through an ingress controller? Note that this is not our simple frontend app; it is nothing more than a proxy that passes an http string to a public gateway we have set up. This does work. We have exposed it as:

http://gateway.cwg.stratbore.com/test/api/test

["Successfully pinged Test controller!!"]

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"

So this works.

But when we try to do it from a simple frontend interface running in the same cluster as our backend:

[screenshot: the frontend interface with the http string entered in its text box]

it does not work with the text shown in the text box. This command does not add anything new:

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

The front end comes back with an error.

But if we do add this http text:

[screenshot: the frontend interface with a different http string entered]

The kong-ingress pod is hit:

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
10.2.4.11 - - [17/Apr/2021:16:55:50 +0000] "GET /test/api/test HTTP/1.1" 200 52 "http://app-basic.cwg.stratbore.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"

but the frontend gets an error back.

So at this point, we have tried a lot of things to get our frontend app to successfully send an http request to our backend and get a response back, without success. I have also tried various configurations of the nginx.conf file that is packaged with our frontend app, but no luck there either.
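
For reference, one kind of nginx.conf change along those lines is a location block that lets the frontend’s own nginx proxy API calls to the in-cluster service, so the browser never has to resolve cluster DNS itself (a sketch only; the service name is taken from the manifests above, and the trailing slash on proxy_pass strips the /dataapi prefix before forwarding):

location /dataapi/ {
    proxy_pass http://dataapi.kong.svc.cluster.local:80/;
}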

I am about to package all of this up in a github project. Thanks.

2 Answers


  1. Chosen as BEST ANSWER

    After a lot of help from Eric G (thank you!) on this, and reading this previous StackOverflow post, I finally solved the issue. As the answer in that link illustrates, our frontend pod was serving up our application in a web browser, which knows NOTHING about Kubernetes clusters.

    As the link suggests, we added another rule to our nginx ingress to route our http requests to the proper service:

        - host: gateway.*******.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: gateway-service
                    port:
                      number: 80
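
    A quick way to verify the new rule before wiring it into the app is a plain curl against the new host from outside the cluster (sketch; hostname obfuscated as above):

    curl -v http://gateway.*******.com/api/name_of_controller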
    

    Then from our Angular frontend, we sent our HTTP requests as follows:

    ...
    http.get<string>("http://gateway.*******.com/api/name_of_controller");
    ...
    

    And we were finally able to communicate with our backend service the way we wanted. Both frontend and backend in the same Kubernetes Cluster.


  2. Chris,

    I haven’t used linode or kong and don’t know what your frontend actually does, so I’ll just point out what I can see:

    • The simplest dns check is to curl (or ping, dig, etc.) the service name from a pod inside the cluster; the test pod manifest below is one way to do that.

    • default path matching on nginx ingress controller is pathPrefix, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites to /. This may not be an issue if you properly specify all your ingresses so they take priority over "/".

    • you said ‘using a kong ingress as a proxy to redirect incoming’, just want to make sure you’re proxying (not redirecting the client).

    • Is chrome just relaying its upstream error from frontend-service? An external client shouldn’t be able to resolve the cluster’s urls (unless you’ve joined your local machine to the cluster’s network or done some other fancy trick). By default, dns only works within the cluster.

    • cluster dns generally follows [service name].[namespace name].svc.cluster.local. If cluster dns is working, then using curl, ping, wget, etc. from a pod in the cluster and pointing it to that svc will send it to the cluster svc ip, not an external ip.

    • is your dataapi service configured to respond to /dataapi/api/values or does it not care what the uri is?

    If you don’t have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace, and curl the service dns and the pod ip’s directly:

    apiVersion: v1
    kind: Pod
    metadata:
      name: curl-test
      namespace: kong
    spec:
      containers:
      - name: curl-test
        image: buildpack-deps
        imagePullPolicy: Always
        command:
        - "curl"
        - "-v"
        - "http://dataapi:80/dataapi/api/values"
      #nodeSelector:
      #  kubernetes.io/hostname: [a more different node's hostname]
    

    The pod should attempt dns resolution from inside the cluster. So it should find dataapi’s svc ip and curl port 80, path /dataapi/api/values. Service IPs are virtual, so they aren’t actually ‘reachable’. Instead, iptables routes them to the pod ip, which has an actual network endpoint and IS addressable.
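
    To see the pod IP(s) sitting behind that virtual service IP, something like this works:

    kubectl get endpoints -n kong dataapi
    kubectl get pods -n kong -o wide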

    Once it completes, just check the logs (kubectl logs -n kong curl-test), and then delete the pod.

    If this fails, the nature of the failure in the logs should tell you if it’s a dns or link issue. If it works, then you probably don’t have a cluster dns issue. But it’s possible you have an inter-node communication issue. To test this, you can run the same manifest as above, but uncomment the node selector field to force it to run on a different node than your kong-proxy pod. It’s a manual method, but it’s quick for troubleshooting. Just rinse and repeat as needed for other nodes.
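
    To pick hostnames for that nodeSelector, the node names and the node the kong-proxy pod currently runs on can be listed with:

    kubectl get nodes
    kubectl get pod -n kong ingress-kong-56f8f44fd5-rwr9j -o wide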

    Of course, it may not be any of this, but hopefully this helps troubleshoot.
