
I’m struggling to expose a service in an AWS cluster to the outside world and access it via a browser. Since my previous question hasn’t drawn any answers, I’ve decided to simplify the issue in several respects.

First, I’ve created a deployment that should work without any extra configuration. Based on this article, I did the following:

  1. kubectl create namespace tests

  2. created the file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-kubernetes-first
    spec:
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: hello-kubernetes-first
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-kubernetes-first
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-kubernetes-first
      template:
        metadata:
          labels:
            app: hello-kubernetes-first
        spec:
          containers:
          - name: hello-kubernetes
            image: paulbouwer/hello-kubernetes:1.8
            ports:
            - containerPort: 8080
            env:
            - name: MESSAGE
              value: Hello from the first deployment!
    
  3. created ingress.yaml and applied it (kubectl apply -f .\probes\ingress.yaml -n tests)

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-kubernetes-ingress
    spec:
      rules:
      - host: test.projectname.org
        http:
          paths:
          - pathType: Prefix
            path: "/test"
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80
      - host: test2.projectname.org
        http:
          paths:
          - pathType: Prefix
            path: "/test2"
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80
      ingressClassName: nginx
    

Second, I can see that DNS actually points to the cluster and that the ingress rules are applied:

  • if I open http://test.projectname.org/test or any irrelevant path (http://test.projectname.org/test3), I’m shown NET::ERR_CERT_AUTHORITY_INVALID, but
  • if I use "open anyway" in the browser, irrelevant paths give ERR_TOO_MANY_REDIRECTS, while http://test.projectname.org/test gives Cannot GET /test

Now, TLS issues aside (those deserve a separate question), why do I get Cannot GET /test? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn’t discriminate between paths; that’s why I don’t show the DNS settings, although they are described in the previous question), but instead of serving the simple hello-kubernetes page at /test it returns this bare 404 message. Why is that? What could be going wrong? How can I debug this?
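For the record, besides what’s listed below, the only generic checks I know of are describing the Ingress and reading the controller’s logs (the controller lives in the nginx namespace here), roughly:

    kubectl describe ingress hello-kubernetes-ingress -n tests
    kubectl get pods -n nginx
    kubectl logs -n nginx <ingress-controller-pod-name>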

Some debug info:

  • kubectl version --short reports Client Version v1.21.5 and Server Version v1.20.7-eks-d88609

  • kubectl get ingress -n tests shows that hello-kubernetes-ingress does exist, with the nginx class, the 2 expected hosts, and an address equal to the one shown for the load balancer in the AWS console

  • kubectl get all -n tests shows

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/hello-kubernetes-first-6f77d8ff99-gjw5d   1/1     Running   0          5h4m
    pod/hello-kubernetes-first-6f77d8ff99-ptwsn   1/1     Running   0          5h4m
    pod/hello-kubernetes-first-6f77d8ff99-x8w87   1/1     Running   0          5h4m
    
    NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service/hello-kubernetes-first   ClusterIP   10.100.18.189   <none>        80/TCP    5h4m
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/hello-kubernetes-first   3/3     3            3           5h4m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/hello-kubernetes-first-6f77d8ff99   3         3         3       5h4m
    
  • ingress-nginx was installed before I got involved, via the following chart:

    apiVersion: v2
    name: nginx
    description: A Helm chart for Kubernetes
    type: application
    version: 4.0.6
    appVersion: "1.0.4"
    dependencies:
    - name: ingress-nginx
      version: 4.0.6
      repository: https://kubernetes.github.io/ingress-nginx
    

    and the values overrides applied with the chart differ from the upstream defaults (well, those have been updated since the installation) mostly in that, under extraArgs, default-ssl-certificate: "nginx-ingress/dragon-family-com" is uncommented
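    Roughly, that override amounts to the following (a sketch assuming the standard ingress-nginx chart layout, with values nested under the dependency name as the wrapper chart requires):

    ingress-nginx:
      controller:
        extraArgs:
          # fallback certificate served for hosts without a matching TLS section
          default-ssl-certificate: "nginx-ingress/dragon-family-com"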

PS To answer Andrew: I did indeed try to set up HTTPS, but it seemingly didn’t help, so I hadn’t included what I tried in the initial question. Here’s what I did:

  1. installed cert-manager, currently without a custom chart: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml

  2. based on cert-manager’s tutorial and an SO question, created a ClusterIssuer with the following config:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-backoffice
    
    spec:
      acme:
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        # use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
        privateKeySecretRef: # this secret will be created in the namespace of cert-manager
          name: letsencrypt-backoffice-private-key
        # email: <will be used for urgent alerts about expiration etc>
    
        solvers:
        # TODO: add for each domain/second-level domain/*.projectname.org
        - selector:
            dnsZones:
              - test.projectname.org
              - test2.projectname.org
          # haven't made it work yet, so switched to the simpler-to-configure http01 challenge
          # dns01:
          #   route53:
          #     region: ... # that of load balancer (but we also have ...)
          #     accessKeyID: <of IAM user with access to Route53>
          #     secretAccessKeySecretRef: # created that
          #       name: route53-credentials-secret
          #       key: secret-access-key
          #     role: arn:aws:iam::645730347045:role/cert-manager
          http01:
            ingress:
              class: nginx
    

    and applied it via kubectl apply -f issuer.yaml

  3. created 2 certificates in the same file and applied it again:

    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: letsencrypt-certificate
    spec:
      secretName: tls-secret
      issuerRef:
        kind: ClusterIssuer
        name: letsencrypt-backoffice
      commonName: test.projectname.org
      dnsNames:
      - test.projectname.org
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: letsencrypt-certificate-2
    spec:
      secretName: tls-secret-2
      issuerRef:
        kind: ClusterIssuer
        name: letsencrypt-backoffice
      commonName: test2.projectname.org
      dnsNames:
      - test2.projectname.org
    
  4. made sure that the certificates were issued correctly (skipping the painful part; the result is that kubectl get certificates shows both certificates with READY = true and both TLS secrets created)

  5. figured out that my ingress is in another namespace and that TLS secrets referenced in an ingress spec must live in the same namespace (I haven’t tried a wildcard certificate or the --default-ssl-certificate option yet), so I copied each secret to the tests namespace (a quicker way to do such a copy is sketched after this list):

    1. opened the existing secret (e.g. kubectl edit secret tls-secret-2) and copied its data and annotations
    2. created an empty (Opaque) secret in tests: kubectl create secret generic tls-secret-2-copy -n tests
    3. opened it (kubectl edit secret tls-secret-2-copy -n tests) and inserted the data and annotations
  6. in the ingress spec, added the tls section:

    tls:
    - hosts:
      - test.projectname.org
      secretName: tls-secret-copy
    - hosts:
      - test2.projectname.org
      secretName: tls-secret-2-copy
    
  7. I hoped that this would help, but actually it made no difference (I still get ERR_TOO_MANY_REDIRECTS for irrelevant paths, a redirect from http to https, NET::ERR_CERT_AUTHORITY_INVALID over https, and Cannot GET /test if I insist on getting to the page)
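As for the quicker way to copy a secret mentioned in step 5: assuming the original secrets ended up in the default namespace and jq is available, something along these lines avoids the manual editing (the copy keeps its original name unless you also rename it):

    kubectl get secret tls-secret-2 -n default -o json \
      | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
      | kubectl apply -n tests -f -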

2 Answers


  1. Chosen as BEST ANSWER

    Well, I haven't figured this out for ArgoCD yet (edit: I have now, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it to the load balancer via an A record (an "alias" in the AWS console), and then added a new rule with the / path to the ingress, like this:

      - host: test3.projectname.org
        http:
          paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80
    

    I finally got the hello-kubernetes page at http://test3.projectname.org. I also succeeded with TLS after a number of attempts, some research, and some help in a separate question.

    But I haven't succeeded with actual debugging: looking at kubectl logs -n nginx <pod name, see kubectl get pod -n nginx> doesn't really help me understand what path was passed to the service, and the output is rather difficult to read (I can't even tell where those IPs come from: they are not mine, the LB's, or the cluster IP of the service; nor do I understand what tests-hello-kubernetes-first-80 stands for: it's just a concatenation of namespace, service name and port, and no object, including the ingress, has such a name):

    192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
     "-" "<browser's user-agent header value>" 448 0.002
     [tests-hello-kubernetes-first-80] [] 192.168.49.95:8080 144 0.000 404 <some hash>
    

    Any more pointers on debugging would be helpful; suggestions regarding correct path rewriting for nginx-ingress are also welcome.


  2. Since you’ve used your own answer to complement the question, I’ll try to answer everything you asked, while providing a divide-and-conquer strategy for troubleshooting Kubernetes networking.

    At the end I’ll give you some nginx and IP answers.

    This is correct:

    - host: test3.projectname.org
      http:
        paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: hello-kubernetes-first
              port:
                number: 80
    

    Breaking down troubleshooting with Ingress

    1. DNS
    2. Ingress
    3. Service
    4. Pod
    5. Certificate

    1. DNS

    You can use the dig command to query DNS:

    dig google.com
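    In this case, the records to check are the test hostnames themselves; they should come back pointing at the load balancer’s address (hostnames taken from the question):

    dig test3.projectname.org
    # or just the answer section:
    dig +short test3.projectname.org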

    2. Ingress

    The ingress controller doesn’t look at the IP; it just looks at the headers.

    You can force a host using any tool that lets you set headers, like curl:

    curl --header 'Host: test3.projectname.org' http://123.123.123.123 (the load balancer’s public IP)

    3. Service

    You can make sure that your service is working by creating an ubuntu/centos pod, using kubectl exec -it podname -- bash, and trying to curl your service from within the cluster.
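    For example (a sketch; any image with a shell works, and the Service DNS name follows the usual <service>.<namespace>.svc.cluster.local convention):

    # throwaway pod with an interactive shell, removed on exit
    kubectl run tmp-shell -n tests --rm -it --image=ubuntu -- bash
    # inside the pod: install curl, then hit the Service by its cluster DNS name
    apt-get update && apt-get install -y curl
    curl -v http://hello-kubernetes-first.tests.svc.cluster.local/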

    4. Pod

    You’re getting this:

    192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
     "-" "<browser's user-agent header value>" 448 0.002
    

    This part, GET /test2, means that the request got the address from DNS, went all the way from the internet, found your cluster, found your ingress controller, got through the service and reached your pod. Congrats! Your ingress is working!

    But why is it returning 404?

    The path that was passed to the service and from the service to the pod is /test2

    Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?

    That’s why you’re getting the 404 from the backend in the pod, not from the ingress controller.
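    If you actually want /test on the ingress to reach the pod as /, the usual tool is ingress-nginx’s rewrite-target annotation with a capture group; here’s a sketch along the lines of the ingress-nginx docs (adjust hosts and prefixes to yours):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-kubernetes-ingress
      annotations:
        nginx.ingress.kubernetes.io/use-regex: "true"
        # whatever the second capture group matches becomes the path sent to the pod
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      ingressClassName: nginx
      rules:
      - host: test.projectname.org
        http:
          paths:
          - pathType: ImplementationSpecific
            path: /test(/|$)(.*)
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80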

    Those IPs are internal; remember, the internet traffic ended at the cluster border, and now you’re in an internal network. Here’s a rough sketch of what’s happening.

    Let’s say that you’re accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home connection has the public address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.org.

    The cluster hands it to the ingress controller, which finds a suitable configuration and passes the request down to the service, which passes it down to the pod.

    So,

    Your router can see your private IP (192.168.123.123)
    Your cluster (ingress) can see your router's IP (7.8.9.1)
    Your service can see the ingress's IP (192.168.?.?)
    Your pod can see the service's IP (192.168.14.57)
    

    It’s a game of pass-around.
    If you want to see the public IP in your nginx logs, you need to configure it to use the X-Real-IP header, which is usually where load balancers/ingresses/ambassador/proxies put the actual requester’s public IP.
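    Since your controller was installed through the wrapper Helm chart from the question, the least intrusive place for that is probably the chart’s controller.config values, which populate the controller’s ConfigMap (a sketch; option names per the ingress-nginx docs, and whether they are sufficient depends on how the AWS load balancer in front is configured):

    ingress-nginx:
      controller:
        config:
          # trust X-Forwarded-For / X-Real-IP headers set by the load balancer in front
          use-forwarded-headers: "true"
          compute-full-forwarded-for: "true"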
