
Hi, I’m learning Helm and Kubernetes, and I’m stuck: I can’t access my app from outside the cluster.
I’m using minikube v1.32.0 with the Docker driver.

I tried LoadBalancer and NodePort services, and minikube tunnel. Every time I get a 404 or a connection timeout.

This is my app’s chart:

# Source: app-chart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-app-chart
  labels:
    helm.sh/chart: app-chart-0.1.0
    app.kubernetes.io/name: app-chart
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: app-chart/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  allocateLoadBalancerNodePorts: false
  selector:
    app: helmapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8080
      # Port to forward to inside the pod
      targetPort: 8080
  type: LoadBalancer

My deployment:

# Source: app-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-app-chart
  labels:
    helm.sh/chart: app-chart-0.1.0
    app.kubernetes.io/name: app-chart
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: app-chart
      app.kubernetes.io/instance: release-name
      app: helmapp
  template:
    metadata:
      labels:
        helm.sh/chart: app-chart-0.1.0
        app.kubernetes.io/name: app-chart
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app: helmapp
    spec:
      serviceAccountName: release-name-app-chart
      securityContext:
        {}
      containers:
        - image: "myrepo/helm-app:latest"
          name: app-chart
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            {}

My ingress:

# Source: app-chart/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-app-chart
  labels:
    helm.sh/chart: app-chart-0.1.0
    app.kubernetes.io/name: app-chart
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: spring
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080

My app is a simple Spring Boot project with an endpoint that returns a 200 status code.

    @GetMapping("/health")
    public ResponseEntity<String> healthCheck() {
        return ResponseEntity.ok("OK");
    }


2 Answers


  1. Chosen as BEST ANSWER

    Thanks to everyone, I solved my problem. The error was in my image: the cluster was pulling an old version of it, so building and pushing a new one fixed it. Separately, the tunnel started working once I changed my Docker password ( https://github.com/kubernetes/minikube/issues/11580#issuecomment-1337288420 )
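    For anyone else hitting the same stale-image problem, here is a sketch of one way to avoid it. The image name comes from the question; the tag and the `image.tag` value are made up and assume the chart parameterizes the image tag:

```shell
# Build and push under a fresh, unique tag instead of reusing "latest",
# so the cluster cannot keep serving a cached image.
docker build -t myrepo/helm-app:1.0.1 .
docker push myrepo/helm-app:1.0.1

# Upgrade the release to point at the new tag (assumes the chart exposes
# an image.tag value; adjust to however your chart parameterizes the image).
helm upgrade release-name ./app-chart --set image.tag=1.0.1
```

    Using immutable tags also makes rollbacks with `helm rollback` predictable, since each revision pins an exact image.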


  2. Minikube requires minikube tunnel in order to assign an IP that lets you reach a LoadBalancer service from outside the cluster.

    Once the tunnel is running in a separate terminal window, your service should get an external IP assigned, and then you can use localhost:{port}/{path} to reach it.
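    A quick sketch of that workflow (the service name and port come from your chart, the /health path from your code; the tunnel may prompt for your sudo password):

```shell
# Terminal 1: start the tunnel and leave it running.
minikube tunnel

# Terminal 2: verify the service now shows an EXTERNAL-IP.
kubectl get svc app-service

# Then reach the app through the tunnel on the service port.
curl http://localhost:8080/health
```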


    I took your YAML and deployed it in order to test whether the setup was correct. (I swapped your image for the stock tomcat image to check whether the problem is your image or your config; Tomcat also listens on 8080.)

    # Source: app-chart/templates/serviceaccount.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: release-name-app-chart
      labels:
        helm.sh/chart: app-chart-0.1.0
        app.kubernetes.io/name: app-chart
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    automountServiceAccountToken: true
    ---
    # Source: app-chart/templates/service.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: app-service
    spec:
      allocateLoadBalancerNodePorts: false
      selector:
        app: helmapp
      ports:
        - protocol: "TCP"
          # Port accessible inside cluster
          port: 8080
          # Port to forward to inside the pod
          targetPort: 8080
      type: LoadBalancer
    ---
    # Source: app-chart/templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: release-name-app-chart
      labels:
        helm.sh/chart: app-chart-0.1.0
        app.kubernetes.io/name: app-chart
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: app-chart
          app.kubernetes.io/instance: release-name
          app: helmapp
      template:
        metadata:
          labels:
            helm.sh/chart: app-chart-0.1.0
            app.kubernetes.io/name: app-chart
            app.kubernetes.io/instance: release-name
            app.kubernetes.io/version: "1.16.0"
            app.kubernetes.io/managed-by: Helm
            app: helmapp
        spec:
          serviceAccountName: release-name-app-chart
          securityContext:
            {}
          containers:
            - image: "tomcat:latest"
              name: app-chart
              ports:
                - containerPort: 8080
                  protocol: TCP
              resources:
                {}
    

    This gives me a pod like this:

    $ kubectl get pods
    NAME                                      READY   STATUS    RESTARTS   AGE
    release-name-app-chart-64c7c5d558-4j4rd   1/1     Running   0          94s
    

    And also a service:

    $ kubectl get svc
    NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    app-service   LoadBalancer   10.105.50.32   127.0.0.1     8080/TCP   5m17s
    kubernetes    ClusterIP      10.96.0.1      <none>        443/TCP    7m33s
    

    The "EXTERNAL-IP" value of 127.0.0.1 for app-service will only appear once minikube tunnel has started successfully.

    If we run kubectl logs on the new pod, we can see some details:

    $ kubectl logs -f release-name-app-chart-64c7c5d558-4j4rd
    ...
    08-Mar-2024 18:00:29.866 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.1.19]
    08-Mar-2024 18:00:29.960 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
    08-Mar-2024 18:00:29.971 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [289] milliseconds
    

    Now, if we curl localhost on port 8080 from the host machine, the request should go through the tunnel and reach the service, if everything is set up correctly:

    $ curl localhost:8080
    <!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/10.1.19</h3></body></html>
    

    Okay, so we get a 404, but look at the last bit:

    <h3>Apache Tomcat/10.1.19</h3>

    And compare that with the log message:

    08-Mar-2024 18:00:29.866 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.1.19]

    This confirms your config is right and everything is mapped correctly: the Tomcat version in the 404 page matches the version in the pod’s logs, so the request really did reach the container.

    The 404 you’re getting therefore implies that the server inside the pod is not handling the request path you expect.
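    To confirm where a 404 like this comes from, you can also bypass the Service and the tunnel entirely and talk to the pod directly. A sketch, using the deployment name from your chart and the /health path from your code:

```shell
# Terminal 1: forward local port 8080 straight to the pod.
kubectl port-forward deploy/release-name-app-chart 8080:8080

# Terminal 2: a 200 here means the app itself is fine and the 404 is a
# routing/path problem; a 404 here means the app is the problem.
curl -i http://localhost:8080/health
```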

    Finally,

    My ingress:

    # Source: app-chart/templates/tests/test-connection.yaml
     .....
    

    This is not an Ingress. What you have inside this file is a Helm test hook (https://helm.sh/docs/helm/helm_test/).

    An Ingress is something completely different; you can read about it here: https://kubernetes.io/docs/concepts/services-networking/ingress/
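    For reference, a minimal Ingress pointing at your service might look like the sketch below. The host name is made up, and this assumes an NGINX ingress controller is running (on minikube you can enable one with minikube addons enable ingress):

```shell
# Sketch of a minimal Ingress for app-service (hypothetical host name).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
EOF
```

    Note that recent Kubernetes versions prefer spec.ingressClassName over the deprecated kubernetes.io/ingress.class annotation used in your chart.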
