
I am running a sample Python web app in Kubernetes.
I am not able to figure out how to make use of probes here.

  1. I want the app to recover from broken states by automatically restarting pods.

  2. Route traffic to healthy pods only.

  3. Make sure the database is up and running before the application (which in my case is Redis).

I understand what probes are, but I am not sure which probes to use or exactly what values to set.

My definition file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
     env: production
     app: frontend
spec:
  selector:
    matchLabels:
      env: production
      app: frontend
  replicas: 1
  template:
    metadata:
      name: myapp-pod 
      labels:
        env: production
        app: frontend
    spec:
      containers:
      - name: myapp-container
        image: myimage:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000

Now I am doing something like this

        readinessProbe:
          httpGet:
            path: /
            port: 5000
          initialDelaySeconds: 20
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5

3 Answers


  1. In Kubernetes you have three kinds of probes:

    • Liveness – is my application still running? If this fails, the container is restarted.
    • Readiness – is my application ready to serve traffic? If this fails, the pod does not receive traffic (e.g. via a Service).
    • Startup (since 1.18) – this probe type was introduced for applications that need a long time to start. The other two probes only begin after the startup probe has succeeded once (a short example follows below).

    So, in your case your application should provide one probe that checks whether it is still running and another that checks whether it can serve traffic (also checking Redis). Be aware that liveness probes can be dangerous – maybe start with only a readiness probe.
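
    For a slow-starting app, a startup probe could look roughly like this (just a sketch; the /healthz path and the timing values are assumptions you would tune for your app):

    startupProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 5000
      failureThreshold: 30    # allow up to 30 * periodSeconds = 150s for the app to start
      periodSeconds: 5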

    If your app provides the health check (incl. Redis) under /healthz, this is how you would define your readiness probe (a sketch of such a handler follows the manifest):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
         env: production
         app: frontend
    spec:
      selector:
        matchLabels:
          env: production
          app: frontend
      replicas: 1
      template:
        metadata:
          name: myapp-pod 
          labels:
            env: production
            app: frontend
        spec:
          containers:
          - name: myapp-container
            image: myimage:latest
            imagePullPolicy: Always
            ports:
            - containerPort: 5000
            readinessProbe:
              httpGet:
                path: /healthz
                port: 5000
              initialDelaySeconds: 3
              periodSeconds: 3
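
    A minimal sketch of such a /healthz handler in Flask, assuming the redis-py client and a Redis instance reachable at a host named redis-host (both names are assumptions; adjust them to your setup):

    from flask import Flask
    import redis

    app = Flask(__name__)
    # 'redis-host' is an assumed Service name for your Redis instance
    r = redis.Redis(host='redis-host', port=6379, socket_connect_timeout=1)

    @app.route('/healthz')
    def healthz():
        try:
            r.ping()  # fails if Redis is unreachable
        except redis.exceptions.RedisError:
            return 'Redis unavailable', 503
        return 'Ok', 200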
    
  2. You need to define both

    • readinessProbe: tells whether your Deployment is ready to serve traffic at any point in its lifecycle. This configuration item supports different probe types, but in your case it would be an httpGet matching an endpoint that you implement in your web app (most modern stacks expose such endpoints by default, so check the documentation of whatever framework you are using). Note that the endpoint handler needs to check the readiness of any required dependency; in your case it should verify that Redis at redis-host:redis-port responds (e.g. to a PING).
    • livenessProbe: lets the kubelet continuously check your pod's health and restart any container that fails to report being alive, bringing the cluster back to the desired state. This probe supports the same kinds of definitions, and as with the readinessProbe, most modern frameworks offer endpoints responding to such requests by default.

    Below is a sample of both probes, for which you would expose a corresponding HTTP endpoint within your web application (example timing values follow the manifest):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
         env: production
         app: frontend
    spec:
      selector:
        matchLabels:
          env: production
          app: frontend
      replicas: 1
      template:
        metadata:
          name: myapp-pod 
          labels:
            env: production
            app: frontend
        spec:
          containers:
          - name: myapp-container
            image: myimage:latest
            imagePullPolicy: Always
            ports:
            - containerPort: 5000
            readinessProbe:
              httpGet:
                path: /health
                port: 5000
            livenessProbe:
              httpGet:
                path: /health
                port: 5000
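
    You will usually also want to tune the timing fields. For example (these values are only illustrative; adjust them to how long your app actually takes to start and how quickly you want failures detected):

    readinessProbe:
      httpGet:
        path: /health
        port: 5000
      initialDelaySeconds: 5    # wait 5s after container start before the first probe
      periodSeconds: 5          # probe every 5 seconds
      timeoutSeconds: 2         # each probe must respond within 2 seconds
      failureThreshold: 3       # mark the pod unready after 3 consecutive failures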
    
  3. As we know, there are Liveness, Readiness and Startup Probes in Kubernetes. Check the official Kubernetes Documentation page on this topic.

    The kubelet node agent can perform these probes on running Pods using three different methods:

    • HTTP: the kubelet performs an HTTP GET request against an endpoint (like /health) and succeeds if the response status is between 200 and 399.
    • Container command: the kubelet executes a command inside the running container. If the exit code is 0, the probe succeeds.
    • TCP: the kubelet attempts to connect to your container on a specified port. If it can establish a TCP connection, the probe succeeds.

    I am running a sample Python web app in Kubernetes. I am not able to figure out how to make use of probes here.

    1. I want the app to recover from broken states by automatically restarting pods. 2. Route traffic to healthy pods only.

    For this you should use both the readiness and liveness probes. They can both use the same probe method and perform the same check, but the inclusion of a readiness probe will ensure that the Pod doesn’t receive traffic until the probe begins succeeding.

    For example, you can add a minimal health endpoint to a Flask Python app like this:

    from flask import Flask
    app = Flask(__name__)

    @app.route('/health')
    def return_ok():
        return 'Ok!', 200
    

    And specify your k8s liveness and readiness probes like this:

    ...
        spec:
          containers:
          - name: myapp-container
            image: myimage:latest
            imagePullPolicy: Always
            ports:
            - containerPort: 5000
            readinessProbe:                      # extra section for your Deployment 
              httpGet:
                path: /health
                port: 5000
              initialDelaySeconds: 20
              periodSeconds: 5
            livenessProbe:
              httpGet:
                path: /health
                port: 5000
              initialDelaySeconds: 10
              periodSeconds: 5                  # end of the section
    

    If you wish, you can define a liveness probe command instead (replace the extra section):

    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
    

    To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
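
    For this pattern to be useful, your application has to manage that file itself. A minimal sketch of the idea (the /tmp/healthy path mirrors the probe above; when and how you call it depends on your app):

    import os

    HEALTH_FILE = '/tmp/healthy'

    def set_healthy(healthy: bool) -> None:
        """Create the marker file while healthy; remove it so the exec probe starts failing."""
        if healthy:
            open(HEALTH_FILE, 'w').close()
        elif os.path.exists(HEALTH_FILE):
            os.remove(HEALTH_FILE)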

    Another option is to define a TCP liveness/readiness probe:

    livenessProbe:
      tcpSocket:
        port: 5000
      initialDelaySeconds: 10
      periodSeconds: 5
    

    This will attempt to connect to your container on port 5000. If the connection can be established, the probe succeeds (as a readinessProbe this marks the Pod ready; as a livenessProbe it keeps the container from being restarted).

    3. Make sure the database is up and running before the application (which in my case is Redis).

    And for checking the health of your Redis pods, you can put a bash script into the Redis pod at /health/ping_liveness_local.sh:

    #!/bin/bash
    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    [[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
    response=$(
      timeout -s 3 $1 \
      redis-cli \
        -h localhost \
        -p $REDIS_PORT \
        ping
    )
    if [ $? == 124 ]; then
     echo "Timed out"
     exit 1
    fi
    responseFirstWord=$(echo $response | head -n1 | awk '{print $1;}')
    if [ "$response" != "PONG" ] && [ "$responseFirstWord" != "LOADING" ] && [ "$responseFirstWord" != "MASTERDOWN" ]; then
     echo "$response"
     exit 1
    fi
    

    and use this section for your redis pods:

    livenessProbe:
      exec:
        command: 
        - sh 
        - -c 
        - /health/ping_liveness_local.sh 5
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 6
    

    If this seems too complicated, I suggest installing Redis packaged by Bitnami. It is very easy to install on a Kubernetes cluster using the Helm package manager, and the Bitnami chart already configures all the needed liveness and readiness probes for Redis.
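
    Roughly (the release name my-redis is just an example):

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-redis bitnami/redis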

    Regarding how to ensure that the DB is up and running before your application starts, check this answer; that solution uses K8s Init Containers.
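
    A minimal sketch of that approach, assuming your Redis is exposed as a Service named redis on port 6379 (adjust the names to your setup):

    spec:
      initContainers:
      - name: wait-for-redis
        image: busybox:1.36
        # block Pod startup until a TCP connection to Redis succeeds
        command: ['sh', '-c', 'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']
      containers:
      - name: myapp-container
        image: myimage:latest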
