
My use case

I am trying to serve a containerized app on demand using Kubernetes.

By on demand, I mean that there is an external client app that shows a list of datasets.
The desired feature is that an authenticated user may open a dataset. Opening a dataset means firing up, in the background, a new instance of the app that reads the dataset (passed as a parameter), exposing it, setting up an ingress, and returning a unique URL for the user to visit.

For resource management, the instance should run while the user needs it and be terminated once it is no longer needed,
i.e. at the first of: the user closing the window, or a maximum lifetime of x hours.

Current stage

  • The instances are fired up using the Kubernetes (Python) client API; a minimal sketch of the creation calls follows below.
    • An instance is a deployment (replicas: 1), a service, and an ingress (nginx-ingress-controller).
    • Instances are stateless: the dataset and app data come from an S3 bucket. Authentication and routing are delegated to the ingress/proxy.
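For reference, a minimal sketch of what those creation calls can look like with the official kubernetes Python client. The session name, image, dataset argument, and host are placeholders for illustration, not the actual project's values:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

name = "dataset-abc123"  # hypothetical per-session name
labels = {"app": name}
container = client.V1Container(
    name="app",
    image="registry.example.com/app:latest",    # placeholder image
    args=["--dataset", "s3://bucket/dataset"],  # placeholder dataset parameter
    ports=[client.V1ContainerPort(container_port=8080)],
)

# Deployment with a single replica running the app instance.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default",
    body=client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    ),
)

# Service fronting the pod.
client.CoreV1Api().create_namespaced_service(
    namespace="default",
    body=client.V1Service(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1ServiceSpec(
            selector=labels,
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    ),
)

# Ingress exposing the unique URL returned to the user.
client.NetworkingV1Api().create_namespaced_ingress(
    namespace="default",
    body=client.V1Ingress(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",
            rules=[client.V1IngressRule(
                host=f"{name}.example.com",  # placeholder host
                http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name=name,
                            port=client.V1ServiceBackendPort(number=80),
                        ),
                    ),
                )]),
            )],
        ),
    ),
)
```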

So the service requirements are fulfilled: the user can visit the URL, authenticate, use the features of the app, and so on.

The challenge

Now comes the end-of-lifecycle part. Conceptually, I would like a way to trigger the termination of the deployment some time (5-10 minutes) after the user has closed the window.

I have been browsing quite a bit of documentation on the Kubernetes ecosystem, and there seems to be no obvious built-in tool for this use case.

Leads:

Ingress

Initially I naively thought I could monitor the state of the connection at the ingress, and that the controller might provide flags directly in the Kubernetes manifests to specify automatic termination. The haproxy-ingress controller does seem to have a way of monitoring active connections. However, even if I switched ingress controllers, I did not find anything that would let me schedule the deployment for termination.

Probes

Probes seem like the logical tool for this task. The front end of the app supports JavaScript injection. The idea would be to inject a WebSocket with an event listener on unload, which would serve as a monitoring endpoint.
Beyond that, making large modifications to the original app is off limits.
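For illustration, wiring such a probe with the Python client could look like the sketch below. The /healthz path and port are assumptions about the injected monitoring endpoint, not something the app currently exposes:

```python
from kubernetes import client

# Hypothetical setup: the injected JavaScript keeps a WebSocket open, and a
# companion /healthz endpoint returns 200 only while a client is connected.
liveness_probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    period_seconds=60,     # check once a minute
    failure_threshold=10,  # i.e. roughly 10 minutes without a connected user
)
# Attached to the pod via client.V1Container(liveness_probe=liveness_probe, ...)
```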

However, on probe failure the behavior is to restart the pod, not to tear the instance down.

-> In theory it should be possible to turn my deployment into a Job (which is meant to terminate) and set restartPolicy: Never. But that would also cancel all useful restarts (e.g. on pod failure).

?> Generally speaking, should I be using a Job to run a service? From the documentation it seems more suited to computing tasks, although it sounds better adapted to the scheduling-related issues. I have run some tests, and it is also possible to expose a Job. What would be the other drawbacks of having the pod in a Job instead of a Deployment (or of using a bare pod)?
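For what it is worth, a Job does cover the maximum-lifetime requirement out of the box: activeDeadlineSeconds kills the pod after a hard cap, and ttlSecondsAfterFinished garbage-collects the finished Job. A sketch with the Python client, using placeholder names:

```python
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="dataset-abc123"),  # placeholder name
    spec=client.V1JobSpec(
        active_deadline_seconds=4 * 3600,  # hard cap: terminate the pod after 4 h
        ttl_seconds_after_finished=300,    # then garbage-collect the Job object
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",  # as noted above, disables useful restarts
                containers=[client.V1Container(
                    name="app",
                    image="registry.example.com/app:latest",  # placeholder image
                )],
            ),
        ),
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

Note that Jobs also accept restartPolicy: OnFailure, which would keep the restart-on-crash behavior while still allowing the pod to terminate cleanly.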

Sidecar container

Ultimately, I could do everything manually: it should also be possible to run a sidecar container with a small server connected to that WebSocket, in charge of triggering self-deletion of the deployment through the client API (or any other solution). A rough sketch of the idea follows below.
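A sketch of what that sidecar could do, assuming it runs in-cluster with a service account allowed to delete deployments, services, and ingresses, and a hypothetical last_activity timestamp refreshed by the WebSocket server:

```python
import time
from kubernetes import client, config

config.load_incluster_config()  # the sidecar runs inside the instance's pod

IDLE_LIMIT = 10 * 60         # seconds without a connected user before tear-down
last_activity = time.time()  # hypothetical: refreshed by the WebSocket handler

def watchdog(name: str, namespace: str = "default") -> None:
    """Delete the session's resources once the user has been gone long enough."""
    while time.time() - last_activity < IDLE_LIMIT:
        time.sleep(30)
    client.NetworkingV1Api().delete_namespaced_ingress(name, namespace)
    client.CoreV1Api().delete_namespaced_service(name, namespace)
    client.AppsV1Api().delete_namespaced_deployment(name, namespace)
```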


So what would be the way to go in order to follow best practices? This is my first Kubernetes project, and I have a hard time believing that there is no built-in mechanism to specify a termination policy for resources.

Thanks for the help. If needed, you may consult the full code here. Comments and notes on anti-patterns are very welcome.

2 Answers


  1. I’d say that’s a classic operator job.

    The user action triggers the creation of a custom resource, and the operator sets everything up based on it. Closing the session deletes the custom resource. In addition, a CronJob garbage-collects all custom resources older than x.

    Writing an operator is actually quite simple. You can leverage kubebuilder for that.
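    If you would rather stay in Python than write Go with kubebuilder, kopf is an alternative operator framework. A minimal sketch, assuming a made-up DatasetSession CRD (group example.com, version v1, plural datasetsessions):

    ```python
    import kopf

    @kopf.on.create("example.com", "v1", "datasetsessions")
    def create_session(spec, name, namespace, **kwargs):
        # Create the per-session deployment/service/ingress here.
        return {"url": f"https://{name}.example.com"}  # placeholder URL, stored in status

    @kopf.on.delete("example.com", "v1", "datasetsessions")
    def delete_session(name, namespace, **kwargs):
        # Tear down the resources created above; closing the session simply
        # deletes the DatasetSession object, which triggers this handler.
        pass

    @kopf.timer("example.com", "v1", "datasetsessions", interval=600)
    def expire(meta, name, namespace, **kwargs):
        # Periodic garbage collection: compare meta["creationTimestamp"]
        # to the maximum lifetime and delete the object when it is too old.
        pass
    ```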

  2. The right way to do this would be to leverage the Kubernetes Operator pattern. The operator framework is designed to imitate a human operator by decoupling the operational knowledge (deleting a deployment, in your case) from the application knowledge. Hence the name "Operator".

    Some examples (details in the link below):

    • The ability to deploy an application on demand;
    • Making a backup of an application state or restarting an application from a given backup;
    • Managing the update of an application with all its dependencies including new configuration settings and necessary database changes;
    • Exposing a service to applications that do not support Kubernetes APIs.

    https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/
