
TL;DR
How can I set up a lightweight web server that executes external programs to handle REST requests?

The long version:
We have a set of services and databases deployed in Kubernetes via Helm. There are some executables that perform maintenance, cleanup, backup, restore, etc. that I need to run (some on demand and some periodically).
I want to park a small, lightweight web server somewhere, mounted with access to those binaries, so it can execute them when it handles REST requests.

  • server needs to have a small memory footprint
  • traffic will be really light (like minutes between each request)
  • security is not super important (it will run inside our trusted zone)
  • server needs to handle GET and POST (i.e. passing binary content to and from the external program)

I’ve glanced at lighttpd and nginx with CGI modules, but I’m not experienced with either.
What do you recommend? Do you have a small example to show how to do it?

2 Answers


  1. Here’s a k8s-native approach:

    ... a set of services and databases deployed in Kubernetes... some executables that perform maintenance, cleanup, backup, restore etc...some on-demand & some periodically

    If you can bake those "executables" into an image, you can run the on-demand programs as k8s Jobs and schedule the repeating ones as k8s CronJobs. If that is possible in your context, you can create a k8s role that has just enough rights to call the Job/CronJob API and bind that role to a dedicated k8s service account.

    Then you build a mini web application in any language/framework of your choice, run it on k8s under that dedicated service account, and expose the pod as a Service (NodePort/LoadBalancer) to receive GET/POST requests. Finally, you make direct API calls to the k8s api-server to run Jobs according to your logic.
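
    As a minimal sketch of that last step, assuming the @kubernetes/client-node package (pre-1.0 API) and a pod running under the dedicated service account, something like this could trigger a Job from a POST request; the image name and binary path are placeholders for your own maintenance tooling:

    const express = require('express')
    const k8s = require('@kubernetes/client-node')

    const kc = new k8s.KubeConfig()
    kc.loadFromCluster()                            // use the pod's service account token
    const batch = kc.makeApiClient(k8s.BatchV1Api)

    const app = express()

    // POST /run/backup -> create a one-off Job that runs the backup binary
    app.post('/run/backup', async (req, res) => {
      const job = {
        apiVersion: 'batch/v1',
        kind: 'Job',
        metadata: { generateName: 'backup-' },
        spec: {
          template: {
            spec: {
              containers: [{
                name: 'backup',
                image: 'registry.example.com/maintenance-tools:latest', // placeholder image
                command: ['/usr/local/bin/backup'],                     // placeholder binary
              }],
              restartPolicy: 'Never',
            },
          },
        },
      }
      try {
        await batch.createNamespacedJob('default', job)
        res.send('Job created\n')
      } catch (err) {
        res.status(500).send(err.message)
      }
    })

    app.listen(3000)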

  2. I think I’d make a small service. I’d choose Node, since that’s something I’m familiar with, but lots of languages have adopted this way of setting up a server, so choose something you like.

    index.js

    const express = require('express')
    const { exec } = require('child_process')

    const app = express()
    const port = 3000

    // Run the external command and return its output to the caller
    app.get('/os-release', (req, res) => {
      exec('cat /etc/os-release', (error, stdout, stderr) => {
        if (error) {
          // Report the failure instead of silently returning an empty body
          return res.status(500).send(stderr || error.message)
        }
        res.send(stdout)
      })
    })

    app.listen(port, () => {
      console.log(`Listening on port ${port}`)
    })
    

    package.json

    {
      "name": "shell-exec",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "express": "^4.17.1"
      }
    }
    

    Dockerfile

    FROM node:alpine
    EXPOSE 3000
    WORKDIR /app
    # install dependencies first so this layer is cached between code changes
    COPY package.json ./
    RUN npm install
    COPY . ./
    CMD ["node", "index.js"]
    

    It should be relatively easy to add POST routes that take input parameters if you need to do that (see the sketch below). I also like that you can send the output of the command back to the caller.

    If you build and run this, you can then hit the /os-release endpoint and get data back from the cat /etc/os-release command.
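
    For the POST case, a minimal sketch along the same lines could stream the request body into a program's stdin and stream its stdout back out, so binary content never has to be buffered in memory; the /restore route and the /usr/local/bin/restore path are hypothetical placeholders for one of your own binaries:

    const { spawn } = require('child_process')

    // POST binary content to an external program and stream its output back
    app.post('/restore', (req, res) => {
      const child = spawn('/usr/local/bin/restore')   // hypothetical binary

      req.pipe(child.stdin)        // request body -> program stdin
      child.stdout.pipe(res)       // program stdout -> response body

      child.on('error', (err) => res.status(500).send(err.message))
    })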
