
I am learning how Kubernetes works, with the objective of migrating a "Virtual Machine" environment to a "Kubernetes container" environment.
The final objective is to be able to absorb peak loads (auto-scaling).

Currently, we have a PHP web application (Symfony) running on multiple Debian VMs, which we call "workers".
Each VM is a "worker": it hosts the PHP source code, NGINX, and PHP-FPM.

PHP sessions are not a problem because they are stored in a Redis cluster.
The database is hosted by a cloud provider.
File storage is also at a cloud provider, accessed over the S3 protocol.

With this environment, when we push code to production, we do a git pull via Ansible on all the "workers", and it works fine!

So now, with Kubernetes:
I am a little confused about how to translate these operations to Kubernetes, particularly the "source code" part.
I have already done this:

  • Create PHP-FPM / NGINX services
  • Create the PHP-FPM / NGINX deployments, with the NGINX vhost stored in a ConfigMap.
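
For illustration, my vhost ConfigMap looks roughly like this (names, paths, and the server block contents are simplified placeholders, not my exact config):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-vhost
data:
  default.conf: |
    server {
      listen 80;
      root /code/public;
      location / {
        try_files $uri /index.php$is_args$args;
      }
      location ~ ^/index\.php(/|$) {
        # "php-fpm" is the name of the PHP-FPM Service
        fastcgi_pass php-fpm:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      }
    }
```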

All this is working fine; my PHP / NGINX environment is up and running.

Now my biggest problem: how do I get the source code to run in my pods?

I thought about using an init container that would do a git clone of my code into a /code volume when the pod starts.
However, once the pods are up and running, how do I git pull in them? Kill them and reinitialize? Is that a poor solution?
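
For the record, here is a sketch of that init-container idea (the image, repository URL, and volume names are placeholders):

```yaml
# Pod template fragment: clone the code into an emptyDir before the app starts.
spec:
  volumes:
    - name: code
      emptyDir: {}
  initContainers:
    - name: clone-code
      image: alpine/git
      # Placeholder repository URL
      args: ["clone", "--depth=1", "https://example.com/my-app.git", "/code"]
      volumeMounts:
        - name: code
          mountPath: /code
  containers:
    - name: php-fpm
      image: php:8-fpm
      volumeMounts:
        - name: code
          mountPath: /code
```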

The other idea I have is a Kubernetes Job that I run every time new code needs to be pulled.

Can you advise me?
What is the best solution to "git pull" code into running containers, as I did before with a git pull from Ansible on all my VMs?

2 Answers


  1. You would not update the code in the container itself. Instead, you would build a new image from the code every time you release (or let your CI pipeline do that, assuming you have one). Then, in a second step, you would tell Kubernetes to recreate the container with the new image you just built.

  2. Your images should be self-contained, and pushed to a registry. In pure Docker land, you should be able to run them without mounting the source code into them (no docker run -v or Compose volumes: options). This means you need to COPY the source code into the image in a Dockerfile.
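
    For example, a minimal Dockerfile along these lines (the base image and paths are illustrative, not specific to your application):

```dockerfile
# Illustrative sketch: base image and directory layout are assumptions.
FROM php:8-fpm
WORKDIR /var/www/app
# Bake the application source into the image instead of mounting it at run time.
COPY . .
```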

    When you build an image, it’s also very good practice to give every build a unique tag. The source control commit ID is a good choice; a date stamp will also be pretty unique and is easier for an operator to understand.

    docker build -t my/application:20210403 .
    # run this _without_ separately injecting the application code
    docker run -d -p 1234:1234 my/application:20210403
    docker push my/application:20210403
    

    In your Kubernetes Deployment spec there will be a line that references the image:, like

    image: my/application:20210403
    

    When you have a new build of your application, build and push it as above, and change the tag in the image:. (There is an imperative kubectl command that can do it; tools like Helm can inject it based on deploy-time parameters; or you can edit the YAML file, commit it to source control, and kubectl apply -f it.) When the image: changes (and it must actually be a different string from before) Kubernetes will launch new pods with the new images, and then tear down the old pods.
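
    As a sketch of the "edit the YAML file" route (the file name, deployment name, and tags are made up for illustration; kubectl itself is not run here):

```shell
# Create a minimal Deployment manifest with yesterday's tag.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  template:
    spec:
      containers:
        - name: app
          image: my/application:20210403
EOF

# Bump the tag to today's build; `kubectl apply -f deployment.yaml` would then
# roll out new pods. The imperative equivalent would be:
#   kubectl set image deployment/my-application app=my/application:20210404
sed -i 's|my/application:20210403|my/application:20210404|' deployment.yaml
grep 'image:' deployment.yaml
```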

    This approach has a couple of advantages over what you describe. In a multi-node or cloud-hosted environment, you don’t need to manually copy code into the cluster; the image pull will do it for you. If you discover something has gone wrong in a build, it’s very easy to set the image: back to yesterday’s build and go back. This also uses only standard Kubernetes features and can literally be half the YAML size of a configuration that tries to separately inject the source code.
