
I’ve built an application with Docker using a docker-compose.yml file, and I’m now trying to convert it into a deployment file for Kubernetes.

I tried to use the kompose convert command, but it seems to behave oddly.

Here is my docker-compose.yml:

version: "3"
services:

  worker:
    build:
      dockerfile: ./worker/Dockerfile
    container_name: container_worker
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./api:/app/
      - ./worker:/app2/

  api:
    build:
      dockerfile: ./api/Dockerfile
    container_name: container_api
    volumes:
      - ./api:/app/
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8050:8050"
    depends_on:
      - worker

Here is the output of the kompose convert command:

[root@user-cgb4-01-01 vm-tracer]# kompose convert
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host 
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host 
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host 
WARN Volume mount on the host "/home/david/vm-tracer/worker" isn't supported - ignoring path on the host 
INFO Kubernetes file "api-service.yaml" created   
INFO Kubernetes file "api-deployment.yaml" created 
INFO Kubernetes file "api-claim0-persistentvolumeclaim.yaml" created 
INFO Kubernetes file "api-claim1-persistentvolumeclaim.yaml" created 
INFO Kubernetes file "worker-deployment.yaml" created 
INFO Kubernetes file "worker-claim0-persistentvolumeclaim.yaml" created 
INFO Kubernetes file "worker-claim1-persistentvolumeclaim.yaml" created 

It created 7 YAML files, but I expected to get only one deployment file. Also, I don’t understand the warnings I get. Is there a problem with my volumes?

Maybe it would be easier to convert the docker-compose.yml to a deployment.yml manually?

Thank you,

2 Answers


  1. I guess this is fine:

    1. All of your Docker exposed ports are now Kubernetes Services.
    2. Your volumes need a PV and PVC, and those claim files were generated for you (see the sketch below).
    3. There is a Deployment YAML for each of your API and WORKER services.

    This is usually how it should look.
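
    For reference, each of the generated "claim" files is just a small PersistentVolumeClaim manifest, roughly like this (the exact names, labels, and sizes depend on what Kompose generated for you):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: api-claim0
      labels:
        io.kompose.service: api-claim0
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi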

    However, if you find it confusing to deploy all of these files one by one, try:

    kubectl apply -f mymanifests/

    which will apply every manifest in that directory at once.

    Or, if you just want a single file, you can concatenate all of these files, putting a line containing only --- between them; that separator keeps the manifests apart while still having them in a single file. Something like:

    apiVersion: ...   # the deployment file
    ...
    ---
    apiVersion: ...   # the service file
    ...and so on
    
  2. I’d recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it’s hard to correct those without understanding Kubernetes’s deployment model.

    I would clean up your docker-compose.yml file before you start. You have volumes: that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn’t work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these volumes: mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.

    You also do not normally need to specify container_name: or several other networking-related options. Kubernetes does not support multiple networks and so if you have any networks: settings they will be ignored, but most practical Compose files don’t need them either. The obsolete links: and expose: options, if you have them, can also usually be safely deleted with no consequences.

    version: "3.8"
    services:
      worker:
        build:
          dockerfile: ./worker/Dockerfile
        environment:
          - PYTHONUNBUFFERED=1
      api:
        build:
          dockerfile: ./api/Dockerfile
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        ports:
          - "8050:8050"
        depends_on:  # won't have an effect in Kubernetes,
          - worker   # but still good Docker Compose practice
    

    The bind-mount of the Docker socket is a larger problem. This socket usually doesn’t exist in Kubernetes, and if it does exist, it’s frequently inaccessible (there are major security concerns around having it available, and it would allow you to launch unmanaged containers as well as root the node). If you need to dynamically launch containers, you’d need to use the Kubernetes API to do that instead (look at creating one-off Jobs). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can’t fix this architectural problem, though; you will have to modify your code.
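
    As a rough sketch of that direction, a one-off task becomes a Job object that your api creates through the Kubernetes API instead of through the Docker socket; the name, image, and command below are placeholders, not something Kompose will produce for you:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: worker-task-example         # hypothetical name
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: registry.example.com/worker:latest   # placeholder image reference
              command: ["python", "run_task.py"]          # placeholder command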

    When all of this is done, I’d expect Kompose to create four files, with one Kubernetes YAML manifest in each: two Deployments, and two matching Services. Each of your Docker Compose services: would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.
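
    For orientation, a minimal Deployment/Service pair for the api service would look roughly like this (the image reference is a placeholder for wherever you push your built image; Kompose will generate its own labels and selectors):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
            - name: api
              image: registry.example.com/api:latest   # placeholder; the cluster must be able to pull it
              ports:
                - containerPort: 8050
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: api
    spec:
      selector:
        app: api
      ports:
        - port: 8050
          targetPort: 8050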
