
I am required by our client to run Apache Kafka in a Linux container on Windows Server 2019 with LCOW. I am using docker-compose to bring up two containers, and this is my docker-compose.yml file:

version: "3"

services:

  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: test-zoo

    ports:
      - '2181:2181'
    volumes:
      - type: bind
        source: C:\test\persist
        target: /bitnami
    environment: 
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: 'bitnami/kafka:latest'
    container_name: test-kafka
    deploy:
      resources:
        limits:
          memory: 2G
    ports:
      - '9092:9092'
    volumes:
      - type: bind
        source: C:\test\persist
        target: /bitnami
    environment:
      - KAFKA_BROKER_ID=1311
      - KAFKA_CFG_RESERVED_BROKER_MAX_ID=1000000
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092    
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092    
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LOG_DIRS=/bitnami/kafka/logs 
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

If I remove the volume configuration, the containers work seamlessly and I can communicate with them without issues. The caveat is that I need persistent storage to save the current queue state of both Kafka and ZooKeeper, which is why I created volumes that persist to a local drive on the Windows Server.

If I delete those local directories, they are recreated when I bring Docker up with docker-compose, so the configuration seems good. But something obviously goes wrong when data is written from inside the containers, because this is where things fall apart: if I bring the containers down, the Kafka container won't start up again until I delete the directories on the local disk. They are almost empty, just a few small files, not all the files from inside the container.

I found this solution: https://stackoverflow.com/a/56252052/6705092, but it is meant for Docker Desktop, which I am not allowed to use; I only have the plain CLI and docker-compose. That answer basically says that you need to share these volumes inside Docker Desktop, and when I do that everything works well.

So, the question: is there a way to achieve the same effect as Docker Desktop's "Share Volumes" setting using plain docker-compose? Maybe some hidden, little-known configuration switch or something else?

EDIT:

As requested in the comments, this is the docker inspect output of the bitnami/kafka container under Docker Desktop with volume sharing enabled, where file persistence works well:

 "Mounts": [
        {
            "Type": "bind",
            "Source": "C:/dokit/persist",
            "Destination": "/bitnami",
            "Mode": "",
            "RW": true,
            "Propagation": "rprivate"
        }
    ]

I also learned somewhere that Docker Desktop on Windows uses FUSE as its file-sharing mechanism, but I can't replicate this on the Docker host.

2 Answers


  1. Not sure about LCOW, but try using a named Docker volume rather than a bind mount of a host directory:

    # zookeeper
        volumes:
          - 'zookeeper_data:/bitnami/zookeeper'
    # kafka 
        volumes:
          - 'kafka_data:/bitnami/kafka'
    
    volumes:
      zookeeper_data:
        driver: local
      kafka_data:
        driver: local
    

    This is copied from Bitnami's own compose file: https://github.com/bitnami/bitnami-docker-kafka/blob/master/docker-compose.yml
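
    Applied to the compose file from the question, that change could look like the sketch below (untested under LCOW; names and settings are carried over from the question, with most KAFKA_CFG_* lines omitted for brevity):

        version: "3"

        services:
          zookeeper:
            image: 'bitnami/zookeeper:latest'
            container_name: test-zoo
            ports:
              - '2181:2181'
            volumes:
              # named volume managed by Docker instead of a bind mount to C:\test\persist
              - 'zookeeper_data:/bitnami/zookeeper'
            environment:
              - ALLOW_ANONYMOUS_LOGIN=yes

          kafka:
            image: 'bitnami/kafka:latest'
            container_name: test-kafka
            ports:
              - '9092:9092'
            volumes:
              - 'kafka_data:/bitnami/kafka'
            environment:
              - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
              - ALLOW_PLAINTEXT_LISTENER=yes
            depends_on:
              - zookeeper

        # top-level declaration: Docker manages the storage itself,
        # sidestepping the host-directory sharing that fails under LCOW
        volumes:
          zookeeper_data:
            driver: local
          kafka_data:
            driver: local

    The data then lives in Docker-managed storage (see docker volume inspect kafka_data) instead of C:\test\persist, and survives docker-compose down / up as long as you don't remove the volumes with docker-compose down -v.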

  2. There are two possible options:

    Create an environment variable on the Windows Server (for example with setx, or through System Properties):

        Variable: COMPOSE_CONVERT_WINDOWS_PATHS
        Value:    1 (or true)

    Or create a .env file at the same level as docker-compose.yml, which keeps the project portable, containing the same variable:

        COMPOSE_CONVERT_WINDOWS_PATHS=1

    With this variable set, Docker Compose converts Windows-style paths to Unix-style paths in volume definitions.
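
    For example, with the variable set, the bind mount from the question can stay written as a Windows path in docker-compose.yml; Compose converts it before passing it to the engine (an illustration, assuming the usual lowercase drive-letter convention):

        services:
          kafka:
            volumes:
              # written as a Windows path in docker-compose.yml ...
              - 'C:\test\persist:/bitnami'
              # ... handed to the Docker engine as /c/test/persist:/bitnami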
