
I have three nodes in a Docker Swarm (all nodes are managers).
I want to run a ZooKeeper cluster across these three nodes.

My docker-compose file:

version: '3.8'
services:
  zookeeper1:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-1"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-1:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper2:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-2"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-2:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper3:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-3"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-3:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test

volumes:
  zookeeper-1:
  zookeeper-2:
  zookeeper-3:

networks:
  network_test:
    driver: overlay

I use docker stack deploy to run it. My expectation is that each ZooKeeper instance will run on a different node, but sometimes one node starts two ZooKeeper containers.

Does docker stack deploy have a feature for this?

Thanks.

2 Answers


  1. To start a service on each available node in your Docker Swarm cluster, you need to run it in global mode.

    But in your case, because each ZooKeeper has its own volume, you can instead use placement constraints to control which nodes a service can be assigned to. Add a section like the following to each ZooKeeper service (substituting the appropriate hostname per service) so that each instance runs on a different node:

    services:
      ...
      zookeeper1:
        ...
        deploy:
          placement:
            constraints:
              - node.hostname==node1
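    For completeness, the same constraint goes on the other two services, each pinned to a different node. This sketch assumes the swarm node hostnames are node1, node2 and node3; check the real hostnames with `docker node ls` and substitute them:

    ```yaml
    services:
      zookeeper1:
        deploy:
          placement:
            constraints:
              - node.hostname==node1   # hypothetical hostname, taken from `docker node ls`
      zookeeper2:
        deploy:
          placement:
            constraints:
              - node.hostname==node2
      zookeeper3:
        deploy:
          placement:
            constraints:
              - node.hostname==node3
    ```

    The trade-off is that if a pinned node goes down, its ZooKeeper task cannot be rescheduled elsewhere.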
    
  2. If you roll your ZooKeepers into a single service, then you can use max_replicas_per_node.

    Like this:

    version: "3.9"
    
    volumes:
      zookeeper:
        name: '{{index .Service.Labels "com.docker.stack.namespace"}}_zookeeper-{{.Task.Slot}}'
    
    services:
      zookeeper:
        image: zookeeper:latest
        hostname: zoo{{.Task.Slot}}
        volumes:
          - zookeeper:/conf
        environment:
          ZOO_MY_ID: '{{.Task.Slot}}'
          ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
          ALLOW_ANONYMOUS_LOGIN: 'yes'
        ports:
        - 2181:2181
        deploy:
          replicas: 3
          placement:
            max_replicas_per_node: 1
        constraints:
        - node.role==manager  # the question's swarm has manager nodes only
    

    I have used the official Docker image, rather than the Bitnami image, for demo purposes.

    Service templates are used to assign each replica a hostname of the form "zoo1"…"zoo3", so that one service with 3 replicas can be used instead of 3 services. This also means that only port 2181 is published, and Docker's service mesh will load-balance ZooKeeper clients across the ZooKeeper instances automatically.

    As the original question included a unique volume per service, service template parameters are again used to assign each replica a volume name of the form "stack_zookeeper-1". Note, however, that this is a config volume and probably needs to be shared.
    Also, as ZooKeeper task replicas are migrated between swarm nodes, the volume will be created empty on each swarm node if the default (local) volume driver is used rather than a swarm-aware driver.

    Finally, replicas and max_replicas_per_node ensure that 3 zoo tasks are started and don’t share nodes.
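
    As an aside, the ZOO_SERVERS value above follows the official image's `server.<id>=<host>:<peerPort>:<electionPort>;<clientPort>` convention, space-separated. A small illustrative sketch (the parser is hypothetical, not part of any image) showing how each entry decomposes:

    ```python
    # Sketch: split the official zookeeper image's ZOO_SERVERS string into its
    # parts. Format per entry: "server.<id>=<host>:<peer>:<election>;<clientPort>".
    def parse_zoo_servers(value):
        servers = {}
        for entry in value.split():
            key, _, addr = entry.partition("=")
            server_id = int(key.split(".")[1])            # "server.1" -> 1
            hostports, _, client_port = addr.partition(";")
            host, peer_port, election_port = hostports.split(":")
            servers[server_id] = {
                "host": host,
                "peer_port": int(peer_port),          # follower -> leader traffic
                "election_port": int(election_port),  # leader election
                "client_port": int(client_port),      # client connections
            }
        return servers

    ZOO_SERVERS = ("server.1=zoo1:2888:3888;2181 "
                   "server.2=zoo2:2888:3888;2181 "
                   "server.3=zoo3:2888:3888;2181")
    print(parse_zoo_servers(ZOO_SERVERS)[2]["host"])  # -> zoo2
    ```

    Keeping the peer (2888) and election (3888) ports unpublished is fine here: the replicas reach each other over the stack's overlay network, and only the client port 2181 needs to be exposed.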
