I have three nodes in docker swarm (all nodes are managers). I want to run a zookeeper cluster on these three nodes. My docker-compose file:
```yaml
version: '3.8'

services:
  zookeeper1:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-1"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-1:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper2:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-2"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-2:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper3:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-3"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-3:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test

volumes:
  zookeeper-1:
  zookeeper-2:
  zookeeper-3:

networks:
  network_test:
    driver: overlay
```
I use `docker stack deploy` to run it. My expectation is that each zookeeper will run on a different node, but sometimes one node starts two zookeeper containers. Can docker stack deploy do this?

Thanks
2 Answers
To start a service on each available node in your Docker Swarm cluster, you need to run it in `global` mode.
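A minimal sketch of what that looks like in a stack file (global mode schedules one task of the service on every node, so it replaces `replicas`):

```yaml
    deploy:
      mode: global   # one task of this service per available swarm node
```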
But in your case, because of the specific volumes for each Zookeeper, you can use placement constraints to control the nodes a service can be assigned to. You can add a section like the following to each Zookeeper service, which will allow each instance to run on a different node:
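A sketch of such a constraint, assuming the three swarm nodes have the placeholder hostnames `node1`, `node2` and `node3` (substitute your real node hostnames, and pin `zookeeper2`/`zookeeper3` to `node2`/`node3` in the same way):

```yaml
  zookeeper1:
    # ...image, volumes, environment as above...
    deploy:
      placement:
        constraints:
          - node.hostname == node1   # placeholder hostname; one per service
```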
If you roll your zookeepers into a single service, then you can use `max_replicas_per_node`. Like this:
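A sketch of such a stack file, following the notes below; the image tag and the exact template syntax for the volume name are assumptions, and the `;2181` suffix in `ZOO_SERVERS` requires zookeeper 3.5+:

```yaml
version: "3.8"

services:
  zookeeper:
    image: zookeeper:3.7   # official image; tag is an assumption
    # Service templates give each replica a distinct hostname: zoo1, zoo2, zoo3.
    hostname: zoo{{.Task.Slot}}
    environment:
      ZOO_MY_ID: "{{.Task.Slot}}"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    ports:
      # Only the client port is published; swarm's ingress load-balances
      # zookeeper clients across the three replicas.
      - "2181:2181"
    volumes:
      - zookeeper:/conf
    deploy:
      replicas: 3
      placement:
        max_replicas_per_node: 1   # never co-locate two replicas on one node

volumes:
  zookeeper:
    # Templated volume name gives each task its own volume,
    # e.g. "stack_zookeeper_1" when the stack is named "stack".
    name: '{{index .Service.Labels "com.docker.stack.namespace"}}_zookeeper_{{.Task.Slot}}'
```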
I have used the official docker image, rather than the bitnami image, for demo purposes.
Service templates are used to assign each replica a hostname of the form "zoo1"…"zoo3" so that, rather than 3 services, 1 service with 3 replicas can be used. This also means that only port 2181 is published, and Docker's routing mesh will load-balance zookeeper clients across the zookeeper instances automatically.
As the original question included a unique volume per service, service template parameters are again used to assign a volume name of the form "stack_zookeeper_1". However, this is a config volume and probably needs to be shared?
Also, as zookeeper task replicas are migrated between swarm nodes, the volume will be created empty on each swarm node if the default volume driver (local), rather than a swarm-aware driver, is being used.
Finally, `replicas` and `max_replicas_per_node` ensure that 3 zoo tasks are started and don’t share nodes.