
On system1 (i.e. the hostname of the manager node), the swarm is initialized using

docker swarm init

Later, the Compose files available on system1 (*.yml) are deployed using

docker stack deploy --compose-file file_1.yml system1

docker stack deploy --compose-file file_2.yml system1

docker stack deploy --compose-file file_3.yml system1
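After the deploys, the result can be checked from the manager; a quick sketch (note all three deploys above target the same stack name, system1, so their services merge into one stack):

```shell
# List the stacks known to the swarm; a single stack "system1" is
# expected here, since all three deploys used the same stack name
docker stack ls

# List the services the stack is running
docker stack services system1

# Show each task and the node it was scheduled on
docker stack ps system1
```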

Next, on system2 (i.e. the hostname of the worker node),

I join the swarm via the manager node (system1) using a join token. The command below, run on the manager, prints the full join command; its output is then copied and run on the worker.

docker swarm join-token worker

Once the output of the above command was run on system2, it successfully joined the swarm.
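For illustration, the join flow looks roughly like this (the token and IP address below are placeholders, not real values):

```shell
# On system1 (manager): print the join command for workers
docker swarm join-token worker

# On system2 (worker): paste and run the printed command, e.g.
docker swarm join --token SWMTKN-1-<placeholder-token> <manager-ip>:2377
```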

I also cross-verified using

docker node ls

There I could see both the manager and the worker node in the Ready and Active state.

In my case I’m using the worker node (system2) for failover.

I now have the same Compose files (*.yml) on system2.

How do I get them deployed in the Docker swarm?

Since system2 is a worker node, I cannot deploy from it.

2 Answers


  1. First, I’m not sure what you mean by

    In my case I’m using the worker node (system2) for failover.

    We run Docker Swarm in production, and the only way to achieve failover of managers is to run more of them. Note that Swarm managers maintain a Raft consensus store that requires a quorum, so follow the rule of 1, 3, 5, … (an odd number of managers).
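    The quorum rule can be made concrete: with N managers, Raft needs a majority of N/2+1 votes, so the swarm survives the loss of at most (N-1)/2 managers. A plain-shell sketch of the arithmetic (no Docker needed):

    ```shell
    # For each manager count n, print the quorum size and how many
    # manager failures the swarm can tolerate. Note n=2 tolerates no
    # more failures than n=1, and n=4 no more than n=3.
    for n in 1 2 3 4 5; do
      quorum=$(( n / 2 + 1 ))
      tolerated=$(( (n - 1) / 2 ))
      echo "managers=$n quorum=$quorum tolerated_failures=$tolerated"
    done
    # For n=3 this prints: managers=3 quorum=2 tolerated_failures=1
    ```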

    As for deployments from non-manager nodes: that is not possible in Docker Swarm unless you use a management service that exposes a Docker socket proxy. Such a service runs on a manager, and since it all lives inside the swarm, you can then invoke the API calls from the worker through it.
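    As a sketch of what such a setup can look like, assuming the commonly used tecnativa/docker-socket-proxy image (the service name, network name, and the chosen permission variables here are illustrative, not from the question):

    ```yaml
    version: "3.8"
    services:
      socket-proxy:
        image: tecnativa/docker-socket-proxy
        environment:
          # Whitelist only the API sections the tooling needs
          SERVICES: 1
          TASKS: 1
          NODES: 1
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        deploy:
          placement:
            constraints:
              - node.role == manager   # the real socket lives on a manager
        networks:
          - proxy-net
    networks:
      proxy-net:
        driver: overlay
    ```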

    But there is no way to directly deploy to or administer the swarm from a worker node.
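    Given the question's failover goal, the usual fix is therefore to promote system2 rather than deploy from it (run on the current manager; note that with exactly two managers the quorum is 2, so three managers is the smallest fault-tolerant setup):

    ```shell
    # Promote the worker so it can also act as a manager
    docker node promote system2

    # Verify: system2 should now report a MANAGER STATUS of Reachable
    docker node ls
    ```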

  2. Some things:

    First:

    Docker contexts are used to communicate with a swarm manager remotely so that you do not have to be on the manager when executing docker commands.

    i.e. to deploy remotely to a swarm you could create and then use a context like this:

    docker context create swarm1 --docker "host=ssh://user@node1"
    docker --context swarm1 stack deploy --compose-file stack.yml stack1
    

    2nd:
    Once the swarm is set up, you always communicate with a manager node, and it orchestrates the deployment of services to the available worker nodes. If worker nodes are added after services are deployed, Docker will not move tasks to them until a new deployment is performed, as it prefers not to interrupt running tasks; the goal is eventual balance. If you want to force Docker to rebalance onto the new worker node immediately, redeploy the stack, or run

    docker service update --force some-service

    3rd:
    To control which worker nodes services run tasks on you can use placement constraints and node labels.

    docker service create --constraint "node.role==worker" ... would only deploy tasks onto nodes that have the worker role (i.e. are not managers)

    or
    docker service update --constraint-add "node.labels.is-nvidia-enabled==1" some-service would only deploy tasks to the node where you have explicitly labeled the node with the corresponding label and value.

    e.g. docker node update --label-add is-nvidia-enabled=1 node1 (run once per node, e.g. again for node3, since docker node update takes a single node)
