
I’m developing a web server in Rust, and it works fine using plain Docker Compose with two services defined as follows:

x-common_auth_service: &common_auth_service
  container_name: auth
  build: 
    context: ./auth
    dockerfile: Dockerfile
  network_mode: host
  restart: always
  deploy:
    resources:
      limits:
        memory: 40M
      reservations:
        memory: 20M
  depends_on:
    - redis

services:
  
  auth:
    <<: *common_auth_service
    container_name: auth
    environment: 
      APP_PORT: 3002 # specific port
  
  auth2:
    <<: *common_auth_service
    container_name: auth2
    environment: 
      APP_PORT: 3003 # specific port
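Because `network_mode: host` shares the host’s network stack, each service has to bind a distinct port, which is what the per-service `APP_PORT` is for. A minimal sketch of how the Rust binary might pick it up (the real server code isn’t shown in the question, so the `bind_addr` helper is an assumption):

```rust
use std::env;
use std::net::TcpListener;

// Build the bind address from the APP_PORT value that compose injects,
// falling back to 3002 (the first replica's port in the question).
fn bind_addr(port: Option<String>) -> String {
    let p = port.unwrap_or_else(|| "3002".to_string());
    format!("0.0.0.0:{p}")
}

fn main() {
    let addr = bind_addr(env::var("APP_PORT").ok());
    // In host networking two replicas binding the same port would clash,
    // hence the distinct APP_PORT per service.
    let _listener = TcpListener::bind(&addr).expect("failed to bind");
    println!("auth listening on {addr}");
}
```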

But now I need to implement these services using Docker Swarm.

Here I came across the first problem when using replicas.

  compose.yml
  auth_service:
    image: service/auth_app_rust
    build: 
      context: ./auth
      dockerfile: Dockerfile
    networks:
      - network_overlay # previously network_mode: host, hence the problem
    ports:
      - "5000-5001:3002"
    deploy:
      mode: replicated
      replicas: 2

  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/auth.conf:/etc/nginx/nginx.conf
    networks:
      - host
    depends_on:
      - auth_service

nginx.conf

upstream auth_server {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    keepalive 200;
}

server {
    listen 9999;
    location / {
        proxy_buffering off;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Keep-Alive "";
        proxy_set_header Proxy-Connection "keep-alive";
        proxy_pass http://auth_server;
    }
}

Since the replicas can no longer each define their own port manually the way the separate services did, I decided to use "port range mapping".

At first it seemed to work fine, but then I noticed that Docker Swarm doesn’t assign each port in the range to a specific container; it actually load-balances across the replicas.

In other words, if I run:

curl http://localhost:5000 

It won’t necessarily hit replica 1.

Although I find it very convenient to scale services with replicas, and the model also makes upgrading services easy, how do I solve this problem?

I would still like to use replicas instead of defining several services, but I would not like to use the Docker Swarm load balancer, as I already load-balance through Nginx.

In short, I want to scale the number of containers running the Rust web server without creating several services in docker-compose, and also without the Swarm load balancer, since stacking it in front of Nginx’s own balancing would be redundant.

2 Answers


  1. You can try combining Docker Swarm’s replicas feature with some port mapping:

    docker-compose.yml

    version: '3'
    services:
      auth_service:
        image: service/auth_app_rust
        build:
          context: ./auth
          dockerfile: Dockerfile
        networks:
          - network_overlay
        # no published ports: dnsrr cannot be combined with
        # ingress-mode port publishing, and nginx reaches the
        # replicas directly over the overlay network
        deploy:
          mode: replicated
          replicas: 2
          endpoint_mode: dnsrr
    
    Here, I used endpoint_mode: dnsrr (DNS round-robin), which gives each replica its own DNS entry so traffic does not have to go through Swarm’s virtual-IP load balancer.
    

    When you run docker stack deploy, it creates 2 replicas of the auth_service container, each reachable on container port 3002 over the overlay network.

    You can then use Nginx to have the traffic proxied:

        upstream auth_server {
            # with dnsrr, tasks.auth_service resolves to the
            # IP of every replica
            server tasks.auth_service:3002;
            keepalive 200;
        }
    
    

    Let me know if this helps

  2. It’s not clear where nginx is running. Given you are using 127.0.0.1 rather than host.docker.internal, it seems that nginx is NOT running in a container itself. You also talk about using Docker Swarm.

    This means you ARE, by definition, using the ingress overlay network, which load-balances to the service’s containers.
    Just use "127.0.0.1:5000" and let Docker deal with finding the correct container.

    Alternatively, if you are running on a multi-node swarm, then change the auth service:
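    With that approach the upstream collapses to a single entry; a sketch, with port 5000 taken from the question’s published range:

        upstream auth_server {
            # Swarm's ingress load balancer spreads these
            # connections across the replicas
            server 127.0.0.1:5000;
            keepalive 200;
        }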

    services:
      auth_service:
        ...
        ports:
          - target: 3002
            published: 5000
            mode: host
        deploy:
          mode: global
    

    And then in the nginx.conf, where server1-ip and server2-ip are the addresses of the swarm nodes:

    upstream auth_server {
        server server1-ip:5000;
        server server2-ip:5000;
        keepalive 200;
    }
    

    Finally, if nginx is actually running as a container, then drop the publish directives entirely and define a network that connects nginx and the auth service. Use Docker’s service template syntax to give each instance a unique hostname, which will be published to containers attached to the same networks:

    networks:
      proxy:
        name: nginx
        attachable: true
    
    services:
      auth_service:
        hostname: auth-service-{{.Task.Slot}}
        networks:
        - proxy
    

    Attach nginx to the same network and use this config.

    upstream auth_server {
        server auth-service-1.nginx:3002;
        server auth-service-2.nginx:3002;
        keepalive 200;
    }
    

    This approach makes nginx sensitive to whether the service is up when nginx starts: nginx will fail to start if it cannot resolve the upstream hostnames. To solve this you need to ensure nginx can resolve the hosts at runtime:

    You need to include a resolver 127.0.0.11 directive (Docker’s embedded DNS) and use set $backend1 "http://auth-service-1.nginx:3002" to turn the hostnames into variables, which forces nginx to resolve them at request time. I don’t know how to combine that with upstream auth_server however, so ymmv.
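    A sketch of that runtime-resolution approach without the upstream block (hostname and network name follow the templated example above, and are assumptions):

        server {
            listen 9999;
            resolver 127.0.0.11 valid=10s;  # Docker's embedded DNS
            location / {
                # a variable forces per-request resolution, so nginx
                # can start even if the auth service is not up yet
                set $backend "http://auth-service-1.nginx:3002";
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass $backend;
            }
        }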

    
    