
Long story short:
Container C on Docker Swarm host A can reach Nginx (deploy mode: global) on Docker Swarm host B via that host’s IP, but not the Nginx instance on its own host A via host A’s IP – the connection times out.

Long story:
I have a Docker Swarm with 3 hosts. All containers run on an overlay network (scope: swarm, driver: overlay) called internal_network.
The swarm also runs 3 Nginx instances (deploy mode: global). The Nginx service has its default network set to internal_network and additionally publishes ports with target: 80, published: 80, protocol: tcp, mode: host (among other ports); see the stack sketch below. The idea is that connections to the Docker swarm hosts hit the local Nginx container, which then reverse-proxies them to the containers running on the swarm, such as GitLab, Mattermost, and others.
Moreover, the Docker swarm hosts run keepalived to share a single failover IP – so no matter which Docker host currently holds this shared IP, there is always an Nginx instance accepting incoming requests.
I am using Oracle Linux 8 (kernel 5.4.17 el8uek) and Docker 20.10.12. Docker is configured with icc: false and userland-proxy: false.
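
A minimal sketch of the stack definition described above (reconstructed from the description; the image and any setting not mentioned in the question are assumptions):

version: "3.8"

services:
  nginx:
    image: nginx:latest        # assumed image
    networks:
      - internal_network
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host             # bind directly on each host, bypassing the routing mesh
    deploy:
      mode: global             # one Nginx task per swarm node

networks:
  internal_network:
    driver: overlay            # scope: swarm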

In the following examples addr.foo resolves to the shared IP.

What works:

  • The shared IP is properly shared between the Docker hosts: once the host owning the IP goes down, another host takes it over. However, the problem does not seem to be related to keepalived, as it also occurs with the Docker hosts’ own IPs.
  • From external clients it is possible to connect to Nginx (on the shared IP or a Docker host IP) and be reverse-proxied to a Docker container such as GitLab or Mattermost.
  • A PostgreSQL instance also runs in the same stack on internal_network, and Mattermost can communicate with it over that network.
  • On any Docker swarm host it is possible to run curl https://addr.foo and curl https://<shared ip> and to access Nginx and the reverse-proxied Docker container
  • On any Docker swarm host it is possible to run curl https://<host ip> and access Nginx and the reverse-proxied Docker container
  • From within a Docker container (e.g. Nginx, GitLab, Mattermost) it is possible to run curl https://addr.foo or curl https://<shared IP> when the shared IP is not hosted by the Docker host that is hosting the Docker container itself.

What does not work:

  • From within a Docker container (e.g. Nginx, GitLab, Mattermost) it is not possible to curl the Docker swarm host that is hosting that container. curl correctly resolves the name of its own Docker swarm host (e.g. curl https://<Docker host name>) to that host’s IP, but the connection then times out.
  • From within a Docker container ([…]) it is not possible to curl the shared IP when the shared IP is currently held by the Docker host that is running the container. The connection times out when accessing the container’s own Docker host.

So from inside a container it is not possible to connect to that container’s own Docker host’s IP, while connecting to the other Docker hosts’ IPs works. The network interface ens192 on all Docker hosts is in the firewalld zone public with all necessary ports open; external access works.
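
For reference, the zone assignment and open ports can be verified on each host roughly like this (the port list shown is an assumption):

firewall-cmd --get-zone-of-interface=ens192
# public
firewall-cmd --zone=public --list-ports
# 80/tcp 443/tcp ... (the published application ports)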

So my problem is: from within a Docker container it is not possible to establish a connection to the Docker host that is hosting that container, while connections to any other Docker host work.

On Docker host 1, with addr.foo resolving to Docker host 2:

docker exec -it <nginx container id> curl https://addr.foo
[...] valid response
docker exec -it <nginx container id> curl https://<docker host 2>
[...] valid response
docker exec -it <nginx container id> curl https://<docker host 1>
connection timed out

Why do I need it:
Mattermost authenticates users via GitLab. Therefore, Mattermost needs to connect to GitLab. When Mattermost and GitLab are running on the same Docker swarm host, Mattermost cannot connect to GitLab.
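
For context, Mattermost’s GitLab SSO settings point at GitLab’s public URL, so the Mattermost server itself has to reach that URL. A sketch of the relevant part of Mattermost’s config.json, assuming the endpoints live behind https://addr.foo (the values are placeholders):

"GitLabSettings": {
    "Enable": true,
    "Id": "<GitLab application id>",
    "Secret": "<GitLab application secret>",
    "AuthEndpoint": "https://addr.foo/oauth/authorize",
    "TokenEndpoint": "https://addr.foo/oauth/token",
    "UserApiEndpoint": "https://addr.foo/api/v4/user"
}

The TokenEndpoint and UserApiEndpoint are called by the Mattermost server, not by the browser – exactly the connection that times out when both containers share a host.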

What I do not want to do:
Restrict GitLab and Mattermost so that they do not run on the same swarm host.

I also tried to move the interface docker_gwbridge to the firewalld zone trusted, but then the Docker containers no longer started.
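
For completeness, the attempted change would have looked roughly like this (a reconstruction, not the exact commands used):

firewall-cmd --permanent --zone=trusted --add-interface=docker_gwbridge
firewall-cmd --reload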

I hope that this is enough information to get the idea.

3 Answers


  1. Chosen as BEST ANSWER

    Ok, found the answer here I guess: Docker Userland Proxy.

    In the previous section we identified two scenarios where Docker cannot use iptables NAT rules to map a published port to a container service:

    • When a container connected to another Docker network tries to reach the service (Docker is blocking direct communication between Docker networks);

    • When a local process tries to reach the service through loopback interface.

    This is what the userland proxy is for, and setting userland-proxy back to true (the default) enables the desired behavior.
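
    A minimal sketch of the corresponding change in /etc/docker/daemon.json (keeping icc as configured before; adjust to your own setup), followed by a daemon restart, e.g. systemctl restart docker:

    {
      "icc": false,
      "userland-proxy": true
    }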


  2. When communicating between containers, use the service name of the Docker service, not the host IP.

    From the CLI of one container, try pinging the other containers by their service names. If there is no reply, they are not on the same overlay network.
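
    For example (container ID and service name are placeholders; ping must be available in the image):

    docker exec -it <mattermost container id> ping -c 3 <gitlab service name>
    docker exec -it <mattermost container id> ping -c 3 tasks.<gitlab service name>

    The plain service name resolves to the service VIP, while tasks.<service name> resolves to the individual task IPs.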

  3. I faced a similar problem. In my case, Nginx did not correctly resolve the container’s IP address. Explicitly setting Nginx’s resolver directive helped:

    resolver 127.0.0.11 ipv6=off;
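
    127.0.0.11 is Docker’s embedded DNS server. The directive is typically combined with a variable in proxy_pass so that Nginx resolves the upstream at request time instead of caching its IP at startup; a minimal sketch with an assumed upstream service name gitlab:

    server {
        listen 80;

        # Docker's embedded DNS; re-resolve names instead of caching them forever
        resolver 127.0.0.11 ipv6=off valid=10s;

        location / {
            # a variable in proxy_pass forces a runtime lookup via the resolver above
            set $upstream http://gitlab:80;
            proxy_pass $upstream;
        }
    }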
    