
I’m running nginx in Docker, and it is currently serving a webpage over SSL at, let’s say, https://example.com
I’ve now created another set of containers that provide their own web server, available locally on port 8080, and I want to be able to reach it at https://example.com/new_service

I’ve tried adding a simple proxy_pass for the /new_service/ location, but I get a 502 Bad Gateway error, and the nginx logs show the following:

2022/04/12 22:27:12 [error] 32#32: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [warn] 32#32: *19 upstream server temporarily disabled while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [error] 32#32: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2022/04/12 22:27:12 [warn] 32#32: *19 upstream server temporarily disabled while connecting to upstream, client: 8.8.8.8, server: example.com, request: "GET /new_service/ HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
8.8.8.8 - - [12/Apr/2022:22:27:12 +0000] "GET /new_service/ HTTP/1.1" 502 157 "-" "My Browser" "-"

My current configuration is:

server {
    listen 443;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/example.com/privkey.pem;

    root /var/www/html/;
    client_max_body_size 1000M; # set max upload size
    fastcgi_buffers 64 4K;
    index index.php;

    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

    add_header Strict-Transport-Security "max-age=15552000; includeSubdomains; ";

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(data|config|.ht|db_structure.xml|README) {
        deny all;
    }

    location ~ /(conf|bin|inc)/ {
        deny all;
    }

    location ~ /data/ {
        internal;
    }

    location /new_service/ {
        rewrite ^/new_service/?(.*) /$1 break;
        proxy_pass http://localhost:8080/;
    }

    location / {
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^(/core/doc/[^/]+/)$ $1/index.html;
        try_files $uri $uri/ index.php;
    }

    location ~ ^(.+?.php)(/.*)?$ {
        try_files $1 = 404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$1;
        fastcgi_param PATH_INFO $2;
        fastcgi_param HTTPS on;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass php:9000;
        # Or use a unix socket with 'fastcgi_pass unix:/var/run/php5-fpm.sock;'
        #fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }

    location ~* ^.+.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: don't log access to assets
        access_log off;
    }
}

I imagine it must be common practice to use nginx to route several locations to different local containers, but I haven’t been able to find good guidance on this. Any insight is greatly appreciated.

2 Answers


  1. It sounds to me like the new docker container isn’t letting you through its firewall, or you haven’t published the container’s ports to the host.

  2. Please share your docker config for a tailored answer.
    My guess: if your containers use the bridge network (which is the default) rather than the host network, then localhost inside the nginx container points to the nginx container itself, not to your host system.

    # Using a bridge network; if omitted, the behavior is the same
    services:
      service_name:
        ports:
          - "8080:8080" # Port mapping: publishes the container port on the HOST
        networks:
          network_name:
    networks:
      network_name:
    

    With that setup, nginx running on the host (or with network_mode: "host") can reach your service at localhost:8080. But if nginx is running in a container, as described above, it CANNOT reach the service that way. Inter-container communication uses its own virtual network, with each container acting as a "network adapter" on that network and therefore having its own IP address. For convenience, a DNS server is also running that resolves network aliases to IPs.

    Therefore use proxy_pass http://DNS-NAME:8080 or proxy_pass http://DOCKER-CONTAINER-IP:8080 to reach the container inside the docker network. Use docker inspect CONTAINER to determine these:

    ...
    "NetworkSettings": {
        "Networks": {
            "NAME": {
                "Aliases": [
                    "c4675dda79be" # <-- this
                ],
                ...
                "IPAddress": "172.18.0.2", # <-- or that
                ...
            }
        }
    }

    Aliases are the DNS names; the container ID is used by default, and further aliases can be set via:

    docker run --net-alias
    docker network connect --alias
    # docker-compose:
    services:
      service_name: # <- this is used as an alias (=DNS-Name)
        networks:
          network_name:
            aliases:
              - alias1 # <- additional alias
    
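
    Putting this together, the question’s /new_service/ block could be reduced to something like the following sketch, assuming the compose service is named new_service (a name taken from the question’s URL path, not confirmed by the asker) and the nginx container shares a docker network with it:

    location /new_service/ {
        # "new_service" is resolved by Docker's embedded DNS,
        # so this only works when both containers share a network
        proxy_pass http://new_service:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }

    Because both the location prefix and the proxy_pass URL end in a slash, nginx replaces the /new_service/ prefix itself when building the upstream request, so the separate rewrite from the question is not needed.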

    As long as possible, use this network segregation (the default) rather than host mode for your containers, to avoid strange side effects and for security reasons. That way you don’t need port mapping at all for services hidden behind nginx: they don’t need to be published on the host (except perhaps for debugging and development), so the service is better protected, because nginx forwards only the traffic you allow, instead of the port being exposed to everything reaching your host.
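
    A minimal docker-compose sketch of that recommended setup (image and network names are illustrative assumptions): only nginx publishes a port on the host, while the backend stays reachable solely over the shared network:

    services:
      nginx:
        image: nginx
        ports:
          - "443:443" # only the proxy is published on the host
        networks:
          - backend
      new_service:
        image: some/new_service_image # assumption: whatever image serves port 8080
        # no "ports:" section - reachable only inside the docker network
        networks:
          - backend
    networks:
      backend:

    nginx can then proxy to http://new_service:8080 via Docker’s embedded DNS.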
