
I am currently working on an FPV robotics project that has two servers, flask/werkzeug and streamserver, serving HTTP traffic and streaming video to an external web server located on a different machine.

The way it is currently configured is like this:

I want to place them behind an HTTPS reverse proxy so that I can connect to this via https://example.com, where "example.com" is mapped to 1.2.3.4 in my external system's hosts file.

I would like to:

  • Pass traffic to the internal connection at 1.2.3.4:5000 through as a secure connection. (Certain services, like the gamepad, won't work unless the connection is secure.)
  • Pass traffic to 1.2.3.4:5001 as a plain-text connection on the inside, as "streamserver" does not support HTTPS connections.

. . . such that the "external" connections (to ports 5000 and 5001) are both secure connections as far as the outside world is concerned, like so:

[external system]---https://example.com:5000/5001--->nginx--->https://example.com:5000
                                                         \--->http://example.com:5001

http://example.com on port 5000 or 5001 redirects to HTTPS.

All of the literature I have seen so far talks about:

  • Routing/load-balancing to different physical servers.
  • Doing everything within Kubernetes and/or Docker containers.

My application is just an everyday, plain-vanilla server configuration; the only reason I am even messing with HTTPS is the really annoying problem of things not working except in a secure context, which prevents me from completing my project.

I am sure this is possible, but the literature is either hideously confusing or addresses a different use case.

A reference to a simple how-to would be the most useful.
Clear and unambiguous steps would also be appreciated.

Thanks in advance for any help you can provide.

2 Answers


  1. Chosen as BEST ANSWER

    First, credit where credit is due: @AnthumChris's answer is essentially correct. However, if you've never done this before, the following additional information may be useful:

    1. There is actually too much information online, most of which is contradictory, possibly wrong, and unnecessarily complicated.

      • It is not necessary to edit the nginx.conf file.  In fact, that's probably a bad idea.
      • The current open-source version of nginx can be used as a reverse proxy, despite comments on the nginx web site suggesting you need the paid version.  As of this writing, the current version for the Raspberry Pi is 1.14.
      • After sorting through the reams of information, I discovered that setting up a reverse proxy to multiple backend devices/server instances is remarkably simple.  Much simpler than the on-line documentation would lead you to believe.
         
    2. Installing nginx:

      • When you install nginx for the first time, it may report that the installation has failed.  This is a bogus warning.  You get it because the installation process tries to start the nginx service before a valid configuration exists, so the startup of the service fails; the installation itself, however, is (likely) correct and proper.
         
    3. Configuring the systems using nginx and connecting to it:
       
      Note: This is a special case unique to my use-case, as this runs on a stand-alone robot for development purposes and my domain is not a "live" domain on a web-facing server.  It is, however, a real domain with a real, trusted certificate, which avoids browser warnings while development progresses.

      • It was necessary for me to add entries to the robot's and the remote system's hosts files to automagically redirect references to my domain to the correct device (the robot's fixed IP address), instead of Directnic's servers where the domain is parked.
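      For illustration, the entry on both machines looks something like this (1.2.3.4 standing in for the robot's fixed IP address):

```
# /etc/hosts on the robot and the remote system
# (C:\Windows\System32\drivers\etc\hosts on a Windows remote)
1.2.3.4    example.com    www.example.com
```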
         
    4. Configuring nginx:

      • The correct place to put your configuration file (on the Raspberry Pi) is /etc/nginx/sites-available; then create a symlink to that file in /etc/nginx/sites-enabled.
      • It does not matter what you name it, as nginx.conf blindly includes whatever is in that directory.  The flip side is that if there is anything already in that directory, you should remove it or rename it with a leading dot.
      • sudo nginx -T is your friend!  You can use this to test your configuration for problems (it also dumps the parsed configuration) before you try to start the service.
      • sudo systemctl restart nginx will attempt to restart nginx (which, as you begin configuring, will likely fail).
      • sudo systemctl status nginx.service > ./[path]/log.txt 2>&1 is also your friend.  It lets you collect the runtime error messages that are preventing the service from starting.  In my case, most of the problems were caused by other services using ports I had selected, or by silly misconfigurations.
      • Once nginx has started and the status reports no problems, try sudo netstat -tulpn | grep nginx to make sure it is listening on the correct ports.
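      The sites-available/sites-enabled layout above can be sketched as follows.  This uses a scratch directory and a hypothetical file name (robot.conf) so it can be run anywhere; on the Pi the real directories are /etc/nginx/sites-available and /etc/nginx/sites-enabled, and the ln -s there needs sudo:

```shell
# Scratch stand-in for /etc/nginx (hypothetical paths and file name).
root=$(mktemp -d)
mkdir -p "$root/sites-available" "$root/sites-enabled"

# The config file lives in sites-available; the name is arbitrary
# because nginx.conf includes everything under sites-enabled.
printf 'server { listen 443 ssl; }\n' > "$root/sites-available/robot.conf"

# Enable it with a symlink, just as you would on the real system:
# sudo ln -s /etc/nginx/sites-available/robot.conf /etc/nginx/sites-enabled/
ln -s "$root/sites-available/robot.conf" "$root/sites-enabled/robot.conf"
```

      Because sites-enabled holds only symlinks, a site can be disabled by deleting the link while the real file stays put in sites-available.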
         
    5. Troubleshooting nginx after you have it running:

      • Most browsers (Firefox and Chrome at least) have developer tools that you open by pressing F12.  The console messages can be very helpful.
         
    6. SSL certificates:

      • Unlike some other SSL servers, nginx requires the site certificate to be combined with the intermediate certificate bundle received from the certificate authority; create the combined file with cat mycert.crt bundle.file > combined.crt.
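      A sketch of that concatenation, using dummy stand-ins for the real files (mycert.crt and bundle.file are whatever your certificate authority actually issued):

```shell
workdir=$(mktemp -d)

# Dummy stand-ins for the files issued by your certificate authority.
printf -- '-----BEGIN CERTIFICATE-----\nSITE\n-----END CERTIFICATE-----\n' \
    > "$workdir/mycert.crt"
printf -- '-----BEGIN CERTIFICATE-----\nINTERMEDIATE\n-----END CERTIFICATE-----\n' \
    > "$workdir/bundle.file"

# Order matters: the site certificate first, then the intermediate chain.
cat "$workdir/mycert.crt" "$workdir/bundle.file" > "$workdir/combined.crt"

# Sanity check: the combined file should now hold both certificates.
grep -c -- '-----BEGIN CERTIFICATE-----' "$workdir/combined.crt"   # prints 2
```

      The ssl_certificate directive then points at combined.crt, so nginx presents the full chain; ssl_certificate_key still points at the private key alone.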
         
    7. Ultimately I ended up with the following configuration file:

      • Note that I commented out the HTTP redirect because another service was using port 80 on my device.  Under normal conditions, you will want to automatically redirect port 80 to the secure connection.
      • Also note that I did not use hard-coded IP addresses in the config file.  This allows you to reconfigure the target IP address if necessary.
      • A corollary: if you are proxying to an internal secure device configured with the same certificates, you have to pass it through by domain name instead of IP address, otherwise the secure connection will fail.
         
    # server {
    #    listen example.com:80;
    #    server_name example.com;
    #    return 301 https://example.com$request_uri;
    # }
    
    # This is the "web" server (command and control), running Flask/Werkzeug
    # that must be passed through as a secure connection so that the
    # joystick/gamepad works.
    #
    # Note that the internal Flask server must be configured to use a
    # secure connection too. (Actually, that may not be true, but that's
    # how I set it up. . .)
    #
    server {
       listen example.com:443 ssl;
       server_name example.com;
       ssl_certificate  /usr/local/share/ca-certificates/extra/combined.crt;
       ssl_certificate_key  /usr/local/share/ca-certificates/extra/example.com.key; 
       ssl_prefer_server_ciphers on;
    
       location / {
            proxy_pass https://example.com:5000;
    
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;
       }
    }
    
    # This is the video streaming port/server running streamserver
    # which is not, and cannot be, secured.  However, since most
    # modern browsers will not mix insecure and secure content on
    # the same page, the outward facing connection must be secure.
    #
    server {
       listen example.com:5001 ssl;
       server_name example.com;
       ssl_certificate  /usr/local/share/ca-certificates/extra/combined.crt;
       ssl_certificate_key  /usr/local/share/ca-certificates/extra/www.example.com.key; 
       ssl_prefer_server_ciphers on;
    
    # After securing the outward facing connection, pass it through
    # as an insecure connection so streamserver doesn't barf.
    
       location / {
            proxy_pass http://example.com:5002;
    
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;
       }
    }
    
    Hopefully this will help the next person who encounters this problem.

  2. This minimal config should provide public endpoints:

    1. http://example.com/* => https://example.com/*
    2. https://example.com/stream => http://1.2.3.4:5001/
    3. https://example.com/* => https://1.2.3.4:5000/
    # redirect to HTTPS
    server {
      listen      80;
      listen [::]:80;
      server_name example.com
                  www.example.com;
    
      return 301 https://example.com$request_uri;
    }
    
    server {
      listen      443 ssl http2;
      listen [::]:443 ssl http2;
      server_name example.com
                  www.example.com;
      ssl_certificate     /etc/nginx/ssl/server.cer;
      ssl_certificate_key /etc/nginx/ssl/server.key;
    
      location /stream {
        proxy_pass http://1.2.3.4:5001/;  # HTTP
      }
    
      # fallback location
      location / {
        proxy_pass https://1.2.3.4:5000/; # HTTPS
      }
    }
    