I am currently working on an FPV robotics project that has two servers, flask/werkzeug and streamserver, serving HTTP traffic and streaming video to an external web server located on a different machine.
The way it is currently configured is like this:
- http://1.2.3.4:5000 is the "web" traffic (command and control) served by flask/werkzeug
- http://1.2.3.4:5001 is the streaming video channel served by streamserver.
I want to place them behind a https reverse proxy so that I can connect to this via https://example.com where "example.com" is set to 1.2.3.4 in my external system’s hosts file.
I would like to:
- Pass traffic to the internal connection at 1.2.3.4:5000 through as a secure connection. (certain services, like the gamepad, won’t work unless it’s a secure connection.)
- Pass traffic to 1.2.3.4:5001 as a plain-text connection on the inside as "streamserver" does not support HTTPS connections.
. . . such that the "external" connections (to ports 5000 and 5001) are both secure connections as far as the outside world is concerned:

[external system] --https://example.com:5000/5001--> nginx --> https://example.com:5000
                                                         \--> http://example.com:5001

http://example.com:5000 or :5001 redirects to https.
All of the literature I have seen so far talks about:
- Routing/load-balancing to different physical servers.
- Doing everything within a Kubernetes and/or Docker container.
My application is just an everyday, plain-vanilla server configuration, and the only reason I am even messing with HTTPS is the really annoying problem of things not working outside a secure context, which prevents me from completing my project.
I am sure this is possible, but the literature is either hideously confusing or addresses a different use case.
A reference to a simple how-to would be the most useful.
Clear and unambiguous steps would also be appreciated.
Thanks in advance for any help you can provide.
2 Answers
First, credit where credit is due: @AnthumChris's answer is essentially correct. However, if you've never done this before, the following additional information may be useful:
There is actually too much information online, most of which is contradictory, possibly wrong, and unnecessarily complicated.
Installing nginx:
On Debian/Ubuntu systems this is typically just `sudo apt install nginx`.
Configuring the systems using nginx and connecting to it:
Note: This is a special case unique to my use case, as this is running on a stand-alone robot for development purposes and my domain is not a "live" domain on a web-facing server. It is, however, a "real" domain with a "real", trusted certificate, which avoids browser warnings while development progresses.
Configuring nginx:
Create your configuration file in /etc/nginx/sites-available and create a symlink to that file in /etc/nginx/sites-enabled.
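That create-and-symlink step can be sketched like this. (A scratch directory stands in for /etc/nginx so the sketch runs without root; the site file name "example.com" and the placeholder server block are just examples.)

```shell
# Stand-ins for /etc/nginx/sites-available and /etc/nginx/sites-enabled.
mkdir -p etc-nginx/sites-available etc-nginx/sites-enabled

# Create the site configuration file in sites-available...
printf 'server { listen 80; }\n' > etc-nginx/sites-available/example.com

# ...and "enable" it by symlinking it into sites-enabled.
ln -s ../sites-available/example.com etc-nginx/sites-enabled/example.com

# The enabled site is now visible through the symlink.
ls -l etc-nginx/sites-enabled
```

On a real system the same two steps are done under /etc/nginx with sudo, and nginx only loads what is linked into sites-enabled.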
- `nginx -T` is your friend! You can use this to "test" your configuration for problems before you try to start it.
- `sudo systemctl restart nginx` will attempt to restart nginx (which, as you begin configuration, will likely fail).
- `sudo systemctl status nginx.service > ./[path]/log.txt 2>&1` is also your friend. This allows you to collect the runtime error messages that are preventing the service from starting. In my case, the majority of the problems were caused by other services using the ports I had selected, or by silly misconfigurations.
- `sudo netstat -tulpn | grep nginx` makes sure it's listening on the correct ports.

Troubleshooting nginx after you have it running:
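The `> log.txt 2>&1` capture idiom used above works with any command, not just systemctl. A tiny generic sketch (the `sh -c` command is a stand-in for `systemctl status nginx.service`):

```shell
# Redirect stdout to log.txt, then send stderr (fd 2) to the same place (fd 1),
# so both normal output and error messages land in one file.
sh -c 'echo "normal output"; echo "error output" 1>&2' > log.txt 2>&1

# log.txt now contains both lines.
cat log.txt
```

The order matters: `2>&1 > log.txt` would redirect stderr to the terminal, not the file.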
SSL certificates:
If your certificate came with a separate intermediate/CA bundle, nginx expects the server certificate and the bundle concatenated into a single file; use `cat mycert.crt bundle.file > combined.crt` to create it.

Ultimately I ended up with the following configuration file:
Hopefully this will help the next person who encounters this problem.
This minimal config should provide public endpoints:
- http://example.com/* => https://example.com/*
- https://example.com/stream => http://1.2.3.4:5001/
- https://example.com/* => https://1.2.3.4:5000/
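A configuration along these lines implements that mapping. (This is a sketch consistent with the endpoints above, not the author's exact file: the certificate/key paths and the proxy headers are assumptions.)

```nginx
# Redirect all plain-HTTP requests to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    # combined.crt = server certificate + CA bundle, as created above.
    # Paths are placeholders for wherever your certificate files live.
    ssl_certificate     /etc/nginx/ssl/combined.crt;
    ssl_certificate_key /etc/nginx/ssl/mycert.key;

    # https://example.com/stream -> plain-text streamserver.
    location /stream {
        proxy_pass http://1.2.3.4:5001/;
    }

    # https://example.com/* -> flask/werkzeug, proxied over HTTPS.
    location / {
        proxy_pass https://1.2.3.4:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

After editing, `nginx -T` checks the file and `sudo systemctl restart nginx` applies it, as described above.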