I’ve tried everything:
@Starlette:
from starlette.routing import Mount, Route
from starlette.staticfiles import StaticFiles

# parent and fs are path components defined elsewhere in the app (not shown here)
routes = [
    Mount("/static/", StaticFiles(directory=parent + fs + "decoration" + fs + "static"), name="static"),
    Route(....),
    Route(....),
]
@Uvicorn:
--forwarded-allow-ips=domain.com
--proxy-headers
@url_for:
_external=True
_scheme="https"
@nginx:
proxy_set_header Subdomain $subdomain;
proxy_set_header Host $http_host;
proxy_pass http://localhost:7000/;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect http://$http_host/ https://$http_host/;
include proxy_params;
server {
    if ($host = sub.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name sub.domain.com;
    return 404; # managed by Certbot
}
If I open a .css or .js link directly, nginx serves it over https.
When I tell Firefox to allow the unsafe (mixed) content, the whole page renders correctly on the production server.
Let’s Encrypt works perfectly for the whole domain; there are no issues with the certificate.
2 Answers
The problem, after all, was the use of * instead of "*" in bash. The shell expanded the bare asterisk, so the FORWARDED_ALLOW_IPS parameter ended up containing the filenames returned by that glob expansion instead of the literal character "*".
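For illustration, one way this can happen (the exact command line isn’t shown here; app:app is a placeholder module path):

# A bare * is glob-expanded by the shell into matching filenames before
# uvicorn ever sees the argument:
uvicorn app:app --proxy-headers --forwarded-allow-ips *
# Quoting the asterisk disables the expansion, so uvicorn receives the
# literal "*" and trusts X-Forwarded-* headers from any address:
uvicorn app:app --proxy-headers --forwarded-allow-ips "*"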
I think uvicorn’s
--forwarded-allow-ips=domain.com
part needs to contain the IP of your nginx server, because that is the machine doing the forwarding (i.e. change "domain.com" to the IP of your nginx server). Note that you can also use the environment variable FORWARDED_ALLOW_IPS=* or FORWARDED_ALLOW_IPS=1.2.3.4 instead (useful if running uvicorn behind gunicorn behind nginx).
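For example (1.2.3.4 stands in for the nginx host’s IP, as above; app:app is a placeholder):

# Trust X-Forwarded-* headers only when they come from the nginx host:
uvicorn app:app --proxy-headers --forwarded-allow-ips 1.2.3.4
# Or pass the same setting through the environment, e.g. when the
# uvicorn workers are started by gunicorn:
export FORWARDED_ALLOW_IPS="1.2.3.4"
gunicorn -k uvicorn.workers.UvicornWorker app:app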
For other readers landing here: I was having the same problem because I had failed to configure my nginx "server" section to actually forward the
X-Forwarded-Proto
and X-Forwarded-For
headers so that uvicorn could get them. Here’s an example of what I needed:
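The following is a sketch rather than the answerer’s exact configuration; the upstream address localhost:7000 is taken from the question, and the certificate directives are elided:

server {
    listen 443 ssl;
    server_name sub.domain.com;
    # ssl_certificate / ssl_certificate_key directives (managed by Certbot) go here

    location / {
        proxy_pass http://localhost:7000/;
        # Forward the original host, client address and scheme so that
        # uvicorn (started with --proxy-headers) generates https:// URLs:
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}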