In front of my web servers and Docker applications I’m running Traefik to handle load balancing and reverse proxying. In this specific case Magento 2 is running on another host in the same private network as the Traefik host.
- Traefik: 192.168.1.30
- Magento: 192.168.1.224
Traffic comes into the firewall on ports 80/443 and is forwarded to Traefik, which routes the request based on the domain name (in this case exampleshop.com).
My Traefik configuration looks like this:
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[backends]
  [backends.backend-exampleshop]
    [backends.backend-exampleshop.servers.server1]
    url = "http://192.168.1.224:80"
    passHostHeader = true

[frontends]
  [frontends.exampleshop]
  backend = "backend-exampleshop"
    [frontends.exampleshop.routes.hostname]
    rule = "Host:exampleshop.com"
For regular websites the above configuration has always worked as expected (a working HTTPS connection with a valid Let’s Encrypt certificate), but in this Magento 2 case it results in:
ERR_TOO_MANY_REDIRECTS
As a result I can reach neither my homepage nor my admin page. Looking at the database records, I’ve configured both my unsecure and my secure base URL as https://exampleshop.com to avoid redirect errors.
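Those values live in the core_config_data table; assuming a default Magento 2 installation without a table prefix, they can be checked with a query like this:

  -- Inspect the configured Magento 2 base URLs (standard config paths)
  SELECT path, value
  FROM core_config_data
  WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');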
Apache is listening fine on port 80, and when contacted directly (by changing my hosts file) the page gets displayed just fine over HTTP.
What am I missing here?
4 Answers
Actually, the config was completely valid, but Cloudflare’s crypto/SSL setting was set to Flexible instead of Full, causing a loop: with Flexible, Cloudflare always contacts the origin over plain HTTP, Traefik’s http entrypoint redirects that request back to HTTPS, and the cycle repeats.
I suppose that 192.168.1.224 is the (local) IP where Traefik itself is installed. If so, the configuration loops back onto itself:

- entryPoints.http: address = ":80", which is the same as address = "0.0.0.0:80"
- entryPoints.https: port 443 (because https == port 443)
- frontends.exampleshop sends the request to backend-exampleshop (because rule = "Host:exampleshop.com")
- backend-exampleshop: url = "http://192.168.1.224:80", which is entryPoints.http again (because :80 == http://192.168.1.224:80), and that entrypoint redirects back to entryPoints.https

Try to change the port of your local application.
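As a sketch of that suggestion (assuming Apache/Magento on that host were moved to, say, port 8080; the port number is purely illustrative), the backend definition would become:

  [backends.backend-exampleshop.servers.server1]
  # 8080 is only an example; any port other than Traefik's own :80 and :443
  # entrypoints keeps the backend from pointing back at Traefik itself
  url = "http://192.168.1.224:8080"
  passHostHeader = true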
Also make sure Cloudflare’s SSL mode is set to Full (if enabled).

I ran into this as well, but I’ve found I have to add this to our Kubernetes Ingress manifests, and it fixes it.