
Here is the problem I ran into. I am trying to build a small-scale MVP app that I will be releasing soon. I have figured out everything from deploying the Flask application with Dokku (I'll upgrade to something better later) to getting most features working, including S3 uploading, Stripe integration, etc. The one thing I am stuck on: how do I generate SSL certs on the fly for customers and then link everything back to the Python app? Here are my thoughts:

I can use a simple script that talks to the Let's Encrypt API to generate and request certs once domains are pointed at my server(s). The problem I am running into: once the domain is pointed, how do I know? Dokku doesn't route all incoming requests to my container, so Flask wouldn't be able to detect the new domain unless I manually connect it with the dokku domains:add command.
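The "how do I know the domain is pointed?" step could be a periodic polling check. A rough sketch (the function name and IP set are placeholders I made up, not an existing API):

```python
import socket

# Minimal sketch: resolve a pending domain's A records and see whether
# any of them match one of our servers' public IPs (placeholder values).
def domain_points_to_us(domain: str, server_ips: set) -> bool:
    try:
        _, _, ips = socket.gethostbyname_ex(domain)
    except socket.gaierror:
        return False  # domain not registered or DNS not propagated yet
    return any(ip in server_ips for ip in ips)

# e.g. domain_points_to_us("example.io", {"203.0.113.10"})
```

A worker could run this against each pending domain on a schedule and only request a cert once it returns True.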

Is there a better way to go about this? I know of SSL for SaaS by Cloudflare, but it seems to be available only to their Enterprise customers, and I need a robust solution like that. I don't mind building it out, I just need a few pointers (unless a free solution already exists, in which case no need to reinvent the wheel, eh?). One more thing: in the future I plan to have my database running separately and load balancers pointing to multiple instances of my app (not a major issue since the DB stays central, but I am worried about the IP portion of it). To recap:

Client Domain (example.io) -> dns1.example.com -> Lets Encrypt SSL Cert -> Dokku Container -> My App

Please let me know if I need to re-explain anything, thank you!

2 Answers


  1. Your solution is a wildcard certificate, or app prefixing.

    So I'm not sure why you need a cert per customer, but let's say that, for whatever reason, you are going to do

    customer1.myapp.com -> routes to the customer1 backend.

    Let's Encrypt lets you register *.myapp.com, so you can use a subdomain for each customer.

    The alternative is a customer prefix.

    Say your app URL looks like www.myapp.com/api/v1/somecommand

    You could use www.myapp.com/api/v1/customerID/somecommand, then have your load balancer route based on the prefix and apply a rewrite rule that strips the customerID, restoring the original URL.

    This is more complicated and load-balancer dependent, but so is the first solution.
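    The rewrite the load balancer performs can be sketched as a small helper. This is an illustration only; the function name and base path are mine, and in practice the rewrite would live in the load balancer's config rather than in Python:

    ```python
    # Hypothetical sketch of the prefix rewrite: pull the customer ID out
    # of /api/v1/<customerID>/<command> and rebuild the original URL the
    # backend expects, so the app never sees the prefix.
    def split_customer_prefix(path: str, base: str = "/api/v1/"):
        if not path.startswith(base):
            return None, path  # not a prefixed API path; pass through
        rest = path[len(base):]
        customer_id, _, command = rest.partition("/")
        return customer_id, base + command
    ```

    The returned customer ID tells the balancer which backend instance to route to, while the rewritten path is what gets forwarded.
    
    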

    All this being said, both solutions would most likely require a separate instance of your application per customer, which is a heavy solution, but fine if that's what you want and you are using lightweight containers or deploying multiple instances per server.

    Anyway, a lot more information would be needed to give a solid answer.

  2. I think the easier way is to put an Nginx container in front of the Python app, because you can then reload the Nginx config without restarting the server.

    I haven't worked with Dokku, so I'll explain how to create the infrastructure simply with Docker.

    1. You have a global IP (for example 1.1.1.1) on your router, forwarding ports 80 and 443 to an internal server IP (say 192.168.1.100): global 80 to local 8080 and global 443 to local 8443.
    2. You have an Nginx container listening on 192.168.1.100:8080 and 192.168.1.100:8443, proxying all requests from 8443 to 127.0.0.1:8888 over plain HTTP.
    3. You have an application container listening on 127.0.0.1:8888, accessible only from the local server.
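    A per-domain server block for step 2 could look roughly like this (a non-authoritative sketch; the hostname, cert paths, and ports are placeholders matching the example above):

    ```nginx
    server {
        listen 8443 ssl;
        server_name customer.domain.com;

        # paths where certbot typically places issued certificates
        ssl_certificate     /etc/letsencrypt/live/customer.domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/customer.domain.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8888;  # plain HTTP to the app container
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```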

    The workflow will be:

    1. The customer registers a DNS A record (customer.domain.com A 1.1.1.1), pointing it at your server.
    2. The customer adds the domain to your DB through some control web page (depends on your service).
    3. The new DB record triggers a worker container.
    4. The worker container checks whether the domain exists yet, using an nslookup-style tool.
    5. Once the domain resolves, the worker runs the Let's Encrypt script and issues a new certificate (by placing the .well-known/acme-challenge file in the Nginx html folder).
    6. If the certificate is issued successfully, the worker adds a new config file to the Nginx container (better: mount the config directory from the Docker host) and reloads the Nginx configuration.
    7. Profit.
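    Steps 4-6 above could be sketched as a small worker function. The webroot path, contact address, and reload command are assumptions about the host setup, not something prescribed here; certbot's webroot mode serves the .well-known/acme-challenge file mentioned in step 5:

    ```python
    import subprocess

    # Hypothetical worker sketch: issue a cert via the HTTP-01 (webroot)
    # challenge, then signal Nginx to reload its configuration.
    # `run` is injectable so the logic can be exercised without certbot.
    def issue_certificate(domain: str, run=subprocess.run) -> bool:
        result = run(
            ["certbot", "certonly", "--webroot",
             "-w", "/usr/share/nginx/html",  # serves /.well-known/acme-challenge/
             "-d", domain, "--non-interactive", "--agree-tos",
             "-m", "admin@example.com"],     # placeholder contact address
            capture_output=True,
        )
        if result.returncode != 0:
            return False  # domain not resolving yet, or challenge failed
        # after writing the per-domain server block, reload without downtime
        run(["nginx", "-s", "reload"], check=True)
        return True
    ```

    A real worker would retry this on a schedule until the domain resolves (step 4) and only then write the Nginx config for it.
    
    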

    If you need to run your own DNS server, it must be controllable programmatically through some API so the worker can add DNS A records. (Good news: in this case you can use Let's Encrypt's DNS challenge by adding the acme-challenge code as a DNS TXT record instead of using the Nginx .well-known directory.)

    Ultimately, the implementation deeply depends on the infrastructure you have. This is a really big task with many details, so it's hard to give a better answer without knowing them.
