I have a local Docker setup consisting of four containers: a flask web app, MySQL, Redis, and an RQ worker.

The setup is essentially the same as Miguel Grinberg’s Flask Mega-Tutorial. Here are links for his tutorial and his code.

The only difference in my case is that I’ve replaced his export-blog-posts function, which runs on the rq-worker, with one that is extremely computationally intensive and long-running (about 30 minutes).
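
One detail that matters for a 30-minute task: RQ's default job timeout is 180 seconds, so the job has to be enqueued with an explicit timeout. A rough sketch (the task function here is only a placeholder, not the actual code):

from redis import Redis
from rq import Queue

from app.tasks import long_computation  # placeholder for the 30-minute task

q = Queue('dyson-tasks', connection=Redis.from_url('redis://redis-server:6379/0'))

# RQ terminates jobs that run past their timeout (default 180 s),
# so give the long-running job explicit headroom.
job = q.enqueue(long_computation, job_timeout=2400)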

What is the best way for me to deploy this application for production?

I only expect it to be accessed by one or two people at a time, and for them to visit only once or twice a week.

I’ve been looking into Kubernetes examples but I’m having difficulty translating them to my setup and figuring out how to deploy to GCP. I’m open to other deployment options.

Here are the docker run commands from the tutorial:

docker run --name redis -d -p 6379:6379 redis:3-alpine

docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
    -e MYSQL_DATABASE=flaskapp -e MYSQL_USER=flaskapp \
    -e MYSQL_PASSWORD=mysqlpassword \
    mysql/mysql-server:5.7

docker run --name rq-worker -d --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e [email protected] -e MAIL_PASSWORD=mysqlpassword \
    --link mysql:dbserver --link redis:redis-server \
    -e DATABASE_URL=mysql+pymysql://flaskapp:mypassword@dbserver/flaskapp \
    -e REDIS_URL=redis://redis-server:6379/0 \
    --entrypoint venv/bin/rq \
    flaskapp:latest worker -u redis://redis-server:6379/0 dyson-tasks

docker run --name flaskapp -d -p 8000:5000 --rm -e SECRET_KEY=my_secret_key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e [email protected] -e MAIL_PASSWORD=mypassword \
    --link mysql:dbserver --link redis:redis-server \
    -e DATABASE_URL=mysql+pymysql://flaskapp:mysqlpassword@dbserver/flaskapp \
    -e REDIS_URL=redis://redis-server:6379/0 \
    flaskapp:latest

2 Answers


  1. Since you tagged the question with Kubernetes and Google Cloud Platform, I assume that is the direction you want to go.

    When deploying to a cloud platform, consider using a cloud-ready storage/database solution. A single-node MySQL container is not cloud-ready storage out of the box; consider using e.g. Google Cloud SQL instead.
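
    If you do move to Cloud SQL, a common pattern is to run the Cloud SQL Auth Proxy as a sidecar and keep the SQLAlchemy URL pointed at localhost. A rough sketch, reusing the credentials and database name from the question and assuming the proxy is already set up:

    import os
    from sqlalchemy import create_engine

    # With a Cloud SQL Auth Proxy sidecar listening on 127.0.0.1:3306,
    # the connection string barely changes from the Docker one.
    DATABASE_URL = os.environ.get(
        "DATABASE_URL",
        "mysql+pymysql://flaskapp:mysqlpassword@127.0.0.1:3306/flaskapp",
    )
    engine = create_engine(DATABASE_URL, pool_pre_ping=True)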

    Your “flask web app” can perfectly well be deployed as a Deployment to Google Kubernetes Engine – but this requires that your app is stateless and follows the twelve-factor app principles.
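
    In practice that mostly means all configuration comes from environment variables, which the tutorial's config already does. A minimal sketch of the pattern (attribute names mirror the tutorial, the default value is made up):

    import os

    class Config:
        # Backing services are located via the environment, so the same
        # image runs under docker run and as a GKE Deployment.
        SECRET_KEY = os.environ.get("SECRET_KEY")
        SQLALCHEMY_DATABASE_URI = os.environ.get("DATABASE_URL")
        REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")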

    Your Redis can also be deployed to Kubernetes, but you need to think about how strict your availability requirements are. If you don’t want to deal with that, you can also use managed Redis, e.g. Google Memorystore – a fully managed in-memory data store service for Redis.

    If you decide to use a fully managed cache, you could potentially deploy your “flask web app” as a container on Google Cloud Run – this is a more managed solution than a full Kubernetes cluster, but also more limited. The good thing here is that you only pay per request.

  2. I made a getting-started project for RQ on GCP/GKE that should roughly fit your needs.
    https://github.com/crispyDyne/GKE-rq

    It consists of three workloads:

    • Leader: A flask app that receives external traffic and enqueues jobs.
    • Worker: Starts an rq worker.
    • Dashboard: Serves a dashboard for monitoring the queue.

    A Google Memorystore instance is used for the redis server. If you are looking to do this on the cheap, it could be swapped out for a container running redis (but I was lazy).

    How to create a Memorystore redis instance: https://cloud.google.com/memorystore/docs/redis/quickstart-gcloud

    Also, I use a node port to expose the Leader app instead of a load balancer. Load balancers are surprisingly expensive, and for small/cheap projects a node port should work fine.


    There were a few non-obvious steps for a novice (like me) to get everything talking.

    How to connect the "Leader" to the redis queue.

    Get the redis IP from the GCP console or with gcloud redis instances describe ....

    In your flask app:

    from redis import Redis
    from rq import Queue

    redis_conn = Redis(host="{redisIP}", port=6379, db=0)  # Memorystore instance IP
    q = Queue('rq-server', connection=redis_conn)
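
    Once that connection exists, the Leader can enqueue work and report on it. A small hypothetical example of what the routes might look like (the task module and endpoints are made up, not part of the repo):

    from flask import Flask, jsonify
    from redis import Redis
    from rq import Queue
    from rq.job import Job

    from tasks import long_computation  # hypothetical task module shared with the Worker

    app = Flask(__name__)
    redis_conn = Redis(host="{redisIP}", port=6379, db=0)
    q = Queue('rq-server', connection=redis_conn)

    @app.route('/start')
    def start():
        # Enqueue the job and hand the id back to the caller.
        job = q.enqueue(long_computation, job_timeout=2400)
        return jsonify(job_id=job.get_id())

    @app.route('/status/<job_id>')
    def status(job_id):
        # Look the job up in Redis and report its current state.
        job = Job.fetch(job_id, connection=redis_conn)
        return jsonify(status=job.get_status())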
    

    How to connect the "Folower" to the redis queue.

    Dockerfile:

    FROM python:3
    COPY . /app
    WORKDIR /app
    RUN pip install --no-cache-dir -r requirements.txt
    ENV PORT 6379
    CMD [ "rq", "worker","--url", "redis://{redisIP}:6379", "rq-server" ]
    

    How to connect the "Dashboard" to the redis queue.

    Dockerfile:

    FROM python:3
    COPY . /app
    WORKDIR /app
    RUN pip install --no-cache-dir -r requirements.txt
    ENV PORT 9181
    CMD [ "rq-dashboard","-u", "redis://{redisIP}:6379"]
    

    How to expose the Leader through a node port (or similarly the Dashboard).

    Note the target port: it should be 8080 for the Flask app, or 9181 for the rq-dashboard.

    kubectl expose deployment rq-leader --name rq-leader-service \
        --type NodePort --port 80 --target-port 8080
    

    Then open the node's external IP at the {NodePort}. Get the node port from the GCP console or with kubectl get services ....

    gcloud compute firewall-rules create node-port-leader --allow tcp:{NodePort}
    
    