
I am trying to deploy a Python Flask application with Gunicorn and Nginx. I am trying to run both Gunicorn (WSGI) and Nginx in the same container, but Nginx does not start. If I log into the container I am able to start Nginx manually.
Below is my Dockerfile:


RUN apt-get clean && apt-get -y update

RUN apt-get -y install \
    nginx \
    python3-dev \
    curl \
    vim \
    build-essential \
    procps

WORKDIR /app

COPY requirements.txt /app/requirements.txt
COPY nginx-conf  /etc/nginx/sites-available/default
RUN pip install -r requirements.txt --src /usr/local/src

COPY . .

EXPOSE 8000
EXPOSE 80
CMD ["bash" , "server.sh"]

The server.sh file looks like this:


# turn on bash's job control
set -m

gunicorn  --bind  :8000  --workers 3 wsgi:app
service nginx start or /etc/init.d/nginx

Gunicorn is started by server.sh, but Nginx is not.

My aim is to later run these containers in Kubernetes. Should I (i) run both Nginx and Gunicorn in separate Pods, (ii) run them in the same Pod as separate containers, or (iii) run them in the same container in the same Pod?

3 Answers


  1. About choosing how to split containers between pods, that really depends on the use-case. If they talk to each other but perform separate tasks, I would go with two containers and one pod.

    Also, about your server.sh file: the reason Gunicorn starts but Nginx doesn’t is that Gunicorn doesn’t run in daemon mode by default, so it stays in the foreground and the script never reaches the line that starts Nginx. If you run gunicorn --help you’ll see this:

      -D, --daemon          Daemonize the Gunicorn process. [False]
    

    I still think it’s better to separate the containers but if you want it to just work, change it to this:

    # turn on bash's job control
    set -m
    
    gunicorn  --bind  :8000  --workers 3 wsgi:app -D
    service nginx start    # or: /etc/init.d/nginx start
    
  2. To answer your question regarding Kubernetes:

    It depends on what you want to do.

    Containers within the same Pod share the same network namespace, meaning that two containers in the same Pod can communicate with each other by contacting localhost. This means your packets never get sent over the network and communication between the two is always possible.
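
    As a rough sketch of the same-Pod option (the names and the backend image below are placeholders, not taken from your setup), a Pod spec along these lines gives both containers a shared localhost:

    # illustrative Pod manifest -- names and the backend image are assumptions
    apiVersion: v1
    kind: Pod
    metadata:
      name: flask-app
    spec:
      containers:
        - name: backend
          image: my-flask-image      # hypothetical image that runs gunicorn on :8000
          ports:
            - containerPort: 8000
        - name: proxy
          image: nginx               # its config can proxy_pass to http://localhost:8000
          ports:
            - containerPort: 80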

    If you split them up into separate Pods, you will want to create a Service object and let them communicate via that Service. Having them in two Pods allows you to scale them up and down individually and overall gives you more options to configure them individually, for example by setting up different kinds of security mechanisms.

    Which option you choose depends on your architecture and what you want to accomplish.
    Having two containers in the same Pod is usually only done when it follows a "Sidecar" pattern, which basically means that there is a "main" container doing the work and the others in the Pod simply assist the "main" container and have no reason whatsoever to exist on their own.

  3. My aim is to later run these containers in Kubernetes. Should I (i) run both Nginx and Gunicorn in separate Pods

    Yes, this. This is very straightforward to set up (considering YAML files with dozens of lines "straightforward"): write a Deployment and a matching (ClusterIP-type) Service for the GUnicorn backend, and then write a separate Deployment and matching (NodePort- or LoadBalancer-type) Service for the Nginx proxy. In the Nginx configuration, use a proxy_pass directive, pointing at the name of the GUnicorn Service as the backend host name.
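
    A stripped-down sketch of what the backend half could look like (the names, labels, and image here are made up for illustration; the Nginx half would be a second Deployment plus its own NodePort- or LoadBalancer-type Service written the same way):

    # illustrative only -- names, labels, and the image are assumptions
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gunicorn-backend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: gunicorn-backend
      template:
        metadata:
          labels:
            app: gunicorn-backend
        spec:
          containers:
            - name: app
              image: registry.example.com/flask-app   # hypothetical image whose CMD runs gunicorn on :8000
              ports:
                - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gunicorn-backend    # the Nginx configuration points proxy_pass at this name
    spec:
      type: ClusterIP
      selector:
        app: gunicorn-backend
      ports:
        - port: 8000
          targetPort: 8000

    With names like these, the Nginx configuration would contain something like proxy_pass http://gunicorn-backend:8000;, using the Service name as the backend host name.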

    There’s a couple of advantages to doing this. If the Python service fails for whatever reason, you don’t have to restart the Nginx proxy as well. If you’re handling enough load that you need to scale up the application, you can run a minimum number of lightweight Nginx proxies (maybe 3 for redundancy) with a larger number of backends depending on the load. If you update the application, Kubernetes will delete and recreate the Deployment-managed Pods for you, and again, using a separate Deployment for the proxies and backends means you won’t have to restart the proxies if only the application code changes.

    So, to address the first part of the question:

    I am trying to deploy a Python Flask application with Gunicorn and Nginx.

    In plain Docker, for similar reasons, you can run two separate containers. You could manage this in Docker Compose, which has a much simpler YAML file layout; it would look something like

    version: '3.8'
    services:
      backend:
        build: . # Dockerfile just installs GUnicorn, CMD starts it
      proxy:
        image: nginx
        volumes:
          - ./nginx-conf:/etc/nginx/conf.d # could build a custom image too
            # configuration specifies `proxy_pass http://backend:8000`
        ports:
          - '8888:80'
    
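    With this layout, docker-compose up --build should start both containers, and the application would then be reachable through the Nginx proxy at http://localhost:8888.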

    This sidesteps all of the trouble of trying to get multiple processes running in the same container. You can simplify the Dockerfile you show:

    # Dockerfile
    FROM python:3.9
    RUN apt-get update \
     && DEBIAN_FRONTEND=noninteractive \
        apt-get install --no-install-recommends --assume-yes \
        python3-dev \
        build-essential
    # (don't install irrelevant packages like vim or procps)
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    EXPOSE 8000
    # (don't need a shell script wrapper)
    CMD gunicorn --bind :8000 --workers 3 wsgi:app
    