
I have a Django app that I need to dockerize. The built-in database gets updated from time to time, so I need those changes reflected on the host machine. My Dockerfile looks like this:

FROM python:3.11-slim-buster
ENV PYTHONUNBUFFERED=1
RUN pip install --upgrade pip

WORKDIR /app

COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt

And the docker-compose file is:

version: '3'

services:
  better_half:
    container_name: better-half-django
    build:
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    env_file:
      - .env
    ports:
      - "8000:8000"

I have used a bind mount to reflect the changes, and the app runs perfectly with this configuration. But I am not sure whether it is best practice or not.

I want to know the best practice. Should I use the COPY instruction in the Dockerfile to copy all of the project code into the image's /app directory? I am a newbie. Can anyone help me? Thanks in advance.

2 Answers


  1. A bind mount like the one in your Docker Compose file is a popular choice for development environments because it lets you see changes made on your machine inside a running container without rebuilding the Docker image. This simplifies development: you can edit code locally and quickly observe the impact of those changes.

    For production, however, it is best to COPY your code into the Docker image. The code is then self-contained within the image, which gives you a consistent deployment across environments and removes any reliance on the host machine's code during deployment.

    Your Dockerfile can be modified as follows to include your Django app code in the Docker image:

    FROM python:3.11-slim-buster
    ENV PYTHONUNBUFFERED=1
    RUN pip install --upgrade pip

    WORKDIR /app

    # copy the project code into the image instead of bind-mounting it at runtime
    COPY . /app

    RUN pip install --no-cache-dir -r requirements.txt
    

    This Dockerfile copies everything from your current directory into the image's /app directory, and the subsequent pip install command installs the dependencies listed in requirements.txt.

    With this technique you need to rebuild the Docker image whenever your code changes: run docker-compose build before starting the containers with docker-compose up.
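
    For example, using the docker-compose CLI (the newer docker compose plugin accepts the same subcommands):

    # rebuild the image so it picks up the latest code
    docker-compose build

    # start (or recreate) the containers from the rebuilt image
    docker-compose up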

    Since bind mounts are no longer being used to synchronize code changes, remove the volumes section from your Docker Compose file.
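
    A minimal sketch of what your Compose file could look like once the volumes section is removed (same service, command, and port as in your original file):

    version: '3'

    services:
      better_half:
        container_name: better-half-django
        build:
          context: .
        command: bash -c "python manage.py runserver 0.0.0.0:8000"
        env_file:
          - .env
        ports:
          - "8000:8000"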

    This setup is ideal for production, where consistency and reproducibility are essential. For development, bind mounts still give you a more streamlined and quicker feedback loop; one way to combine the two is sketched below.
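
    A common pattern (a standard Compose feature, not something from your original setup) is to keep the bind mount in a separate docker-compose.override.yml that only exists on development machines; docker-compose merges it automatically with docker-compose.yml when it is present:

    # docker-compose.override.yml (development only)
    version: '3'

    services:
      better_half:
        volumes:
          - .:/app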

  2. Your image doesn’t seem to contain any of the actual application code; instead the code is injected via a bind mount. I would not consider this a best practice. Normally the image should be completely self-contained. Consider deploying the application to a remote server: you should be able to install Docker and copy the image to the server as-is, without also copying the source code separately. (In practice you do need to copy the docker-compose.yml file.)

    You mention a database. A very common practice in Docker is to run a relational database in a separate container. There are prebuilt images, for example for PostgreSQL, that you can use directly.

    So in your Dockerfile, do COPY your application code in, and do declare the default CMD your application should run.

    FROM python:3.11-slim-buster
    ENV PYTHONUNBUFFERED=1
    WORKDIR /app
    
    COPY requirements.txt ./
    RUN pip install --no-cache-dir -r requirements.txt
    
    # add
    COPY ./ ./
    CMD python manage.py runserver 0.0.0.0:8000
    # (consider making manage.py executable to avoid saying `python` explicitly)
    

    In your Django configuration, it helps to allow the database settings to be provided via environment variables. This matters especially because the database hostname will be different inside a container than in your host development environment.

    # settings.py
    import os

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("DATABASE_NAME", "mydatabase"),
            "USER": os.environ.get("DATABASE_USER", "mydatabaseuser"),
            "PASSWORD": os.environ.get("DATABASE_PASSWORD", "mypassword"),
            "HOST": os.environ.get("DATABASE_HOST", "127.0.0.1"),
            "PORT": os.environ.get("DATABASE_PORT", "5432")
        }
    }
    

    Now in your docker-compose.yml file you need to provide both the application and its database. I’ve written the connection information directly into the Compose file here, but it could go into the .env file as well. The database ports: aren’t required, but you can add them to make the database reachable from the host for local development (you only need to change the first port number if another PostgreSQL server is already running on the host).

    version: '3.8'
    services:
      better_half:
        build: .
        env_file:
          - .env
        ports:
          - "8000:8000"
        environment:
          DATABASE_HOST: database
          DATABASE_USER: postgres
          DATABASE_NAME: postgres
          DATABASE_PASSWORD: passw0rd
      database:
        image: postgres:15
        volumes:
          - dbdata:/var/lib/postgresql/data
        environment:
          POSTGRES_PASSWORD: passw0rd
        # ports: ['5432:5432']
    volumes:
      dbdata:
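
    If you prefer to keep the connection details out of the Compose file, a hypothetical .env could carry the same values (the variable names are the ones assumed in settings.py above, and Compose passes them into the container via env_file; in that case drop the environment: block from better_half, since values set there take precedence):

    # .env (example values only)
    DATABASE_HOST=database
    DATABASE_NAME=postgres
    DATABASE_USER=postgres
    DATABASE_PASSWORD=passw0rd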
    

    To answer your original question: note that there are no bind mounts at all in this setup. You may need to back up and restore the database to run the application somewhere else, but you do not need any of the code separately from the Docker images.
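
    For the database backup itself, one option (just a sketch, assuming the database service name from the Compose file above and the tooling shipped in the postgres image) is to run pg_dump through the running container:

    # dump the database from the running container into a file on the host
    docker-compose exec -T database pg_dump -U postgres postgres > backup.sql

    # restore it on another machine by piping the dump into psql
    docker-compose exec -T database psql -U postgres postgres < backup.sql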
