
I have a Django project that runs in a Docker container. Locally, when I updated the project's models, I ran python manage.py makemigrations and python manage.py migrate inside the Docker container: I would enter the container from the terminal and run both commands to apply the changes. This worked locally. However, now that I have pushed my code to production (AWS ECS), manage.py is not being run to update the existing database. There is no evidence in the logs that manage.py runs, even though it is called in my entrypoint.sh script. Are there any additional configs that I need to set up, either in the Django project or in AWS ECS, to be able to run migrations in the AWS ECS environment? The project is otherwise running fine, so the Docker container is up; python manage.py migrate just never runs.
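For reference, the local workflow was roughly the following (the exact command and container name will differ; the name below is just an example):

# open a shell in the running container (container name is an example)
docker exec -it django_web /bin/sh

# inside the container, generate and apply the migrations
python manage.py makemigrations
python manage.py migrate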

I have included the Dockerfile below for reference:

FROM python:3.11-slim-bookworm

ENV uid=1000
ENV gid=1000

ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV TZ=UTC
ENV USER=app_user UID=$uid GID=$gid

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN apt-get update \
  && apt-get install -y --no-install-recommends tzdata libpcre3 libpython3-dev libpcre3-dev python3-pip build-essential libpq-dev \
  && rm -rf /var/lib/apt/lists/*

RUN mkdir -p /app

COPY Makefile pyproject.toml poetry.lock /tmp/

RUN pip install poetry
RUN poetry config virtualenvs.create false

WORKDIR /tmp


RUN make requirements.txt

RUN pip install --no-cache-dir -r requirements.txt \
    && rm -rf requirements.txt \
    && groupadd -f --gid "${GID}" "${USER}" \
    && useradd ${USER} --uid ${UID} --gid ${GID} \
    && install -d -m 0755 -o ${USER} -g ${USER} /app/static

RUN apt-get purge -y libpython3-dev python3-pip build-essential libpq-dev libpcre3-dev \
    && apt-get clean

RUN rm -rf /tmp/*

COPY book_store /app

COPY entrypoint.sh /app/entrypoint.sh
COPY post-deployment.sh /app/post-deployment.sh

WORKDIR /app

ENTRYPOINT ["./entrypoint.sh"]

I have included the entrypoint.sh below for reference:

#!/bin/sh
./manage.py wait_for_database
./manage.py migrate
./manage.py createsuperuser --noinput || true
./manage.py collectstatic --noinput
exec uwsgi --ini uwsgi.ini

Any assistance would be greatly appreciated!

2 Answers


  1. It could be a permissions issue. You may need to ensure that the user running the Django application within the Docker container has permission to run the migration commands.

    Try being explicit by setting the owner of the /app directory to the specified user. Add the following below the mkdir command in your Dockerfile, or anywhere suitable.

    RUN mkdir -p /app
    
    # Set ownership for the /app directory
    RUN groupadd -f --gid "${GID}" "${USER}" \
        && useradd ${USER} --uid ${UID} --gid ${GID} \
        && chown -R ${USER}:${USER} /app
    
  2. You are running the makemigrations command from a Docker container locally. Any changes you make inside a Docker container do not get saved to the image the container was created from. A Docker image is like a snapshot of a point in time; anything you do in containers created from that image does not affect the image itself. To create a new Docker image from a running container, including the changes you have made inside it, you would use the docker commit command. There is an issue with that approach, however: every time you build an image from the Dockerfile you start from an empty set of migrations, so the migrations that have already been applied will be missing from the /migrations folder the next time you create an image, which is going to result in errors.
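    For illustration only, committing a running container to a new image would look something like the sketch below (the container and image names are placeholders); as explained above, this is not the approach I recommend:

    # snapshot the running container's filesystem (including any migration
    # files it generated) into a new image; both names are placeholders
    docker commit my_running_container book_store:with-migrations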

    I do not recommend running makemigrations in entrypoint.sh either, because you will run into the same issue: the previous migration files will be missing, and the new migrations will not match exactly what has already been applied to the database, causing errors on subsequent makemigrations runs.

    The typical way to manage Django migration files is to create them once and save them as part of your source code in your source repository (Git). You do this before building the container image from the Dockerfile, so that the migrations are automatically part of the Python code that gets copied into the image. Because the migration files are stored in your Git repository, when you later run makemigrations again it will find the previous migrations in the /migrations folder and build on them, instead of creating an entirely new set of migrations each time. This is the method I recommend.
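    A minimal sketch of that workflow, assuming the Django code lives in the book_store directory shown in your Dockerfile (paths, globs, and names below are illustrative):

    # on your development machine, from the Django project directory
    python manage.py makemigrations
    python manage.py migrate          # apply and sanity-check locally

    # commit the generated migration files alongside the rest of the code
    git add */migrations/*.py
    git commit -m "Add migrations for model changes"

    # build the image as usual; COPY book_store /app now includes the
    # migration files, and entrypoint.sh only has to run "manage.py migrate"
    docker build -t book_store .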
