I’m running a Django application in a Docker container, and I’m having trouble serving static files in production. Everything works fine locally, but when I deploy to production, the static files don’t load, and I get 404 errors.
Here are the relevant parts of my setup:
Django settings.py:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'build')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = '/vol/web/static'
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'build', 'static')]
The build folder was generated by the npm run build command in a React application.
After running collectstatic, the volume /vol/web/static is correctly populated. However, the browser shows 404 errors for the static files, e.g.,

GET https://site/static/js/main.db771bdd.js [HTTP/2 404 161ms]
GET https://site/static/css/main.4b763604.css [HTTP/2 404 160ms]
Loading failed for the <script> with source “https://mysite/static/js/main.db771bdd.js”.

These files exist in the build/static directory, but I expected the browser to be served the static files collected into /vol/web/static.
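To double-check where the files actually end up, one can open a shell in each running container and list the mounted paths. Below is a sketch assuming ECS Exec is enabled on the task; the cluster name and task ID are placeholders, and the container names are the ones from the task definition further down:

# Shell into the api container (assumes ECS Exec is enabled)
aws ecs execute-command --cluster my-cluster --task <task-id> \
    --container api --interactive --command "/bin/sh"

# Inside api: the collectstatic output should be here
ls /vol/web/static/js /vol/web/static/css

# Shell into the proxy container the same way (--container proxy),
# then check what nginx actually sees under its alias path
ls /vol/static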
Nginx Configuration:
server {
    listen ${LISTEN_PORT};

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass ${APP_HOST}:${APP_PORT};
        include /etc/nginx/uwsgi_params;
        client_max_body_size 10M;
    }
}
Dockerfile:
FROM python:3.9-alpine
ENV PYTHONUNBUFFERED 1
ENV PATH="/scripts:${PATH}"

RUN pip install --upgrade "pip<24.1"

COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev \
    && apk add --update --no-cache --virtual .tmp-build-deps \
        gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev libffi-dev \
    && pip install -r /requirements.txt \
    && apk del .tmp-build-deps

RUN mkdir -p /app /vol/web/media /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol /app

COPY ./app /app
COPY ./scripts /scripts
COPY ./requirements.txt /requirements.txt

RUN chmod -R 755 /vol/web /app /scripts \
    && chmod +x /scripts/*

USER user
WORKDIR /app
VOLUME /vol/web

CMD ["entrypoint.sh"]
For further context, I deployed the Django application and the proxy in separate containers inside an ECS task:
[
    {
        "name": "api",
        "image": "${app_image}",
        "essential": true,
        "memoryReservation": 256,
        "environment": [
            {"name": "DJANGO_SECRET_KEY", "value": "${django_secret_key}"},
            {"name": "DB_HOST", "value": "${db_host}"},
            {"name": "DB_NAME", "value": "${db_name}"},
            {"name": "DB_USER", "value": "${db_user}"},
            {"name": "DB_PASS", "value": "${db_pass}"},
            {"name": "ALLOWED_HOSTS", "value": "${allowed_hosts}"},
            {"name": "S3_STORAGE_BUCKET_NAME", "value": "${s3_storage_bucket_name}"},
            {"name": "S3_STORAGE_BUCKET_REGION", "value": "${s3_storage_bucket_region}"}
        ],
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "${log_group_name}",
                "awslogs-region": "${log_group_region}",
                "awslogs-stream-prefix": "api"
            }
        },
        "portMappings": [
            {
                "containerPort": 9000,
                "hostPort": 9000
            }
        ],
        "mountPoints": [
            {
                "readOnly": false,
                "containerPath": "/vol/web",
                "sourceVolume": "static"
            }
        ]
    },
    {
        "name": "proxy",
        "image": "${proxy_image}",
        "essential": true,
        "portMappings": [
            {
                "containerPort": 8000,
                "hostPort": 8000
            }
        ],
        "memoryReservation": 256,
        "environment": [
            {"name": "APP_HOST", "value": "127.0.0.1"},
            {"name": "APP_PORT", "value": "9000"},
            {"name": "LISTEN_PORT", "value": "8000"}
        ],
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "${log_group_name}",
                "awslogs-region": "${log_group_region}",
                "awslogs-stream-prefix": "proxy"
            }
        },
        "mountPoints": [
            {
                "readOnly": true,
                "containerPath": "/vol/static",
                "sourceVolume": "static"
            }
        ]
    }
]
The entrypoint.sh script called by the Dockerfile is:
#!/bin/sh
set -e
python manage.py collectstatic --noinput --settings=app.settings.staging
python manage.py wait_for_db --settings=app.settings.staging
python manage.py wait_for_es --settings=app.settings.staging
python manage.py migrate --settings=app.settings.staging
python manage.py search_index --rebuild --settings=app.settings.staging -f
uwsgi --socket :9000 --workers 4 --master --enable-threads --module app.wsgi --env DJANGO_SETTINGS_MODULE=app.settings.staging
In Terraform, my code is essentially identical to the configuration found here.
I suspect there might be an issue with file permissions, but the errors persist even after changing the permissions. Any insights into what might be going wrong, or how to debug this further?
Any help would be greatly appreciated!
2 Answers
Try declaring the volumes in the ECS task definition, along the lines of the sketch below. Customize it depending on your setup. See Volumes in the AWS documentation.
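A minimal sketch of what that could look like: the top-level volumes key sits alongside containerDefinitions in the task definition, and its name must match the sourceVolume referenced by the mountPoints in both containers. The empty host object is just one possible configuration (suitable for the EC2 launch type); adjust it to your launch type:

"volumes": [
    {
        "name": "static",
        "host": {}
    }
]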
I think the error lies in your nginx config. You are setting the alias for the /static location to /vol/static instead of /vol/web/static:
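location /static {
    alias /vol/web/static;
}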