I have a (fairly complex) django app hosted on AWS via Elastic Beanstalk, and am trying to implement websockets on it using django-channels.
Here is the docker-compose.yml file sent to elastic beanstalk:
version: "3.8"
services:
  migration:
    build: .
    env_file:
      - .env
    command: python manage.py migrate
  api:
    build: .
    env_file:
      - .env
    command: daphne myapp.asgi:application --port 8000 --bind 0.0.0.0
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    expose:
      - 8000
    depends_on:
      - migration
  nginx:
    image: nginx:1.21.0-alpine
    env_file:
      - .env
    volumes:
      - ./nginx/templates:/etc/nginx/templates
      - ./nginx/certs:/etc/nginx/certs
    ports:
      - 80:80
      - 443:443
    depends_on:
      - api
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
Here is the nginx config file:
server {
    listen 80;
    listen [::]:80;

    location /health/ {
        proxy_pass http://api:8000/health/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    root /usr/share/nginx/html/;
    index index.html;

    ssl_certificate /etc/nginx/certs/public.crt;
    ssl_certificate_key /etc/nginx/certs/myapp.pem;

    location / {
        proxy_pass http://api:8000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 300s;
    }

    client_max_body_size 5M;
}
When running on my local machine, everything runs smoothly, but after deploying this configuration on an EC2 instance, when I try to connect to my websocket using:
var websocket = new WebSocket('wss://my-app.com/ws/room/13/');
All I get from my nginx logs is the following (nothing is output in my django logs):
ZZZ.ZZ.ZZ.ZZ - - [03/May/2023:15:16:34 +0000] "GET /ws/room/13/ HTTP/1.1" 400 5 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0" "XX.XXX.XX.XX, YY.YYY.YYY.YY"
If I turn on nginx debug logging, I can isolate the following error:
2023/05/03 15:16:34 [debug] 36#36: *21 http proxy status 400 "400 WebSocket connection denied - Hixie76 protocol not supported."
An error which turns up very little information online… I have tried every websocket client I could find, and I still get the same error.
Does anyone have a clue about what is happening?
2 Answers
So I fixed this issue a while ago, and here is some feedback! Mainly, the issue was related to CloudFront, which I am using to manage access to my app.
The issue
My issue was due to the WebSocket-related HTTP headers not being properly forwarded by CloudFront to my app, so when nginx didn't find them, it triggered this (inaccurate) "Hixie76 protocol not supported" error.
The fix
In CloudFront, I simply edited the Origin request policy for the behavior of my websocket route, and added the following headers to be included in origin requests:
Sec-WebSocket-Key
Sec-WebSocket-Version
Sec-WebSocket-Protocol
Sec-WebSocket-Accept
No more error after that!
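For anyone managing the CloudFront distribution as code rather than through the console, the same header whitelist can be expressed roughly like this (a CloudFormation sketch; the resource and policy names are illustrative, and the cookie/query string behaviours are placeholders rather than taken from my actual setup):

Resources:
  WebSocketOriginRequestPolicy:
    Type: AWS::CloudFront::OriginRequestPolicy
    Properties:
      OriginRequestPolicyConfig:
        Name: websocket-headers          # illustrative name
        HeadersConfig:
          HeaderBehavior: whitelist
          Headers:
            - Sec-WebSocket-Key
            - Sec-WebSocket-Version
            - Sec-WebSocket-Protocol
            - Sec-WebSocket-Accept
        CookiesConfig:
          CookieBehavior: all            # keep whatever your app already needs
        QueryStringsConfig:
          QueryStringBehavior: all

You would then attach this origin request policy to the cache behavior that serves your /ws/ route.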
The idea is that the daphne service is actually running our Python application locally on the server. Whereas your daphne command specifies an asgi.py file, our server configuration also needs to specify a path to reach this websocket application running locally. We need to redirect wss to ws. The following is an Apache server configuration, which I hope you can translate to nginx. You may also need to specify the SSL certificate for an encrypted connection in production.
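A minimal sketch of such a configuration, assuming daphne is listening on local port 8000 and mod_proxy, mod_proxy_http and mod_proxy_wstunnel are enabled (the server name, certificate paths and port are placeholders):

# Terminate TLS at Apache and forward wss traffic to daphne as plain ws
<VirtualHost *:443>
    ServerName my-app.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/public.crt
    SSLCertificateKeyFile /etc/ssl/private/myapp.pem

    # WebSocket routes first, so they match before the catch-all
    ProxyPass        /ws/ ws://127.0.0.1:8000/ws/
    ProxyPassReverse /ws/ ws://127.0.0.1:8000/ws/

    # Everything else goes to daphne over plain HTTP
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
    ProxyPreserveHost On
</VirtualHost>

In nginx the equivalent is the Upgrade/Connection header pair already present in the question's config; the key point is that the proxy speaks plain ws/http to daphne while clients connect over wss.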