The issue is that Socket.IO hangs at around 250 concurrent connections. Whether I start the server with node app.js --nouse-idle-notification --max-old-space-size=8192 --expose-gc (with ulimit -n 2048) or with pm2, the effect is the same.
I'm running CentOS 7 on a proper VPS with 30 GB of RAM.
I have tested locally and on the production server, with pm2 and without, with cluster and without. It always stops at 252/256 connections and reports that no more resources are available. I'm only connecting; nothing else is sent.
Here’s the most basic example that I’ve used.
import express from "express";
import http from "http";
import { Server } from "socket.io";

const app = express();
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  console.log('a user connected');
});

server.listen(3000, () => {
  console.log('listening on *:3000');
});
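To confirm where the count actually caps on the Node side (rather than in the proxy or the client), one option is to log Socket.IO's own connection counter. A minimal sketch, added to the handler above; io.engine.clientsCount is the number of clients currently connected to this process:

io.on('connection', (socket) => {
  // clientsCount is maintained by the underlying Engine.IO server
  console.log(`connected: ${io.engine.clientsCount} clients on this process`);

  socket.on('disconnect', (reason) => {
    // "transport close" here means the other end (client or proxy) closed the underlying connection
    console.log(`disconnected (${reason}): ${io.engine.clientsCount} clients left`);
  });
});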
I have Socket.IO running on an SSL domain behind a reverse proxy, along with cluster nodes. The web server is Apache.
The problem is simple: once the server reaches 256 connections, it shuts down, which is weird.
The logs (captured with pm2 logs > yourlogFile.txt &) show the disconnect reason transport close.
I'm running a stress test with npx artillery run my-scenario.yml. The yml file is the default from the Socket.IO docs, except that transports is set to websocket only.
app.js uses Redis for the cluster adapter. I'm using admin-ui to monitor the connections. It shows 6 servers created, and once it reaches 256 connections (1 connection from admin-ui and 255 connections from the artillery stress test), it shuts down.
import { createServer } from "http";
import { Server } from "socket.io";
import { createAdapter } from "@socket.io/redis-adapter";
import { setupWorker } from "@socket.io/sticky";
import { createClient } from "redis";

const httpServer = createServer(app);
const pubClient = createClient({ host: 'localhost', port: 6379 });
const subClient = pubClient.duplicate();
const io = new Server(httpServer, {...})
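The server options and the adapter wiring are elided above ({...}); for reference, a minimal sketch of the usual @socket.io/redis-adapter plus @socket.io/sticky setup (my real options may differ, and with a redis v4 client both clients would need to be connected first):

// broadcast events across all cluster workers through Redis
io.adapter(createAdapter(pubClient, subClient));

// register this worker with the sticky-session master (set up with setupMaster there)
setupWorker(io);

io.on("connection", (socket) => {
  console.log(`worker ${process.pid}: a user connected`);
});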
Apache’s config is straightforward:
SSLEngine on
ProxyRequests off
ProxyPass "/websocket/socket" balancer://nodes_ws/
ProxyPassReverse "/websocket/socket" balancer://nodes_ws/
ProxyTimeout 3
Header add Set-Cookie "BlazocketServer=sticky.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://nodes_polling">
    BalancerMember "https://localhost:3000" route=app01
    BalancerMember "https://localhost:3001" route=app02
    BalancerMember "https://localhost:3002" route=app03
    ProxySet stickysession=BlazocketServer
</Proxy>
<Proxy "balancer://nodes_ws">
    BalancerMember "ws://localhost:3000" route=app01
    BalancerMember "ws://localhost:3001" route=app02
    BalancerMember "ws://localhost:3002" route=app03
    ProxySet stickysession=BlazocketServer
</Proxy>
RewriteEngine On
#RewriteCond %{QUERY_STRING} transport=polling
#RewriteRule /(.*)$ http://localhost:3000/$1 [P]
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule /(.*) balancer://nodes_ws/$1 [P,L]
RewriteCond %{QUERY_STRING} transport=polling
RewriteRule /(.*) balancer://nodes_polling/$1 [P,L]
The scenario is simple:
config:
  target: "myurl"
  socketio:
    path: "mypath"
    transports: ["websocket"]
  phases:
    - duration: 60
      arrivalRate: 10
  engines:
    socketio-v3: {}

scenarios:
  - name: My sample scenario
    engine: socketio-v3
    flow:
      # wait for the WebSocket upgrade (optional)
      - think: 1

      # basic emit
      - emit:
          channel: "hello"
          data: "world"

      # emit an object
      - emit:
          channel: "hello"
          data:
            id: 42
            status: "in progress"
            tags:
              - "tag1"
              - "tag2"

      # emit with acknowledgement
      - emit:
          channel: "ping"
          acknowledge:
            match:
              value: "pong"

      # do nothing for 30 seconds then disconnect
      - think: 30
After manually connecting in a for-loop with a timed interval, I get failed: Insufficient resources.
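The manual check was a loop roughly along these lines (a sketch using socket.io-client; the URL, path, interval, and counts are placeholders; forceNew makes each call open its own underlying connection instead of reusing the first one):

import { io } from "socket.io-client";

const sockets = [];

// open one new connection every 50 ms, up to 300 sockets
const timer = setInterval(() => {
  const socket = io("https://myurl", {
    path: "mypath",
    transports: ["websocket"],
    forceNew: true, // otherwise all io() calls multiplex over one manager/connection
  });

  socket.on("connect", () => console.log(`connected, ${sockets.length} sockets open`));
  socket.on("connect_error", (err) => console.log("connect_error:", err.message));

  sockets.push(socket);
  if (sockets.length >= 300) clearInterval(timer);
}, 50);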
2 Answers
Try multiplexing to reduce the number of underlying socket connections.
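In Socket.IO, multiplexing is done with namespaces: client connections to different namespaces on the same URL share a single underlying WebSocket (the same Manager), so traffic can be split logically without opening extra connections. A rough sketch; the namespace names are placeholders:

// server side: attach namespaces to the existing io instance
io.of("/chat").on("connection", (socket) => {
  console.log("client joined /chat");
});
io.of("/news").on("connection", (socket) => {
  console.log("client joined /news");
});

// client side: both handles reuse one underlying connection
// (same Manager), because the host and options are identical
import { io } from "socket.io-client";
const chat = io("https://myurl/chat", { transports: ["websocket"] });
const news = io("https://myurl/news", { transports: ["websocket"] });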
I think the issue is Apache and its MaxClients setting, which defaults to 256. This thread on serverfault explains in detail how to go about changing the setting.
To understand how Apache threads translate into the maximum number of clients that can be served, this documentation and this discussion are good references.
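A note on the directive name: since Apache 2.4, MaxClients is called MaxRequestWorkers (the old name still works as an alias), and with mod_proxy_wstunnel each proxied WebSocket connection holds a worker for its whole lifetime, so this limit effectively caps concurrent Socket.IO clients. A sketch of raising it for the event MPM; the numbers are illustrative, not recommendations:

<IfModule mpm_event_module>
    ServerLimit              16
    ThreadsPerChild          64
    # MaxRequestWorkers must not exceed ServerLimit * ThreadsPerChild
    MaxRequestWorkers        1024
    MaxConnectionsPerChild   0
</IfModule>

With the prefork MPM the equivalent is raising MaxRequestWorkers together with ServerLimit.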