Context: I am building an application that will be accessed concurrently by 1000 users, using Redis as the database and the node-redis client. I read that it's recommended to keep only one or a few connections open from each instance of the application, because opening connections is expensive.
Question: Suppose the client sends a command to Redis. This command is in transit to Redis or being executed (essentially, not completed) when the application needs to send another command. Does the client wait for the first command to be completed or does it fire the second command to Redis right then?
This is important because if the client waits, the application cannot really make full use of async I/O, and the network calls will become a big bottleneck when commands from a thousand users are trying to reach Redis. It's better for the commands to be queued up at Redis already than to have them wait on the client for an earlier command to complete before going over the network.
Thanks!
2 Answers
You can run multiple instances of your node application and then balance the load using Nginx (assuming you’re using it). The client won’t have to wait and the load will be balanced.
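A minimal Nginx sketch of that setup (the upstream name and ports are assumptions, not from your config):

```nginx
upstream node_app {
    # hypothetical: three instances of the same Node application
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
    }
}
```

Each instance keeps its own Redis connection(s), so the load is spread across several client connections as well.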
You can also use Redis Pub/Sub for messaging between those instances.
Here’s documentation for reference.
Update: Found a detailed step-by-step explanation here.
It depends on how the Redis client is implemented.
On the Redis server side, all commands are executed serially, so a single connection is not a problem for the Redis server itself.
On the client side, if your client library uses blocking TCP connections (like Jedis), a few connections will be a bottleneck, because your commands will block on the client side waiting for an idle connection.
But if the client is async/NIO-based (like Lettuce), a single connection is fine, since the connection can be shared between threads and commands are pipelined over it without waiting for earlier replies.
Additionally, thousands of concurrent users will not be a problem for Redis itself, but you should also keep an eye on your web service if you are running a single web server.
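A minimal sketch of the async single-connection idea (a simplified model, not the actual node-redis or Lettuce internals): one connection keeps a FIFO queue of pending replies, so many concurrent callers share it without blocking each other.

```javascript
// Simplified model of one shared async connection: each command pushes
// a resolver onto a FIFO queue; "replies" arrive in arrival order and
// resolve the matching promise, so no caller blocks another.
class SingleConnection {
  constructor() { this.pending = []; }

  send(cmd) {
    return new Promise((resolve) => {
      this.pending.push(resolve);
      // simulate the server answering in arrival order
      setImmediate(() => {
        const next = this.pending.shift();
        next(`${cmd} OK`);
      });
    });
  }
}

async function demo() {
  const conn = new SingleConnection();
  // 1000 "users" issue commands concurrently over the one connection
  const replies = await Promise.all(
    Array.from({ length: 1000 }, (_, i) => conn.send(`GET user:${i}`)));
  console.log(replies[0], replies[999]);
  return replies;
}

demo();
```

All 1000 commands are queued and answered over the single connection; none of the callers waits for another caller's round trip before sending.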