
I have an EC2 instance behind a load balancer. The security group attached to it allows inbound connections on port 6379 (both IPv4 and IPv6). I am able to connect with redis-cli:

redis-cli -h ec2-**-**-**-*.us-west-1.compute.amazonaws.com -p 6379

However, when I try to connect from Node.js with express-session, I get a ConnectionTimeoutError on EC2, while locally it works fine:

const { createClient } = require('redis')
const redisClient = createClient() // uses the default, localhost:6379
redisClient.connect().catch(console.error)

If there is a race condition here, as others have mentioned, why does it happen on EC2 and not locally? Is the default of localhost incorrect, given that there is a load balancer in front of the instance?
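
If it were purely an ordering race, I would expect awaiting the connection up front to make a difference; a simplified sketch of what I mean:

const { createClient } = require('redis')

const redisClient = createClient() // still the default localhost:6379

async function start() {
  try {
    // Wait for the connection before anything else uses the client.
    await redisClient.connect()
    console.log('connected to redis')
  } catch (err) {
    console.error('redis connection failed:', err)
  }
}

start()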

2 Answers


  1. Chosen as BEST ANSWER

    In a setup like this one (where the security group allows both IPv4 and IPv6 inbound traffic on port 6379), the Redis client should be instantiated explicitly:

    createClient({ socket: { host: '127.0.0.1', port: 6379 }, legacyMode: true })
    

    As Redis is self-hosted on EC2 with a load balancer in front of the instance, localhost may not be mapped to 127.0.0.1 as the loopback address. This means that the default createClient(), with no host or port specified, might try to establish the connection to a different internal address than the one Redis is listening on.

    (Make sure to allow inbound TCP traffic on port 6379, or whichever port you are using.)
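
    A fuller sketch of how this plugs into express-session (assuming node-redis v4 with connect-redis v6, which is the combination legacyMode is for; the session secret and port are placeholders):

    const express = require('express')
    const session = require('express-session')
    const RedisStore = require('connect-redis')(session) // connect-redis v6 style export
    const { createClient } = require('redis')

    const app = express()

    // Explicit host and port instead of relying on the client defaults.
    const redisClient = createClient({
      socket: { host: '127.0.0.1', port: 6379 },
      legacyMode: true, // exposes the v3-style callback API that connect-redis v6 expects
    })
    redisClient.connect().catch(console.error)

    app.use(session({
      store: new RedisStore({ client: redisClient }),
      secret: 'replace-with-a-real-secret', // placeholder
      resave: false,
      saveUninitialized: false,
    }))

    app.listen(3000)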


  2. Based on your comments, I’d say the problem is the load balancer. Redis communicates over a TCP-based protocol, and an ALB only handles HTTP/HTTPS traffic, so it cannot carry this protocol. Use a Network Load Balancer with a TCP listener instead, and make sure your security group rule allows TCP traffic on port 6379.
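
    As a rough sketch, the Node client would then point at the load balancer instead of localhost (the hostname below is a hypothetical NLB DNS name; connectTimeout is optional but makes failures surface faster):

    const { createClient } = require('redis')

    // Connect through the NLB's TCP listener on 6379.
    // The host is a hypothetical NLB DNS name; replace it with your own.
    const redisClient = createClient({
      socket: {
        host: 'my-nlb-1234567890.elb.us-west-1.amazonaws.com',
        port: 6379,
        connectTimeout: 10000, // fail fast instead of hanging indefinitely
      },
    })

    redisClient.on('error', (err) => console.error('Redis error:', err))
    redisClient.connect().catch(console.error)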
