
I have my application deployed in OpenShift, and it uses Redis. While it works most of the time, I still face an intermittent issue related to Redisson. The error trace below appears when launching the application's URL:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=value:49:queue:0, freeConnectionsAmount=31, freeConnectionsCounter=value:63:queue:0, freezed=false, freezeReason=null, client=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@1568202974 [redisClient=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], channel=[id: 0xceaf7022, L:/10.103.34.74:32826 ! R:webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com/10.112.17.104:6379], currentCommand=CommandData [promise=RedissonPromise [promise=ImmediateEventExecutor$ImmediatePromise@68b1bc80(failure: java.util.concurrent.CancellationException)], command=(HMSET), params=[redisson:tomcat_session:306A0C0325AD2189A7FDDB695D0755D2, PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), ...], codec=org.redisson.codec.CompositeCodec@25e7216]], command: (HMSET), params: [redisson:tomcat_session:77C4BB9FC4252BFC2C8411F3A4DBB6C9, PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256)] after 3 retry attempts
    org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:872)
    org.redisson.command.CommandAsyncService.access$000(CommandAsyncService.java:97)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:791)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:788)
    io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:502)
    io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:476)
    io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:415)
    io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:540)
    io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:533)
    io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:114)
    io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1018)
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:874)
    io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
    io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
    io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Root Cause

io.netty.channel.ExtendedClosedChannelException
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Note The full stack trace of the root cause is available in the server logs.

2 Answers


  1. Chosen as BEST ANSWER

    This was probably caused by increased load on the Redis cluster, as it was shared among a number of applications. As a workaround, I redeployed the application every time I saw this error; the resulting connection reset resolved the issue. As I said, this is just a workaround. The permanent solution would probably be a dedicated Redis cluster for your application, which again depends on your architecture and the size of your application.
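
    If you do move to a dedicated Redis instance, the Redisson side only needs to point at the new endpoint. Below is a minimal sketch, assuming a single-server setup; the endpoint shown is a hypothetical dedicated ElastiCache address, not one from this deployment, and how the config is loaded depends on your session manager setup.

        import org.redisson.Redisson;
        import org.redisson.api.RedissonClient;
        import org.redisson.config.Config;

        public class DedicatedRedisClient {
            public static void main(String[] args) {
                Config config = new Config();
                // Hypothetical dedicated endpoint reserved for this application only,
                // so other applications' load no longer competes for the same node.
                config.useSingleServer()
                      .setAddress("redis://myapp-sessionstore.example.cache.amazonaws.com:6379");

                RedissonClient client = Redisson.create(config);
                // ... hand the client to whatever uses Redis (session store, caches, etc.) ...
                client.shutdown();
            }
        }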


  2. You need to update your Redisson version to 3.16.3 to see the updated exception message. As that message suggests, you need to increase your connection pool size. The relevant Redisson source is quoted below, and a configuration sketch follows it.

    private void checkWriteFuture(ChannelFuture future, RPromise<R> attemptPromise, RedisConnection connection) {
        if (future.isCancelled() || attemptPromise.isDone()) {
            return;
        }
    
        if (!future.isSuccess()) {
            exception = new WriteRedisConnectionException(
                    "Unable to write command into connection! Increase connection pool size. Node source: " + source + ", connection: " + connection +
                            ", command: " + LogHelper.toString(command, params)
                            + " after " + attempt + " retry attempts", future.cause());
            if (attempt == attempts) {
                attemptPromise.tryFailure(exception);
            }
            return;
        }
    
        timeout.cancel();
    
        scheduleResponseTimeout(attemptPromise, connection);
    }
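
    For reference, here is a minimal sketch of raising the pool size programmatically, assuming a single-server setup. The endpoint is the one from the stack trace above, and the pool values are illustrative only, not recommendations; if you configure Redisson through a YAML file (e.g. for the Tomcat session manager), the same connectionPoolSize and connectionMinimumIdleSize settings apply there.

        import org.redisson.Redisson;
        import org.redisson.api.RedissonClient;
        import org.redisson.config.Config;

        public class LargerPoolClient {
            public static void main(String[] args) {
                Config config = new Config();
                config.useSingleServer()
                      .setAddress("redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379")
                      // Raise these above the defaults if writes fail because no free
                      // connection is available; tune against your Redis node's limits.
                      .setConnectionPoolSize(128)
                      .setConnectionMinimumIdleSize(32);

                RedissonClient client = Redisson.create(config);
                // ... application code ...
                client.shutdown();
            }
        }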
    