My team and I are using a CRUD repository with Redis to perform some operations. The problem is that Redis gets an index entry generated in it which stores the keys associated with the stored entries, and this index never removes keys whose entries have already been flushed once their TTL reached 0.
Here is an example of the code we use.
@RedisHash("rate")
public class RateRedisEntry implements Serializable {
@Id
private String tenantEndpointByBlock; // A HTTP end point
...
}
// CRUD repository.
@Repository
public interface RateRepository extends CrudRepository<RateRedisEntry, String> {}
This generates the entry rate in Redis, which is the Set I mentioned above.
When we check the memory usage on it, it just keeps growing until it reaches 100% of the memory available in Redis.
> MEMORY USAGE "rate"
(integer) 153034
...
> MEMORY USAGE "rate"
(integer) 153876
...
> MEMORY USAGE "rate"
(integer) 163492
Is there a way to prevent this index from being created, or to have the values it stores removed once the entries’ TTL reaches 0?
Any assistance is appreciated.
2 Answers
I found a possible solution to the problem. You can configure the repository options so that Spring Data Redis tracks the event where an entry's TTL reaches 0.
This makes Spring Data Redis aware of the entries that are flushed, so it also updates the index entries.
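If you go this route, the option in question is (as far as I understand it) the enableKeyspaceEvents attribute of @EnableRedisRepositories. A minimal sketch, assuming the rest of your Redis connection configuration already exists (the class name is just illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.RedisKeyValueAdapter.EnableKeyspaceEvents;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;

// Listen for keyspace expiry events so Spring Data Redis can remove
// expired ids from the "rate" index Set when their TTL reaches 0.
@Configuration
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
public class RedisRepositoryConfig {
}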
However, this introduces some extra processing in your system, so you should be aware of it and evaluate whether working with RedisRepository still makes sense. In our case, we decided to work with the RedisTemplate class directly, to avoid any extra overhead or uncontrolled processing. If you are interested in all the details of how we fixed it, you can read this article I wrote about it:
https://engineering.salesforce.com/lessons-learned-using-spring-data-redis-f3121f89bff9
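For reference, here is a minimal sketch of the kind of direct RedisTemplate usage I mean, assuming a StringRedisTemplate bean and string values; the key prefix, the 60-second TTL, and the class/method names are illustrative rather than our actual implementation (that is described in the article):

import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class RateLimitStore {

    private final StringRedisTemplate redisTemplate;

    public RateLimitStore(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void record(String tenantEndpointByBlock, String value) {
        // SET with an expiry: Redis removes the key by itself and no index Set is maintained.
        redisTemplate.opsForValue()
                .set("rate:" + tenantEndpointByBlock, value, 60, TimeUnit.SECONDS);
    }

    public String read(String tenantEndpointByBlock) {
        return redisTemplate.opsForValue().get("rate:" + tenantEndpointByBlock);
    }
}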
The Object-Mapping/Repositories need a way to find objects in bulk (findByKeys(id1, id2, ...)) and do pagination, as well as other things that make it necessary to maintain a primary-key Set. Repositories respect the TTL values and purge the keys/ids from the created Set. So my guess is that somehow the app is not receiving the keyspace notifications for the entities' TTL expirations.
Or…
I’m thinking there is a possibility that you are generating RateRedisEntry at a rate the cleanup of the Set via keyspace notifications cannot keep up with. If that’s the case, then I would not use @RedisHash and Repositories, and would instead use HashMapper and HashOperations. See https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis.hashmappers.root
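A minimal sketch of what that could look like, assuming a RedisTemplate bean whose hash key/value serializers can round-trip byte[] values; the RateHashStore name, its methods, and the TTL parameter are illustrative:

import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.hash.HashMapper;
import org.springframework.data.redis.hash.ObjectHashMapper;

public class RateHashStore {

    private final RedisTemplate<String, Object> redisTemplate;
    private final HashOperations<String, byte[], byte[]> hashOperations;
    // ObjectHashMapper flattens a POJO into a Map<byte[], byte[]> of its fields.
    private final HashMapper<Object, byte[], byte[]> mapper = new ObjectHashMapper();

    public RateHashStore(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
        this.hashOperations = redisTemplate.opsForHash();
    }

    public void save(String key, RateRedisEntry entry, long ttlSeconds) {
        // Write the entry as a plain Redis hash and set the TTL ourselves;
        // no secondary index Set is created, so nothing is left behind on expiry.
        hashOperations.putAll(key, mapper.toHash(entry));
        redisTemplate.expire(key, ttlSeconds, TimeUnit.SECONDS);
    }

    public RateRedisEntry load(String key) {
        Map<byte[], byte[]> hash = hashOperations.entries(key);
        return hash.isEmpty() ? null : (RateRedisEntry) mapper.fromHash(hash);
    }
}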