I’m writing an application in Go. It generates a data structure in memory for a database entry upon request. When an entry is requested, it’s often requested several more times, so I want to cache entries in memory to avoid multiple calls to the database (mostly for latency reasons).
Is it possible to have this in-memory cache expand dynamically in memory until we hit memory pressure (i.e. failed malloc) and then free some of the cache?
Caching in Redis or similar would complicate deployment. If that’s the only other option, I’d prefer just to specify a static cache size at runtime.
I’m not opposed to using C.malloc
I suppose, but I don’t know how that interacts with Go’s memory management (if I allocate a chunk of memory, and the Go runtime then allocates a chunk for a goroutine stack or something on top of the heap, I can’t release my memory to the OS until whatever’s on top is freed). Also, I’ve been compiling without cgo so far, and it’d be nice to continue to do so.
I’m hoping there’s something in the debug or runtime package that might hint that the system is under memory pressure, so that I can size my cache dynamically and keep the program in pure Go.
Any help or insight is greatly appreciated.
2 Answers
This answer has a solution for reading the program’s memory allocation at runtime (runtime.ReadMemStats). Here’s a starting point for a concurrent-safe cache using that code. It’s a minimal sketch with illustrative names (Cache, maxHeap); note that ReadMemStats briefly stops the world, so in practice you’d sample it periodically rather than on every write:
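```go
package cache

import (
	"runtime"
	"sync"
)

// Cache is a concurrent-safe map that sheds entries when the Go heap,
// as reported by runtime.ReadMemStats, grows past a soft limit.
type Cache struct {
	mu      sync.Mutex
	entries map[string][]byte
	maxHeap uint64 // soft heap limit in bytes
}

func New(maxHeap uint64) *Cache {
	return &Cache{entries: make(map[string][]byte), maxHeap: maxHeap}
}

func (c *Cache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.entries[key]
	return v, ok
}

func (c *Cache) Set(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[key] = value
	c.evictIfPressured()
}

// evictIfPressured drops roughly half the entries, in arbitrary map
// order, when the live heap exceeds the soft limit. MemStats.Alloc
// only falls after the next GC cycle, so evicting one entry at a time
// while re-checking Alloc would loop forever.
func (c *Cache) evictIfPressured() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	if m.Alloc <= c.maxHeap {
		return
	}
	n := len(c.entries) / 2
	for k := range c.entries {
		if n <= 0 {
			break
		}
		delete(c.entries, k)
		n--
	}
}
```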
One easy option would be to use groupcache. While its intended goal is to provide an in-memory cache shared across a cluster of application instances, it works standalone just fine.
The following code sets up a groupcache cache without any peers (so only your own instance of your application), with one "group" of items limited to 64 MB of memory. Once that limit is reached, the internal LRU cache will evict items.
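A sketch of that setup, assuming a recent version of github.com/golang/groupcache (whose getter takes a context.Context); the Entry type and fetchFromDB helper are placeholders for your own code:

```go
package dbcache

import (
	"context"
	"encoding/json"

	"github.com/golang/groupcache"
)

// Entry stands in for whatever your database rows look like.
type Entry struct {
	ID   string `json:"id"`
	Body string `json:"body"`
}

// fetchFromDB is a placeholder for your real database lookup.
func fetchFromDB(ctx context.Context, key string) (*Entry, error) {
	return &Entry{ID: key, Body: "..."}, nil
}

// One group, capped at 64 MB. The getter runs only on cache misses;
// on a hit, groupcache serves the cached bytes directly.
var entries = groupcache.NewGroup("entries", 64<<20, groupcache.GetterFunc(
	func(ctx context.Context, key string, dest groupcache.Sink) error {
		e, err := fetchFromDB(ctx, key)
		if err != nil {
			return err
		}
		b, err := json.Marshal(e) // groupcache stores bytes, so serialize
		if err != nil {
			return err
		}
		return dest.SetBytes(b)
	}))
```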
Since groupcache is intended to cache "blobs" and distribute them across a cluster, its "sinks" (the structures you push your data into) can only operate on strings, byte slices, and protobuf messages. So if you want to cache database results, you’ll need to serialize them to something like JSON.
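Continuing the sketch above, reading an entry back out through the cache might look like this:

```go
// getEntry pulls an entry through the cache. The getter above runs,
// and hits the database, only when the key is not already cached.
func getEntry(ctx context.Context, key string) (*Entry, error) {
	var b []byte
	err := entries.Get(ctx, key, groupcache.AllocatingByteSliceSink(&b))
	if err != nil {
		return nil, err
	}
	var e Entry
	if err := json.Unmarshal(b, &e); err != nil {
		return nil, err
	}
	return &e, nil
}
```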
If you want to cache different "types of things", each with its own maximum amount of memory, you can create multiple groups, as in the sketch below.
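For example (userGetter and postGetter are hypothetical getters like the one above):

```go
// Hypothetical: separate memory budgets per entity type.
users := groupcache.NewGroup("users", 32<<20, userGetter)
posts := groupcache.NewGroup("posts", 16<<20, postGetter)
```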
Another tradeoff is that it’s not possible to delete items, nor can you set an expiry. If you need either of those, groupcache probably isn’t for you.