I have a game server running on legacy App Engine (Python 2.7). I've migrated the server to py3/Flask and all the various bits. The new server is connected to a Redis instance, while the old server is still using the legacy memcache for py2.7.
I'm thinking of connecting the py2.7 version of my server to that same Redis cache, since I've got it running as a staged migration. That way I can split traffic between the py3 server and the py2 server while they use the same cache as I test, etc. I can have a few beta users talking to the new server while it co-exists with the current server.
I've got a version of my py2.7 server talking to Redis, but I'm finding the docs on using Redis a bit confusing for storing ndb models. It appears I can't just store an ndb model instance directly in Redis as I did with the old legacy memcache.
The migration docs say you can set up Redis as a "global cache" with App Engine, using the following example:
from google.cloud import ndb

client = ndb.Client()
global_cache = ndb.RedisCache.from_environment()
with client.context(global_cache=global_cache):
    books = Book.query()
    for book in books:
        print(book.to_dict())
I'm not super clear on what this means. Do I need to structure every query I want cached this way, or is there a one-time setup after which models will just auto-cache? Would the above example pull from the cache automatically if it exists?
Currently, on the legacy memcache server, I collect a bunch of model instances (the games a particular user is a member of, in my case) into a Python list and cache them with a key like
cacheKey = userskey.urlsafe()+"_gamesList"
Any time a game for this user changes in some way (updated, deleted, added, etc.) I delete the cacheKey; the next time that user queries their games list I rebuild the cache. It doesn't look like I can store data that way in Redis as simply. Also, since a user interacts with the server many times, I'd want the model that represents that user cached as well: every time that user talks to my server I need to recall their user account by its key.
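In code, the current py2.7 pattern is roughly this (a simplified sketch; Game and the membership query are stand-ins for my real models):

from google.appengine.api import memcache
from google.appengine.ext import ndb

def get_games_list(userskey):
    cacheKey = userskey.urlsafe() + "_gamesList"
    games = memcache.get(cacheKey)
    if games is None:
        # Cache miss: rebuild the list from the datastore.
        games = Game.query(Game.players == userskey).fetch()
        memcache.set(cacheKey, games)
    return games

def invalidate_games_list(userskey):
    # Called whenever one of this user's games is added/updated/deleted.
    memcache.delete(userskey.urlsafe() + "_gamesList")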
I guess I'm just confused about the global cache and how it all differs from the legacy memcache. Currently I get close to 90% cache hits with the legacy memcache, so it's certainly worth trying to replicate that with Redis. Even just a pointer to a practical example app would be helpful; surprisingly, searching around, I've not found one.
2 Answers
The term 'global cache' is a bit confusing; it just means the Memorystore Redis instance can be accessed from outside of your project.
If you want to keep using NDB, you'll need to migrate to Cloud NDB first. To answer one of your questions: no, there is no auto-caching of anything with Memorystore.
However, my suggestion here is: if all you're doing is storing key-value pairs in Redis, why not just use the native Redis functionality?
Then you can get and set the objects as bytes.
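For example, something like this (a sketch, not from your code; the host/port and helper names are placeholders, and it assumes your entities pickle cleanly):

import pickle
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore host/port

def cache_games(userskey, games):
    # Serialize the whole Python list to bytes, as legacy memcache did implicitly.
    # protocol=2 so both the py2 and py3 servers can read the same value.
    r.set(userskey.urlsafe() + "_gamesList", pickle.dumps(games, protocol=2), ex=3600)

def get_cached_games(userskey):
    raw = r.get(userskey.urlsafe() + "_gamesList")
    return pickle.loads(raw) if raw is not None else None

One caveat if both servers really do share keys: py2's pickle can't read protocol 3+, hence the pin to protocol=2, and whether entities pickled by legacy ndb load cleanly under Cloud NDB (and vice versa) is worth verifying before relying on it.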
Also, Memorystore is expensive; I use fakeredis to test locally.
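A minimal sketch of that, assuming the fakeredis package is installed:

import fakeredis

# Behaves like redis.Redis but keeps everything in-process: no server, no cost.
r = fakeredis.FakeStrictRedis()
r.set("some_key", b"some bytes")
assert r.get("some_key") == b"some bytes"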
Update – from your comment regarding the UnicodeDecodeError: I don't know what data in your dictionary is causing issues, but you could try converting to JSON, and for anything that is not serializable (like datetime objects), just convert it to str.
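Something along these lines (a sketch; r, cacheKey, and entity_dict are placeholders, and note that the str-converted values come back as plain strings, not their original types):

import json

# entity.to_dict() can contain datetimes, ndb Keys, etc.; default=str turns
# anything json can't encode into its string form.
payload = json.dumps(entity_dict, default=str)
r.set(cacheKey, payload)

raw = r.get(cacheKey)
entity_dict = json.loads(raw) if raw is not None else None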
As mentioned by rossco, there is no auto-caching in Memorystore Redis. In Legacy Memcache, the cache is global and is shared across the application's frontend, backend, and all of its services and versions by default. The GlobalCache interface shares the same description. However, there are differences, such as a RedisCache object (which is an implementation of GlobalCache) being created and passed to the client context, as seen in the example you provided, in contrast to how you create a Legacy Memcache client. With regard to the exposed methods there are also differences, which can be seen in the docs for each. Once the client context is initialized, all operations using that context will use the single global cache object. Constructing more than one context with the same global_cache, or using dependency injection to pass the same context around, makes this a one-time setup.
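To make the one-time setup concrete, here is a sketch of that pattern for a Flask app (the WSGI middleware wrapper follows the approach in the migration guide; treat the details as an illustration, not the only way to wire it up):

from flask import Flask
from google.cloud import ndb

client = ndb.Client()
global_cache = ndb.RedisCache.from_environment()  # reads REDIS_CACHE_URL

app = Flask(__name__)

def ndb_wsgi_middleware(wsgi_app):
    # Create one NDB context per request; every ndb call made while handling
    # the request then reads through and writes through the shared Redis cache.
    def middleware(environ, start_response):
        with client.context(global_cache=global_cache):
            return wsgi_app(environ, start_response)
    return middleware

app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app)

After that, plain NDB code inside your handlers uses the cache transparently; you don't restructure individual queries. As far as I understand, key lookups (key.get()) are what get served from the global cache, while query results are not cached directly, which matches how legacy NDB used memcache.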