
I have a game server running on legacy App Engine (Python 2.7). I’ve migrated the server to Python 3/Flask and all the various bits. The new server is connected to a Redis instance, while the old server is still using the legacy memcache for Python 2.7.

Since I’ve got both running as a staged migration, I’m thinking of connecting the 2.7 version of my server to Redis as well. That way I can split traffic between the py3 server and the py2 server while they use the same cache server as I test, and have a few beta users talking to the new server while it co-exists with the current one.

I’ve got a version of my py2.7 server talking to Redis, but I’m finding the docs on using Redis to store ndb models a bit confusing. It appears I can’t just put an ndb model instance directly into Redis the way I did with the old legacy memcache.

In the migration docs it says you can set up Redis as a "global cache" with App Engine, using the following example:

from google.cloud import ndb

client = ndb.Client()
global_cache = ndb.RedisCache.from_environment()

with client.context(global_cache=global_cache):
    books = Book.query()
    for book in books:
        print(book.to_dict())

I’m not super clear on what this means. Do I need to structure every query I want cached this way, or is there a one-time setup after which models will just auto-cache? Would the above example pull from the cache automatically if the data is there?

Currently, with the legacy memcache, I collect a bunch of model instances (games, in my case, that a particular user is a member of) into a Python list and cache them with a key like:

cacheKey = userskey.urlsafe() + "_gamesList"

Any time a game for this user is changed in some way (updated, deleted, added, etc.) I delete the cacheKey; the next time that user queries their games list I rebuild the cache. It doesn’t look like I can store data like that in Redis as simply. Also, since a user interacts with the server many times, I’d want the model that represents that user cached as well: every time that user talks to my server I need to recall their user account by its key. The pattern I use today is sketched below.
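
Roughly, my current pattern looks like this (the Game.players membership filter is illustrative; legacy memcache pickles the ndb entities for me):

from google.appengine.api import memcache

def get_games_list(userskey):
    cacheKey = userskey.urlsafe() + "_gamesList"
    games = memcache.get(cacheKey)
    if games is None:
        # Cache miss: rebuild the list from the datastore and re-cache it
        # (Game.players is a hypothetical membership property)
        games = Game.query(Game.players == userskey).fetch()
        memcache.set(cacheKey, games)
    return games

def on_game_changed(userskey):
    # Called whenever one of this user's games is added/updated/deleted
    memcache.delete(userskey.urlsafe() + "_gamesList")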

I guess I’m just confused about the global cache and how it all differs from the legacy memcache. Currently I get close to 90% cache hits with the legacy memcache, so it’s certainly worth trying to replicate that with Redis. Even just a pointer to a practical example app would be helpful; surprisingly, searching around, I haven’t found one.

2 Answers


  1. The term ‘global cache’ is a bit confusing; it just means the Memorystore Redis instance can be accessed from outside of your project.

    If you want to keep using NDB, you’ll need to migrate to Cloud NDB first. To answer one of your questions: no, there is no auto-caching of anything with Memorystore.
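
    The Cloud NDB move is mostly an import change plus wrapping datastore calls in a client context; a rough sketch (user_key is a placeholder ndb.Key):

    # Legacy NDB on GAE Python 2.7:
    #   from google.appengine.ext import ndb
    # Cloud NDB replacement (model definitions stay the same):
    from google.cloud import ndb

    client = ndb.Client()
    with client.context():
        user = user_key.get()  # user_key is a placeholder ndb.Key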

    However, my suggestion here is: if all you’re doing is storing key-value pairs in Redis, why not just use the native Redis functionality?

    import redis

    # Connect to Redis (I create a Cloud DNS record which points to the private IP)
    pool = redis.ConnectionPool(host=<REDIS_HOST>, port=<REDIS_PORT>, db=0)
    r = redis.Redis(connection_pool=pool)
    
    

    Then you can get and set the objects as bytes:

    import pickle
    import io

    # alias is whatever cache key you choose,
    # e.g. userskey.urlsafe() + "_gamesList" from the question
    f = io.BytesIO()
    pickle.dump(book.to_dict(), f)
    f.seek(0)

    res = r.set(alias, f.read())
    # res will be True or False

    outObj = r.get(alias)  # returns None if the key doesn't exist
    if outObj is not None:
        dictObj = pickle.loads(outObj)
    else:
        # Cache miss - get it from the DB and store it in Redis
        pass
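
    Applied to your gamesList case, the same read-through pattern might look like this (a sketch: pickle.dumps/loads replace the BytesIO dance, and the Game.players filter is hypothetical):

    import pickle

    def get_games_list(r, userskey):
        cacheKey = userskey.urlsafe() + "_gamesList"
        raw = r.get(cacheKey)
        if raw is not None:
            return pickle.loads(raw)  # cache hit
        # Cache miss: rebuild from the datastore and cache the pickled dicts
        games = [g.to_dict() for g in Game.query(Game.players == userskey)]
        r.set(cacheKey, pickle.dumps(games))
        return games

    def on_game_changed(r, userskey):
        # Same invalidation scheme as your memcache version: delete the key
        r.delete(userskey.urlsafe() + "_gamesList")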
    

    Also, Memorystore is expensive, so I use fakeredis to test locally:

    import fakeredis as redis

    # In-memory stand-in that speaks the redis-py API, handy for local tests
    serv = redis.FakeServer()
    r = redis.FakeStrictRedis(server=serv)
    
    

    Update: regarding the UnicodeDecodeError from your comment, I don’t know what data in your dictionary is causing issues, but you could try converting to JSON first and, for anything that is not serializable (like datetime objects), just converting it to str:

    import json

    jsonObj = json.dumps(dictObj, default=str)
    f = io.BytesIO()
    pickle.dump(jsonObj, f)
    f.seek(0)
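
    On the read side you would reverse both steps. (Since json.dumps already returns a plain string, you could arguably skip pickle and r.set the string directly.)

    outObj = r.get(alias)
    if outObj is not None:
        dictObj = json.loads(pickle.loads(outObj))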
    
    
  2. As mentioned by rossco, there is no auto-caching in Memorystore Redis. In legacy Memcache, the cache is global and is shared across the application’s frontend, backend, and all of its services and versions by default. The GlobalCache interface shares the same description. However, there are differences: for example, a RedisCache object (an implementation of GlobalCache) is created and passed to the client context, as seen in the example you provided, rather than the way you create a legacy Memcache client. There are also differences in the exposed methods, which can be seen in their respective docs.

    Once the client context is initialized, all operations using that context will use the single global cache object. Constructing every context with the same global_cache (or using dependency injection to pass around a single context) effectively gives you the one-time setup you asked about.
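
    For example, in a Flask app that one-time setup might look like this (a sketch, assuming REDIS_CACHE_URL is set in the environment for RedisCache.from_environment, and reusing the Book model from your example):

    from flask import Flask, jsonify
    from google.cloud import ndb

    app = Flask(__name__)

    # One-time setup: a single client and a single Redis global cache
    client = ndb.Client()
    global_cache = ndb.RedisCache.from_environment()

    @app.route("/books")
    def list_books():
        # Each request gets its own context, but every context shares
        # global_cache, so entity reads/writes go through Redis for you.
        with client.context(global_cache=global_cache):
            books = Book.query().fetch()
            return jsonify([b.to_dict() for b in books])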
