I'm just getting into OpenResty/Lua/nginx and I'm not entirely clear on how global scope works in the nginx/OpenResty universe.
I'd like to load a small MySQL table of roughly 1000 records (a single column, where each row is a unique "session") into "memory" as an array or map, refresh it every minute by running a MySQL query in a master thread, and have the data readable and searchable by all requests in content_by_lua_block.
To explain further: the session data is replicated from a MySQL master server to the OpenResty server and contains the session cookies of all logged-in users on the origin server. The OpenResty/nginx config checks whether the cookie in the request is a valid session cookie (by looking it up in the "memory" table we pull) and serves the request based on whether or not the session cookie is valid.
I'm already doing this by performing a MySQL connect + query against localhost on every request and serving content based on whether a valid session cookie was passed, but I'd like to see if it's possible to make this more efficient by keeping all of the session data in memory and refreshing it every minute via a MySQL query.
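For reference, the current per-request lookup looks roughly like the sketch below, assuming lua-resty-mysql; the database credentials, the sessions table with its cookie column, and a request cookie named session are all placeholders:

```nginx
location / {
    content_by_lua_block {
        local mysql = require "resty.mysql"

        local db, err = mysql:new()
        if not db then
            ngx.log(ngx.ERR, "failed to create mysql object: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        db:set_timeout(1000)  -- 1s connect/send/read timeout

        local ok, cerr = db:connect{
            host = "127.0.0.1", port = 3306,
            database = "app", user = "web", password = "secret",
        }
        if not ok then
            ngx.log(ngx.ERR, "mysql connect failed: ", cerr)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        -- look up the request's session cookie (assumed to be named "session")
        local cookie = ngx.var.cookie_session or ""
        local res, qerr = db:query(
            "SELECT 1 FROM sessions WHERE cookie = "
            .. ngx.quote_sql_str(cookie) .. " LIMIT 1")

        -- put the connection back into the pool instead of closing it
        db:set_keepalive(10000, 100)

        if not res then
            ngx.log(ngx.ERR, "mysql query failed: ", qerr)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        if #res > 0 then
            ngx.say("valid session")
        else
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
}
```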
thank you!
2 Answers
You can use ngx.shared.DICT to cache data shared across all nginx worker processes; check the official documentation for how to store data in it. A shared dict can only hold strings, numbers, booleans, and nil, so a Lua table must first be serialized to a string.
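For illustration, a minimal sketch of that approach, assuming a shared dict named sessions declared in the http block and cjson for serialization (all names are placeholders):

```nginx
# http block: reserve shared memory for the dict
lua_shared_dict sessions 10m;
```

```lua
local cjson = require "cjson.safe"
local dict  = ngx.shared.sessions

-- writer side (e.g. inside a timer): serialize the cookie list to JSON
local rows = { "cookie-a", "cookie-b" }  -- would come from the MySQL query
local ok, err = dict:set("session_list", cjson.encode(rows))
if not ok then
    ngx.log(ngx.ERR, "shared dict set failed: ", err)
end

-- reader side (content_by_lua_block): decode and search
local raw  = dict:get("session_list")
local list = raw and cjson.decode(raw) or {}
for _, cookie in ipairs(list) do
    -- compare against the request's cookie here
end
```

Storing each cookie as its own dict key (dict:set(cookie, true)) would avoid decoding the whole list on every request, at the cost of having to clear stale keys on each refresh.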
See data-sharing-within-an-nginx-worker in the lua-nginx-module documentation: you can create a Lua module and share your table through it. Keep in mind that this works per worker.
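A minimal sketch of such a module (the file name sessions.lua and the function names are my own):

```lua
-- sessions.lua: a module-level table shared by all requests in one worker
local _M = {}

-- set of valid session cookies; swapped wholesale on each refresh
local current = {}

function _M.set(new_set)
    current = new_set
end

function _M.is_valid(cookie)
    return current[cookie] == true
end

return _M
```

This relies on lua_code_cache being on (the default), so that require loads the module once per worker and the table persists across requests.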
You should start a timer in init_worker_by_lua_block to periodically poll the database server. I wouldn't recommend dedicating a worker to the polling; you would have a headache syncing the table between workers. One query per minute per worker is acceptable IMO.
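A sketch of that timer, reusing the hypothetical sessions module from above; the connection details and the sessions/cookie table and column names are assumptions:

```nginx
init_worker_by_lua_block {
    local mysql = require "resty.mysql"
    local sessions = require "sessions"

    local function refresh(premature)
        if premature then return end  -- worker is shutting down

        local db, err = mysql:new()
        if not db then
            ngx.log(ngx.ERR, "failed to create mysql object: ", err)
            return
        end
        db:set_timeout(2000)

        local ok, cerr = db:connect{
            host = "127.0.0.1", port = 3306,
            database = "app", user = "web", password = "secret",
        }
        if not ok then
            ngx.log(ngx.ERR, "mysql connect failed: ", cerr)
            return
        end

        local res, qerr = db:query("SELECT cookie FROM sessions")
        db:set_keepalive(10000, 10)

        if not res then
            ngx.log(ngx.ERR, "mysql query failed: ", qerr)
            return
        end

        -- build a lookup set and swap it in with a single assignment
        local set = {}
        for _, row in ipairs(res) do
            set[row.cookie] = true
        end
        sessions.set(set)
    end

    ngx.timer.at(0, refresh)      -- populate immediately on worker start
    ngx.timer.every(60, refresh)  -- then refresh once per minute
}
```

The request handler then reduces to something like require("sessions").is_valid(ngx.var.cookie_session or "") inside content_by_lua_block.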