The task: run HGETALL on all keys matching a pattern (potentially millions of keys).
I fetch the keys on one connection and execute HGETALL concurrently on another connection. That still doesn't eliminate the HGETALL round-trip latency, which I would like to get rid of completely.
What I want to do is push HGETALL requests with an in-flight window like there is no tomorrow.
I know I can put multiple HGETALLs into a single request*, but then I still need to wait for a response and pay that latency once in a while.
* Although I have yet to figure out how to shape the response type for such a request so that it is not statically sized.
Is there a better way?
I am using the coroutine syntax, so the code currently looks like this:
```cpp
auto request = redis::request{};
request.push("HGETALL", key);
auto response = redis::response<std::vector<std::string>>{};
co_await conn->async_exec(request, response, asio::deferred);
```
Thanks!
2 Answers
Looking at things, you can probably get everything you want from redis::generic_response. It's going to be some work, but if you know your target application and the commands are all the same, you can probably make it work without too much effort.

It might be advantageous for the interpretation side to batch in a transaction (I'm not a Redis expert, so I have no idea whether this hurts server performance):
Using simple formatters to display the structure of the responses:
Live On Coliru
With a local demo:
I couldn't find a solution using boost::redis::generic_response, so I used boost::redis::response itself to read the responses for pipelined requests, parameterizing the tuple as shown in Parameterize of tuple with repeated type. My code looks like the following:
The following is a snippet from a Google Test case demonstrating pipelining and reading the responses with boost::redis::response: