I have an app with a Google Map. On each zoom change or change in map boundaries, I make a call to the backend to fetch the points within the boundaries. We have a large database, about 3 million records, so it takes some time and resources on the Node.js backend to query them with filters and clusterize them.
If a user pans back and forth, they generate tons of queries, so I wrote a script that aborts the previous request when a new one is made:
import React, { useState, useRef, useCallback } from 'react'

const controllerRef = useRef()

const getAddresses = useCallback(async (body) => {
  setLoading(true)
  // Abort the previous request, if any, before starting a new one
  if (controllerRef.current) controllerRef.current.abort()
  const controller = new AbortController()
  controllerRef.current = controller
  try {
    const { clusters } = await request(`/api/address/`, 'POST', body, {
      Authorization: `Bearer ${token}`
    }, controller.signal)
    controllerRef.current = null
    setLoading(false)
  } catch (e) {
    // Aborting rejects the promise with an AbortError; don't treat that as a failure
    if (e.name !== 'AbortError') console.log('Error :>> ', e)
  }
}, [token, request])
The question is: what happens on the backend? Is each query still being computed by Node.js, or is it aborted there as well because the client drops the connection?
If it helps, we are using MySQL with Sequelize. Is there a chance that if I abort the request on the frontend, the MySQL query is aborted as well?
I'm thinking that maybe I need to not just cancel the request on the frontend, but also send another request that flags Node.js to cancel the calculation.
The main goal is to reduce backend load, because each query is really heavy.
2 Answers
To answer your question: no, this is not possible, especially once your backend has already sent the query to your SQL server.
In your case, to be honest, I'd put most of the work on the frontend. Are you familiar with debounce and throttling?
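As a quick illustration of debouncing, here is a minimal sketch: the wrapped function only fires after `wait` ms have passed without a new call, so rapid map movements collapse into a single request. The `getAddresses` usage at the bottom refers to the function from the question; the `300` ms delay and the map event name are assumptions.

```javascript
// Minimal debounce: postpone the call until `wait` ms pass without
// a new invocation, discarding the intermediate ones.
function debounce(fn, wait) {
  let timer = null
  return (...args) => {
    clearTimeout(timer)
    timer = setTimeout(() => fn(...args), wait)
  }
}

// Hypothetical usage with the code from the question:
// const debouncedGetAddresses = debounce(getAddresses, 300)
// map.addListener('bounds_changed', () => debouncedGetAddresses(currentBounds()))
```

Combined with the AbortController you already have, this prevents most redundant requests from ever leaving the browser.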
I recently had a similar problem and have been interested in this topic ever since. I currently have a workable, though probably inefficient, solution in mind. Basically, you work with a global value, such as a session variable or a value stored in the DB. Then, while the instructions execute on the server side (a search, an update, etc.), a loop checks on each iteration whether that global value has changed: 1 means continue, 0 means end the loop (and the script execution).
On the client side, a second request can be triggered to cancel the first one. That request hits another endpoint on the server, which flips the global value from 1 to 0, so that when the loop in the first handler sees the value is no longer 1 but 0, the entire execution in progress is ended.
Finally, the global value is reset to 1, the initial value that allows future requests to execute.
This is what I have thought and tried. It's not an elegant or efficient solution, but I haven't found another way to handle complete cancellation of a request on both the client and server sides.
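To sketch the idea above: here is a minimal, hedged version of the flag approach using an in-memory `Map` keyed by a client-supplied request id instead of a session variable or DB value (that keying scheme, and splitting the heavy work into chunks, are my assumptions, not part of the original answer). The heavy work must be broken into pieces so there is a point at which the flag can be checked, e.g. one LIMIT/OFFSET query per chunk.

```javascript
// Shared cancellation flags: requestId -> true once cancelled.
const cancelled = new Map()

// Called by the second, "cancel" request from the client.
function cancel(requestId) {
  cancelled.set(requestId, true)
}

// Heavy work split into chunks (each chunk would be one DB query);
// between chunks we check whether the client asked us to stop.
async function heavyQuery(requestId, chunks) {
  cancelled.set(requestId, false) // reset to the "continue" state
  const results = []
  for (const chunk of chunks) {
    if (cancelled.get(requestId)) { // flag flipped by the cancel request
      cancelled.delete(requestId)
      return null                   // abandon the remaining work
    }
    results.push(await chunk())
  }
  cancelled.delete(requestId)
  return results
}
```

The downside is that cancellation is only as fine-grained as the chunk size, and the flag store has to be shared across processes (hence the session/DB suggestion in the answer) if the server is load-balanced.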