I’m using Postgres (via Supabase) and Node (via NextJS on Vercel).
I have a table `report` which has columns for an enum `region` and `generated_text`, which is content generated by AI:
```sql
create type region as enum (
  'America',
  'Europe',
  'Asia',
  'Africa'
);

create table report (
  id serial primary key,
  region region,
  generated_text text
);
```
A user selects their region and then purchases a report. Purchases are recorded in a join table `report_user`:
```sql
create table report_user (
  id serial primary key,
  report integer not null references report(id),
  "user" uuid not null  -- "user" is a reserved word, so it must be quoted
);
```
When the user makes a purchase:

- If there is a row in the `report` table for that region which the user hasn't purchased before, then that row is returned to them (roughly the lookup sketched below).
- If there isn't a report for that region, or if the user has already purchased all of the reports for that region, then we need to create a new row. This requires a request to an AI API for the `generated_text` column, which is very slow and is a bad user experience.
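Concretely, the first case is roughly this lookup (a sketch against the simplified schema above; the parameter numbering is just for illustration):

```sql
-- return one report in the region that this user hasn't bought yet
-- $1 = the chosen region, $2 = the purchasing user's id
select r.id, r.region, r.generated_text
from report r
where r.region = $1::region
  and not exists (
    select 1
    from report_user ru
    where ru.report = r.id
      and ru."user" = $2
  )
limit 1;
```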
I'd like to improve the speed for users, and this can only be done by creating `report` rows before they are needed. I can bulk-create some, but I don't know which regions will be more popular (I've simplified my example; my actual application has many more filters and thus a lot of potential combinations).

Is there a way to run a background task to create a new `report` row when the user has purchased all `report` rows for a certain `region` filter?
2 Answers
You can use a background task to create new report rows when the user has purchased all report rows for a certain region filter. A task queue system like Bull, BullMQ, or Agenda (I haven't tried Agenda, though) can handle the background tasks:

- Create a background task that generates new reports for a specific region. The task should check whether a new report is actually needed based on your criteria before generating one.
- After a user makes a purchase, trigger this background task to check if new reports are needed.
- The task queue system will handle the scheduling and processing of the background tasks.
- You can also set up a recurring job to periodically check the state of your reports and generate new ones for regions that need it.
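(If you use Bull for the recurring check, it supports repeatable jobs, e.g. `reportQueue.add({ region }, { repeat: { cron: '*/15 * * * *' } })`; the cron expression here is only an illustration.)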
Here's my example using Bull. Install `bull`, then in your main code:
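(A minimal sketch: the queue name, Redis URL, and the helper functions below are placeholders for your own code.)

```typescript
import Queue from 'bull';

// placeholders for your own code -- swap in real implementations
declare function countUnpurchasedReports(region: string): Promise<number>;
declare function generateReportText(region: string): Promise<string>;
declare function insertReport(region: string, text: string): Promise<void>;

// queue name and Redis URL are illustrative
const reportQueue = new Queue('report-generation', process.env.REDIS_URL!);

// worker: runs the slow AI call in the background, not during a request
reportQueue.process(async (job) => {
  const { region } = job.data;
  // only generate when the region's stock has actually run out
  if ((await countUnpurchasedReports(region)) > 0) return;
  const generatedText = await generateReportText(region); // slow AI API call
  await insertReport(region, generatedText);
});

// call this right after recording a purchase
export function scheduleTopUp(region: string) {
  // retry with a delay in case the AI API hiccups
  reportQueue.add({ region }, { attempts: 3, backoff: 5000 });
}
```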
The above is quite basic, to get you started. Don't forget to handle errors.
Yes, there is. Since you didn't mention any specifics, preferences, or limitations, I'll run with the interpretation where you mean to do it all inside Supabase.
Supabase supports the `pg_cron` extension, but to hit an API you'll need an external app that sits connected to the db and `LISTEN`s on a channel where your `pg_cron` job, or even a simple `trigger`, broadcasts `pg_notify()` orders to top up your inventory when you run low.

The trigger would be better because it can fire any time someone fetches a report and immediately `notify` if they need a refill. A recurring, time-based `pg_cron`
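A sketch of such a trigger against the question's schema (the channel name `restock` is arbitrary; on the Node side, node-postgres can run `LISTEN restock` and react to the client's `notification` event):

```sql
-- after each purchase, notify listeners once the buyer has exhausted
-- their region's stock
create or replace function notify_restock() returns trigger as $$
declare
  bought_region region;
  remaining integer;
begin
  select r.region into bought_region from report r where r.id = new.report;

  -- how many reports in this region the buyer hasn't purchased yet
  select count(*) into remaining
  from report r
  where r.region = bought_region
    and not exists (
      select 1 from report_user ru
      where ru.report = r.id and ru."user" = new."user"
    );

  if remaining = 0 then
    perform pg_notify('restock', bought_region::text);
  end if;
  return new;
end;
$$ language plpgsql;

create trigger report_user_restock
  after insert on report_user
  for each row execute function notify_restock();
```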
job runs the risk that sometimes you'll run out and have to wait until it checks in to re-stock according to the schedule.

In both cases it might be best to collect stats on who needs a refill, and how often, to tune the parameter dictating how far ahead of their demand you want to stay. Depending on how the API works, what traffic you're taking, and your storage/db config, you might want to let the trigger categorise users into green, yellow, and red groups, and not refill anyone's reports when they only drop from green to yellow; but as soon as at least one runs red, run a single bulk refill for everyone in either yellow or red. One large request and one large `insert` every now and then, replacing a continuous stream of tiny ones.

It's possible to build a Steampipe plugin for the API you're getting the reports from and call it by simply running an `insert into ... select ...` in the stock-monitoring `trigger` or the `pg_cron` job, or use an existing one (CohereAI, OpenAI):
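For instance, with the OpenAI plugin the idea boils down to this (a sketch; `openai_completion` and its `prompt` column come from that plugin, and the prompt is the example from its docs — in your case the prompt would be whatever produces a report, wrapped in the `insert into report ... select ...`):

```sql
select completion
from openai_completion
where prompt = 'Write a tagline for an ice cream shop.';
```

which comes back with completions like: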
    2. Indulge your cravings with creamy goodness!
    3. Sweeten your day with a scoop of our tasty treats!
Since you're starting from Supabase, which you can't add full-sized Steampipe to, you'd have to have Steampipe running elsewhere and link its API-caller-disguised-as-a-table through `postgres_fdw` as a `foreign table`. That's completely in-db and pure SQL, but uses two databases.
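A sketch of that linking, assuming Steampipe's service is exposed on its default port 9193 with the OpenAI plugin installed (host, credentials, and schema name are illustrative):

```sql
create extension if not exists postgres_fdw;

-- point at the external Steampipe instance
-- (its defaults: database "steampipe", user "steampipe")
create server steampipe
  foreign data wrapper postgres_fdw
  options (host 'steampipe.example.com', port '9193', dbname 'steampipe');

create user mapping for current_user
  server steampipe
  options (user 'steampipe', password '...');

-- pull the plugin's tables in as local foreign tables
import foreign schema openai
  from server steampipe
  into public;
```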
There might be a way you could build a "Steampipe Postgres FDW" and add that to Supabase, arriving at a self-contained, monolithic, completely in-db setup, but it seems Supabase only welcomes plain SQL or PL/pgSQL extensions from outside their pre-configured list, and the Steampipe-generated FDW is a bit more complex than that.