
I’m using Postgres (via Supabase) and Node (via NextJS on Vercel).

I have a table report which has a column for an enum region, and a generated_text column whose content is generated by AI.

create type region as enum (
  'America',
  'Europe',
  'Asia',
  'Africa'
);

create table report (
  id serial primary key,
  region region,
  generated_text text
);

A user selects their region and then purchases a report. Purchases are recorded in a join-table report_user.

create table report_user (
  id serial primary key,
  report_id integer not null references report (id),
  user_id uuid not null
);

When the user makes a purchase:

  • If there is a row in the report table for that region which the user hasn’t purchased before, then that row is returned to them (the lookup is sketched below).
  • If there isn’t a report for that region, or if the user has already purchased all of the reports for that region, then we need to create a new row. This requires a request to an AI API for the generated_text column, which is very slow and makes for a bad user experience.
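
For the first case, the lookup is roughly ($1 stands in for the current user’s id):

-- find a report for the region that this user hasn't purchased yet
select r.*
from report r
where r.region = 'Europe'
  and not exists (
    select 1
    from report_user ru
    where ru.report_id = r.id
      and ru.user_id = $1
  )
limit 1;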

I’d like to improve the speed for users, and this can only be done by creating report rows before they are needed. I can bulk-create some, but I don’t know which regions will be more popular (I’ve simplified my example; my actual application has many more filters and thus a lot of potential combinations).

Is there a way to run a background task to create a new report when the user has purchased all report rows for a certain region filter?

2 Answers


  1. You can use a background task to create new report rows when the user has purchased all the report rows for a certain region filter.

    You can use a task queue system like Bull, BullMQ, or Agenda (I haven’t tried Agenda, though) to handle the background tasks.

    Create a background task that generates new reports for a specific region. The task should check whether a new report is needed based on your criteria and, if so, generate one.

    After a user makes a purchase, you can trigger this background task to check if new reports are needed.

    The task queue system will handle the scheduling and processing of background tasks. You can also set up a recurring job to periodically check the state of your reports and generate new ones for regions that need it (see the repeatable-job snippet after the example).

    Here’s an example using Bull. First, install it:
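
    npm install bull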

    // background/tasks/reportGeneration.js
    const Queue = require('bull');
    
    const reportQueue = new Queue('reportQueue', 'redis://127.0.0.1:6379'); // Use your Redis connection string
    
    reportQueue.process(async (job) => {
      const { region } = job.data;
    
      // Check if a new report is needed for the specified region
      const isReportNeeded = await checkIfReportIsNeeded(region);
    
      if (isReportNeeded) {
        // request to the AI API
        const generatedText = await fetchGeneratedTextFromAPI();
    
        // create a new report row
        await createReportRow(region, generatedText);
      }
    
      return {};
    });
    
    async function checkIfReportIsNeeded(region) {
      // query the db to check whether any unpurchased report remains for the region
    }
    
    async function fetchGeneratedTextFromAPI() {
      // fetch generated_text from the AI API
    }
    
    async function createReportRow(region, generatedText) {
      // create a new report row in the database
    }
    
    module.exports = reportQueue;
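
    If you’re on Supabase, the helpers could be filled in with supabase-js along these lines. This is just a sketch: the env var names are placeholders, and I’m reading “a new report is needed” as “no unpurchased report is left for the region”, since the job only carries the region. fetchGeneratedTextFromAPI depends entirely on your AI provider, so I’ve left it out.

    // background/tasks/reportHelpers.js (sketch)
    const { createClient } = require('@supabase/supabase-js');
    
    const supabase = createClient(
      process.env.SUPABASE_URL,
      process.env.SUPABASE_SERVICE_ROLE_KEY
    );
    
    async function checkIfReportIsNeeded(region) {
      // ids of reports that have been purchased at least once
      const { data: purchased, error: purchaseError } = await supabase
        .from('report_user')
        .select('report_id');
      if (purchaseError) throw purchaseError;
    
      // count this region's reports that nobody has purchased yet
      let query = supabase
        .from('report')
        .select('id', { count: 'exact', head: true })
        .eq('region', region);
      if (purchased.length > 0) {
        const ids = purchased.map((p) => p.report_id).join(',');
        query = query.not('id', 'in', `(${ids})`);
      }
      const { count, error } = await query;
      if (error) throw error;
    
      // a new report is needed when the region has no unpurchased rows left
      return count === 0;
    }
    
    async function createReportRow(region, generatedText) {
      const { error } = await supabase
        .from('report')
        .insert({ region, generated_text: generatedText });
      if (error) throw error;
    }
    
    module.exports = { checkIfReportIsNeeded, createReportRow };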
    

    Then, in your main code:

    // Trigger the background task after a user makes a purchase
    const reportQueue = require('./background/tasks/reportGeneration');
    
    // Example: Trigger the task when a user makes a purchase
    const userPurchaseHandler = async (region) => {
      const job = await reportQueue.add({ region });
      console.log(`Report generation task added for region: ${region}, job ID: ${job.id}`);
    };
    
    // Call this function when a user makes a purchase
    userPurchaseHandler('Europe');
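
    For the recurring check, Bull also supports repeatable jobs; something like this (the cron schedule and the hard-coded region list are just examples):

    // enqueue a repeatable sweep per region, e.g. every 15 minutes
    const REGIONS = ['America', 'Europe', 'Asia', 'Africa'];
    for (const region of REGIONS) {
      reportQueue.add({ region }, { repeat: { cron: '*/15 * * * *' } });
    }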
    

    The above is quite basic, just to get you started. Don’t forget to handle errors.

  2. “Is there a way to run a background task to create…when…”

    Yes, there is. Since you didn’t mention any specifics, preferences, or limitations, I’ll run with the interpretation where you mean to do it all inside Supabase:

    1. Supabase supports the pg_cron extension, but to hit an API you’ll need an external app that stays connected to the db and LISTENs on a channel where your pg_cron job, or even a simple trigger, broadcasts pg_notify() orders to top up your inventory when you run low (a sketch follows below).

      The trigger would be better because it can fire any time someone fetches a report and immediately notify if a refill is needed. A recurring, time-based pg_cron job runs the risk that you’ll sometimes run out and have to wait until its next scheduled check-in to re-stock.

      In both cases it might be best to collect stats on who needs a refill, and how often, so you can tune the parameter dictating how far ahead of demand you want to stay. Depending on how the API works, what traffic you’re taking, and your storage/db config, you might want to let the trigger categorise users into green, yellow, and red groups: don’t refill anyone’s reports when they merely drop from green to yellow, but as soon as at least one runs red, run a single bulk refill for everyone in either yellow or red. One large request and one large insert every now and then, replacing a continuous stream of tiny ones.
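
      A minimal sketch of the trigger + notify path, assuming the schema above (the channel and function names are mine, and “running low” is simplified to “no unpurchased report left for the region”):

      -- notify the listener when a purchase leaves a region out of stock
      create or replace function notify_restock() returns trigger as $$
      declare
        purchased_region region;
      begin
        select r.region into purchased_region
        from report r
        where r.id = new.report_id;

        if not exists (
          select 1
          from report r
          where r.region = purchased_region
            and not exists (
              select 1 from report_user ru where ru.report_id = r.id
            )
        ) then
          perform pg_notify('report_restock', purchased_region::text);
        end if;

        return new;
      end;
      $$ language plpgsql;

      create trigger report_user_restock
      after insert on report_user
      for each row execute function notify_restock();

      And the external app sitting on the channel, here with node-postgres:

      const { Client } = require('pg');

      const client = new Client({ connectionString: process.env.DATABASE_URL });

      client.connect()
        .then(() => client.query('LISTEN report_restock'))
        .then(() => console.log('listening for restock orders'));

      client.on('notification', async (msg) => {
        const region = msg.payload; // the region broadcast by the trigger
        // call the AI API here, then insert the new report row for `region`
      });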

    2. It’s possible to build a Steampipe plugin for the API you’re getting the reports from and call it by simply running an insert into...select... in the stock-monitoring trigger, or the pg_cron job:

      insert into report(region,generated_text) 
      select 'Europe',generated_text
      from your_steampipe_wrapper_for_the_AI_api;
      

      or use an existing one (CohereAI, OpenAI):

      select completion
      from openai_completion
      where prompt = 'Write a tagline for an ice cream shop.';
      
      completion
      1. Cool down with a scoop of our delicious ice cream!
      2. Indulge your cravings with creamy goodness!
      3. Sweeten your day with a scoop of our tasty treats!

      Since you’re starting from Supabase, which you can’t add full-sized Steampipe to, you’d have to have Steampipe running elsewhere and link its API-caller-disguised-as-a-table through postgres_fdw as a foreign table, roughly as sketched below. That’s completely in-db and pure SQL, but it uses two db’s.
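
      The linking itself is stock postgres_fdw, roughly like this on the Supabase side (host and credentials are yours to fill in; Steampipe’s Postgres endpoint defaults to port 9193 and database steampipe):

      create extension if not exists postgres_fdw;

      create server steampipe
        foreign data wrapper postgres_fdw
        options (host 'where-steampipe-runs', port '9193', dbname 'steampipe');

      create user mapping for current_user
        server steampipe
        options (user 'steampipe', password '...');

      -- pull in the plugin's tables, e.g. the OpenAI plugin's schema
      import foreign schema openai
        from server steampipe
        into public;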

    3. There might be a way you could build a "Steampipe postgres fdw" and add that to Supabase arriving at a self-contained, monolithic, completely in-db setup, but it seems Supabase only welcomes plain SQL or PL/pgSQL extensions from outside their pre-configured list, and the Steampipe-generated FDW is a bit more complex than that.
