I have the following table:

┌────────────────┬─────────────────────────────┬───────────┬──────────┬──────────────────────────────────────────────────────────────────┐
│     Column     │            Type             │ Collation │ Nullable │                             Default                              │
├────────────────┼─────────────────────────────┼───────────┼──────────┼──────────────────────────────────────────────────────────────────┤
│ id             │ bigint                      │           │ not null │ nextval('"HistoricalDataAggregatorWorkOrders_id_seq"'::regclass) │
│ inputTimeRange │ tstzrange                   │           │ not null │                                                                  │
│ outputTags     │ tag[]                       │           │ not null │                                                                  │
└────────────────┴─────────────────────────────┴───────────┴──────────┴──────────────────────────────────────────────────────────────────┘

I want to bulk insert a bunch of rows into it in one parameterized query. I thought I’d be able to do it the usual way with unnest:

INSERT INTO "HistoricalDataAggregatorWorkOrders"
  ("inputTimeRange", "outputTags")
  SELECT * from unnest($1::tstzrange[], $2::tag[][])
  RETURNING id;

But unfortunately, unnest doesn’t work the way I want: it fully flattens a multidimensional array into its scalar elements rather than peeling off just one dimension.

I gather from https://stackoverflow.com/a/8142998/200224 that there’s no builtin unnest-one-level function.

Is there really no simple way to bulk insert with a parameterized query when one column is an array type? Seems like a missing feature…

2 Answers


  1. Chosen as BEST ANSWER

    I don't love it but I went with a jsonb[] parameter:

    INSERT INTO "HistoricalDataAggregatorWorkOrders"
      ("inputTimeRange", "outputTags")
      SELECT (o->>'inputTimeRange')::tstzrange, (o->>'outputTags')::tag[]
        FROM unnest($1::jsonb[]) u(o)
      RETURNING id;
    

    Update: I realized I could unnest a text[] and then cast each element to ::tag[]:

    INSERT INTO "HistoricalDataAggregatorWorkOrders"
      ("inputTimeRange", "outputTags")
      SELECT "inputTimeRange", "outputTags"::tag[]
        FROM unnest($1::tstzrange[], $2::text[])
          u("inputTimeRange", "outputTags")
      RETURNING *;
    

    It still doesn't feel that clean... if there isn't a cleaner way to do this with builtin features, then one should be added.

    Ideally I'd like a way to leave serialization of the parameters completely up to the db driver (node-pg, which may have limitations).

    If anyone knows a more elegant way to do it, let me know.
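
    For reference, a sketch of the driver-side serialization for the jsonb[] approach (the row shape and helper names below are mine, not from node-pg). Since o->>'outputTags' extracts the field as text, the outputTags value inside each JSON object must already be a Postgres array literal for the ::tag[] cast to succeed:

```typescript
// Sketch only: row shape and helper names are illustrative, not from the post.
// Quote each element as a Postgres array-literal member, escaping embedded
// backslashes and double quotes.
function toPgArrayLiteral(elems: string[]): string {
  const quoted = elems.map(
    (e) => '"' + e.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"',
  );
  return "{" + quoted.join(",") + "}";
}

// node-pg serializes a JS string[] as a Postgres array, so the whole batch
// can travel as one $1::jsonb[] parameter of JSON.stringify'd rows.
function toJsonbArrayParam(
  rows: { inputTimeRange: string; outputTags: string[] }[],
): string[] {
  return rows.map((r) =>
    JSON.stringify({
      inputTimeRange: r.inputTimeRange,
      outputTags: toPgArrayLiteral(r.outputTags),
    }),
  );
}
```

    The resulting string[] can then be passed as the single query parameter, e.g. pool.query(sql, [toJsonbArrayParam(rows)]) with a pg Pool.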


  2. If the lengths of both arrays are designed to match, you can unpack just one of them, asking each element to also present its index (the WITH ORDINALITY clause), and then use that index to address the elements of the other array.
    demo at db<>fiddle

    PREPARE myinsert(tstzrange[],tag[][]) AS
    INSERT INTO "HistoricalDataAggregatorWorkOrders"
      ("inputTimeRange", "outputTags")
    SELECT "inputTimeRange"
          ,array(select unnest(($2::tag[][])[i:i]))
    FROM unnest($1::tstzrange[]) with ordinality as u("inputTimeRange",i)
    RETURNING id;
    

    Since SQL array subscripting can only return either individual elements or whole slices of the same dimensionality as the input array (not arrays of N−K dimensions when you skip K subscripts), array(select unnest(<slice>)) has to peel that one dimension away – otherwise you’d get a "nested" 2D {{(abc),(def)}} instead of the 1D {(abc),(def)} you want.

    execute myinsert( '{ "[2024-05-05T01:00,2024-05-05T02:00]"
                        ,"[2024-05-05T03:00,2024-05-05T04:00]"}'
                     ,'{ {"(abc)","(def)"}
                        ,{"(ghi)","(jkl)"} }');
    
     id │                   inputTimeRange                    │  outputTags
    ────┼─────────────────────────────────────────────────────┼───────────────
      1 │ ["2024-05-05 01:00:00+01","2024-05-05 02:00:00+01"] │ {(abc),(def)}
      2 │ ["2024-05-05 03:00:00+01","2024-05-05 04:00:00+01"] │ {(ghi),(jkl)}

    The two unnest calls should perform comparably to a single call over both arrays at once, but the NULL-padding only works one way here (when $1 is the longer array), whereas the single multi-array unnest pads either side:

    If the arrays are not all the same length then the shorter ones are padded with NULLs.

    Except for the fact that it saves you from switching to jsonb and back, or from going through the text[]::tag[] casts, it’s still not much cleaner than what you had already.
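
    As an aside, node-pg won't serialize a nested JS array of a custom type for the tag[][] parameter, so the 2-D literal in the EXECUTE above has to be built by hand. A sketch (the helper name is mine, not from node-pg):

```typescript
// Hypothetical helper (not part of the answer): turn nested JS arrays such as
// [["(abc)","(def)"],["(ghi)","(jkl)"]] into the Postgres 2-D array literal
// {{"(abc)","(def)"},{"(ghi)","(jkl)"}}, quoting each element and escaping
// embedded backslashes and double quotes.
function toPg2dArrayLiteral(rows: string[][]): string {
  const inner = rows.map(
    (row) =>
      "{" +
      row
        .map((e) => '"' + e.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"')
        .join(",") +
      "}",
  );
  return "{" + inner.join(",") + "}";
}
```

    Postgres multidimensional arrays must be rectangular, so every inner array must have the same length – which is also what lets the [i:i] slice trick line up with $1.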
