I am building a function that inserts a record into an audit table every time a record is inserted, updated, or deleted in the parent table.
I am struggling to insert a single JSON object containing the parent table's primary key columns and their values. The parent table is guaranteed to have a primary key, but the number of key columns varies. The goal is for this JSON object to serve as a unique identifier for the parent row in the audit table, so we can prune the audit table down to the last X operations per record in the parent table.
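For context, the kind of pruning this identifier is meant to enable would look something like the sketch below, run against the audit table defined further down:

-- Keep only the most recent 5 audit rows per parent row (5 stands in for X;
-- the table name assumes parent_table lives in the public schema).
DELETE FROM audit.public__parent_table_history h
USING (
    SELECT ctid,
           row_number() OVER (PARTITION BY primary_keys
                              ORDER BY etl_modified_timestamp DESC) AS rn
    FROM audit.public__parent_table_history
) ranked
WHERE h.ctid = ranked.ctid
  AND ranked.rn > 5;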
Parent table:
CREATE TABLE parent_table (
id int4 NOT NULL,
textfield text NULL,
id2 int4 NOT NULL,
CONSTRAINT test_audit_table_pk PRIMARY KEY (id,id2)
);
CREATE TRIGGER t_insert_upate_delete AFTER INSERT OR DELETE OR UPDATE ON parent_table
FOR EACH ROW EXECUTE FUNCTION audit.audit_trigger();
INSERT INTO parent_table(id, id2, textfield)
VALUES (9, 10, 'TODAY');
Working PL/pgSQL in the trigger function:
CREATE OR REPLACE FUNCTION audit_trigger()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $function$
DECLARE
audit_table_schema VARCHAR = 'audit';
audit_table_name VARCHAR = TG_TABLE_SCHEMA || '__' || TG_TABLE_NAME || '_history';
audit_index_name VARCHAR = audit_table_name || '_etl_modified_timestamp_idx';
stmt VARCHAR;
BEGIN
-- Execute a CREATE IF NOT EXISTS statement for the audit table
EXECUTE format($$
CREATE TABLE IF NOT EXISTS %1$I.%2$I (
tabname text NULL,
schemaname text NULL,
operation text NULL,
new_val jsonb NULL,
old_val jsonb NULL,
updated_cols text NULL,
primary_keys jsonb NULL, -- INSERTING INTO THIS COLUMN IS MY ISSUE
etl_modified_timestamp timestamptz NULL)$$,
audit_table_schema, audit_table_name);
IF TG_OP = 'INSERT' THEN
-- This is the query that needs to be updated to also insert one JSON object of the parent table's primary keys and their values into the `primary_keys` column of the audit table.
-- Expected JSON object to insert would take the form {"id": 9, "id2": 10}
stmt = format($$
INSERT INTO %1$I.%2$I (
tabname,
schemaname,
operation,
new_val,
-- primary_keys, Can't get this to work
etl_modified_timestamp
)
VALUES ($1, $2, $3, TO_JSONB($4), CURRENT_TIMESTAMP)$$, audit_table_schema, audit_table_name);
ELSIF TG_OP = 'UPDATE' AND akeys(hstore(NEW.*) - hstore(OLD.*)) != akeys(hstore('')) THEN
-- Similar INSERT statement
RETURN NULL;
ELSIF TG_OP = 'DELETE' THEN
-- Similar INSERT statement
RETURN NULL;
END IF;
EXECUTE stmt USING TG_TABLE_NAME, TG_TABLE_SCHEMA, TG_OP, NEW, OLD;
RETURN NULL;
END;
$function$
;
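For reference, after the sample INSERT above this function produces an audit row roughly like the one below (assuming parent_table lives in the public schema; primary_keys is still NULL because nothing populates it yet):

SELECT tabname, schemaname, operation, new_val, primary_keys
FROM audit.public__parent_table_history;
-- tabname      | parent_table
-- schemaname   | public
-- operation    | INSERT
-- new_val      | {"id": 9, "id2": 10, "textfield": "TODAY"}
-- primary_keys | NULL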
This PL/pgSQL code works to extract the primary keys and loop over them:
determine_pks_stmt = $$
SELECT kcu.column_name AS pk
FROM
information_schema.table_constraints tc
JOIN
information_schema.key_column_usage kcu
ON
tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
WHERE
tc.constraint_type = 'PRIMARY KEY'
AND tc.table_name = $1
AND tc.table_schema = $2
$$;
RAISE NOTICE 'determine_pks_stmt: %', determine_pks_stmt;
FOR pks IN EXECUTE determine_pks_stmt USING TG_TABLE_NAME, TG_TABLE_SCHEMA
LOOP
RAISE NOTICE 'pks.pk: %', pks.pk;
-- But how do I build up the query statement to extract the JSON here?
END LOOP;
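For the sample parent_table above, this loop raises one notice per key column (the order is not guaranteed, since the query has no ORDER BY):

NOTICE:  pks.pk: id
NOTICE:  pks.pk: id2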
Failed efforts:
json_pks_stmt VARCHAR = '';
FOR pks IN EXECUTE determine_pks_stmt USING TG_TABLE_NAME, TG_TABLE_SCHEMA
LOOP
-- Goal is to create something like this which normally works in SQL
-- json_build_object('id', new_val->'id', 'id2', new_val->'id2')
IF counter != 0 THEN
json_pks_stmt = json_pks_stmt || ', ' ;
END IF;
json_pks_stmt = json_pks_stmt || pks.pk || ', ' || TO_JSONB(NEW)-> || pks.pk ;
counter = counter + 1;
END LOOP;
json_pks_stmt = json_pks_stmt || ')';
But I'm struggling with syntax errors. Thank you.
2 Answers
I never found out how to make it work with jsonb, but switched to using an hstore column type instead and built up a string convertible to hstore.

Disadvantages:
- hstore only stores and represents values as text rather than as other data types (such as int)
- the hstore column uses a different syntax than the existing jsonb columns

You seem to be looking for something that will build an SQL expression like json_build_object('id', NEW.id, 'id2', NEW.id2).
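A minimal sketch of what that could look like in the INSERT branch of the trigger function, assuming a text variable pk_expr is added to the DECLARE block (the name is illustrative; jsonb_build_object is used here rather than json_build_object because the primary_keys column is jsonb, and the DELETE branch would build the expression around $5, which is bound to OLD):

-- Build the argument list for jsonb_build_object from the primary-key query,
-- e.g. 'id', ($4).id, 'id2', ($4).id2 for the sample table.
SELECT string_agg(format('%L, ($4).%I', kcu.column_name, kcu.column_name), ', ')
INTO pk_expr
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
    ON tc.constraint_name = kcu.constraint_name
    AND tc.table_schema = kcu.table_schema
WHERE tc.constraint_type = 'PRIMARY KEY'
    AND tc.table_name = TG_TABLE_NAME
    AND tc.table_schema = TG_TABLE_SCHEMA;

stmt = format($$
    INSERT INTO %1$I.%2$I (
        tabname,
        schemaname,
        operation,
        new_val,
        primary_keys,
        etl_modified_timestamp
    )
    VALUES ($1, $2, $3, TO_JSONB($4), jsonb_build_object(%3$s), CURRENT_TIMESTAMP)$$,
    audit_table_schema, audit_table_name, pk_expr);

The existing EXECUTE stmt USING TG_TABLE_NAME, TG_TABLE_SCHEMA, TG_OP, NEW, OLD; call can stay as it is, since NEW is already bound to $4. Alternatively, the same primary-key query can feed jsonb_object_agg(kcu.column_name, to_jsonb(NEW) -> kcu.column_name::text) to build the jsonb value directly in PL/pgSQL, with no generated SQL text at all.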