I have this table:
CREATE TABLE IF NOT EXISTS public.campaign_result
(
    id integer NOT NULL DEFAULT nextval('campaign_result_id_seq'::regclass),
    state result_state NOT NULL DEFAULT 'processing'::result_state,
    data json NOT NULL,
    response json,
    act result_action NOT NULL DEFAULT 'none'::result_action,
    message character varying COLLATE pg_catalog."default",
    campaign_id uuid NOT NULL,
    created_at timestamp with time zone DEFAULT now(),
    CONSTRAINT campaign_result_pkey PRIMARY KEY (id),
    CONSTRAINT campaign_result_campaign_id_fkey FOREIGN KEY (campaign_id)
        REFERENCES public.campaign (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)
This table has almost 1200 entries, but when I do inserts into it, I get this error:
Failed due to duplicate key value violates unique constraint "campaign_result_pkey"
DETAIL: Key (id)=(137) already exists.
This is how I am inserting into it:
def campaign_result_insert(item, message, campaign_id):
    conn = connect_db()
    with conn:  # commits the transaction on success, rolls back on error
        cur = conn.cursor()
        cur.execute(
            """INSERT INTO campaign_result (data, campaign_id, message)
               VALUES (%s::json, %s, %s)
               RETURNING id, data;""",
            (item, campaign_id, message),
        )
        return cur.fetchone()
That’s a really weird issue. What is the problem?
Answers
As you guys pointed out in the comments, the fix was simply to change the current value of the campaign_result_id_seq sequence. I used pgAdmin 4 and followed these steps:

1. Select campaign_result_id_seq and open its Properties.
2. Go to the Definition tab and change the current value.

Alternatively, the following query synchronizes the sequence with the table's ID values:
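-- Set the sequence to the table's current maximum id; the next
-- nextval() call then returns MAX(id) + 1.
SELECT setval('campaign_result_id_seq', (SELECT MAX(id) FROM campaign_result));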
I’ve often used this approach when bulk loading data for testing.
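If the table can ever be empty, a common variant uses COALESCE so that setval() never receives NULL (this is a standard PostgreSQL idiom, nothing specific to this schema):

-- With is_called = false, the next nextval() returns exactly this value,
-- so an empty table restarts the sequence at 1.
SELECT setval('campaign_result_id_seq', COALESCE(MAX(id), 0) + 1, false)
FROM campaign_result;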