I applied a unique constraint over email on a Postgres USER table. The problem I am facing now is that the constraint seems to remember every value I insert (or try to insert), whether or not a record with that value still exists.
I.e.:
Table:

| id | email |
|----|-------|
| 1 | [email protected] |
| 2 | [email protected] |
If I insert [email protected], delete that row, and then try to insert [email protected] again, it says:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "email"
My question is: does the unique constraint guarantee that a value can never be reused, or that there is only one record with that value in the column at any given time?
The documentation says the second, but my experience shows the first.
More details:
| Column | Type | Nullable |
|------------------|-----------------------------|----------|
| id | integer | not null |
| email | character varying(100) | |
| password | character varying(100) | |
| name | character varying(1000) | |
| lastname | character varying(1000) | |
| dni | character varying(20) | |
| cellphone | character varying(20) | |
| accepted_terms | boolean | |
| investor_test | boolean | |
| validated_email | boolean | |
| validated_cel | boolean | |
| last_login_at | timestamp without time zone | |
| current_login_at | timestamp without time zone | |
| last_login_ip | character varying(100) | |
| current_login_ip | character varying(100) | |
| login_count | integer | |
| active | boolean | |
| fs_uniquifier | character varying(255) | not null |
| confirmed_at | timestamp without time zone | |
Indexes:
"bondusers_pkey" PRIMARY KEY, btree (id)
"bondusers_email_key" UNIQUE CONSTRAINT, btree (email)
"bondusers_fs_uniquifier_key" UNIQUE CONSTRAINT, btree (fs_uniquifier)
Insert Statement:
INSERT INTO bondusers (email, password, name, lastname, dni, cellphone, accepted_terms, investor_test, validated_email, validated_cel, last_login_at, current_login_at, last_login_ip, current_login_ip, login_count, active, fs_uniquifier, confirmed_at) VALUES ('[email protected]', '$pbkdf2-sha256$29000$XyvlfI8x5vwfYwyhtBYi5A$Hhfrzvqs94MjTCmDOVmmnbUyf7ho4kLEY8UYUCdHPgM', 'mail', 'mail3', '123123123', '1139199196', false, false, false, false, NULL, NULL, NULL, NULL, NULL, true, '1c4e60b34a5641f4b560f8fd1d45872c', NULL);
ERROR: duplicate key value violates unique constraint "bondusers_fs_uniquifier_key"
DETAIL: Key (fs_uniquifier)=(1c4e60b34a5641f4b560f8fd1d45872c) already exists.
But when I run:
select * from bondusers where fs_uniquifier = '1c4e60b34a5641f4b560f8fd1d45872c';
the result is 0 rows.
2 Answers
I assume that if you run the INSERT, DELETE, INSERT sequence directly at the Postgres command line, it works OK?
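For example, something like this (a sketch showing only the NOT NULL/unique columns; it assumes id is filled by a sequence, otherwise supply it too):

```sql
-- Typed directly into psql, the statements run in the order written,
-- so the DELETE lands before the second INSERT and both succeed.
BEGIN;
INSERT INTO bondusers (email, fs_uniquifier) VALUES ('[email protected]', 'u-1');
DELETE FROM bondusers WHERE email = '[email protected]';
INSERT INTO bondusers (email, fs_uniquifier) VALUES ('[email protected]', 'u-1');
COMMIT;
```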
I noticed your error references SQLAlchemy (sqlalchemy.exc.IntegrityError), so I think it may be that and not PostgreSQL. Within a transaction, SQLAlchemy's Unit of Work pattern can re-order SQL statements for performance reasons. The only reference I could find was here: https://github.com/sqlalchemy/sqlalchemy/issues/5735#issuecomment-735939061
So if you have the following within a single transaction:
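For instance (a sketch; `User` is a hypothetical ORM model mapped to bondusers, and `session` is an open Session):

```python
# Delete the old row and add a new one with the same unique values,
# all within one session transaction.
user = session.query(User).filter_by(email="[email protected]").one()
session.delete(user)
session.add(User(email="[email protected]", fs_uniquifier="u-1"))
session.commit()  # the unit of work flushes everything here
```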
When you commit it, it's probably getting reordered as:
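```sql
-- Sketch of the flushed order: by default the unit of work emits
-- INSERTs before DELETEs, so the INSERT runs while the old row is
-- still present and the unique constraint fires.
INSERT INTO bondusers (email, fs_uniquifier) VALUES ('[email protected]', 'u-1');
DELETE FROM bondusers WHERE id = 1;
```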
I have more experience with this problem in Java/Hibernate. The SQLAlchemy docs do claim its unit of work pattern is "Modeled after Fowler's 'Unit of Work' pattern as well as Hibernate, Java's leading object-relational mapper," so it is probably relevant here too.
To supplement Ed Brook's insightful answer, you can work around the problem by flushing the session after deleting the record:
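For example (a sketch, reusing the hypothetical `User` model from the answer above):

```python
user = session.query(User).filter_by(email="[email protected]").one()
session.delete(user)
session.flush()  # emits the DELETE immediately, inside the same transaction
session.add(User(email="[email protected]", fs_uniquifier="u-1"))
session.commit()  # the INSERT now reaches Postgres after the DELETE
```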
Another solution would be to use a deferred constraint, so that the state of the index is not evaluated until the end of the transaction:
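For example (a sketch for the email constraint; the same change would apply to bondusers_fs_uniquifier_key):

```sql
-- Recreate the unique constraint as deferrable so it is checked at
-- COMMIT time instead of after each individual statement.
ALTER TABLE bondusers DROP CONSTRAINT bondusers_email_key;
ALTER TABLE bondusers ADD CONSTRAINT bondusers_email_key
    UNIQUE (email) DEFERRABLE INITIALLY DEFERRED;
```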
but note, from the PostgreSQL documentation:
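"Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause."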