I am using PostgreSQL with Express.js. I came across a problem: when I update my database with new entries and delete old ones, it keeps returning only the old ones (deleted several days ago already), no matter what, as if it were using a cache.
I tried:

- Setting headers – `res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, private');`
- Restarting the database itself, and running `VACUUM ANALYZE` on PostgreSQL.
This is my connection to the database:
```
const { Pool } = require('pg');
const { dbConfig, sshConfig } = require('./database_config');

const dbPool = new Pool(dbConfig);

const connectToDatabase = () => {
  dbPool.connect((error) => {
    if (error) throw error;
  });
};

module.exports = { dbPool, connectToDatabase };
```
This is one of my endpoints:

```
// I connect to the database in index.js
connectToDatabase();

app.get('/getEntries', async (req, res) => {
  try {
    const query = 'SELECT * FROM listings';
    const result = await dbPool.query(query);
    res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, private');
    res.json(result.rows);
  } catch (error) {
    console.error('Error fetching data:', error);
    res.status(500).send('Error fetching data');
  }
});
```
It keeps returning old data that was deleted from the database a while ago. How can I address this issue?
2 Answers
1. In a plain database context, this sort of thing would happen if either your bulk DML (`insert`, `update`, `delete`, `truncate`, `merge` – whichever you meant by "when I update my database") was executed but never committed, or the clients in your pool have autocommit off / are unintentionally operating in transactions. My bet is on the former: your separate, external process of applying the bulk change wasn't followed by a `commit`. If you did that manually from a db client and verified your changes were applied by subsequently running some `select` statements after that bulk DML, the `select` would've shown they are indeed visible – but that's because changes resulting from previous statements are immediately visible within the same transaction. (The first query in the sketch after this list shows how to spot sessions stuck in an open transaction.)

2. It could be confused identifiers: if multiple parts of your system address the same `listings` table using different capitalisation and quoting rules, you might end up with multiple different tables. `Listings`, `LISTINGS`, `LiStInGs` and `listings` are all folded to lowercase `listings`, unless you or something processing your query wraps the identifier in double quotes – then it's treated verbatim, and it's accessible only under the double-quoted name (see the docs on identifiers). You can check `information_schema.tables` to see if that's the case – the second query in the sketch below does exactly that.

3. You could've also created alternate versions of your `listings` table in different namespaces/schemas. It's especially likely if there are schemas named after users (those by default precede other schemas on the `search_path`) and your clients operate under a different user than your big DML did. The query from point 2 should detect that as well.

4. If you're re-using usernames, db names and passwords between environments and you have a lot running on the same network/machine, it's possible your clients are connecting somewhere other than where you end up "via terminal". Thoroughly examine whether the connection params match fully, and check what's running where, on what port. You can create a unique table through one channel and see if it's visible via another. If not, they might be connecting to different places and/or there's transaction isolation in play and you're forgetting to run `commit`.

5. It's possible to quietly alter visibility using row-level security or the rule system. The former acts as if additional, invisible conditions were added to all your queries; the latter can completely redirect your queries.
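A minimal diagnostic sketch for points 1–3, reusing the `dbPool` from the question (`pg_stat_activity` and `information_schema.tables` are standard PostgreSQL catalogs; the wrapper function itself is just illustrative):

```
// One-off diagnostics run through the same pool the app uses.
const runDiagnostics = async () => {
  // Point 1: sessions holding an open, uncommitted transaction.
  const sessions = await dbPool.query(
    "SELECT pid, usename, state, query FROM pg_stat_activity " +
    "WHERE state LIKE 'idle in transaction%'"
  );
  console.table(sessions.rows);

  // Points 2 and 3: every table whose name case-folds to "listings", in any schema.
  const tables = await dbPool.query(
    "SELECT table_schema, table_name FROM information_schema.tables " +
    "WHERE lower(table_name) = 'listings'"
  );
  console.table(tables.rows);
};
```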
Express.js uses `node-postgres` to talk to the db, so there's no way around autocommit except explicitly issuing a `BEGIN` statement. It's very unlikely but technically possible to poison your pool if you attempted to use the `repeatable read` or `serializable` transaction isolation level in `pool.query()`: this would randomly and quietly result in clients being released back into the pool with an open transaction, holding on to an old snapshot of the db. Each client would then be doomed to continue its life in its own parallel universe, where all other life ceased to exist and only its own actions do anything, until it happens to `commit` or `rollback`. That's a big if, assuming clients don't automatically attempt a commit upon release back to the pool.
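If you genuinely need `repeatable read` or `serializable`, the usual `node-postgres` pattern is to check out a dedicated client, so an open transaction can never be released back into the pool unnoticed – a sketch, assuming the `dbPool` from the question:

```
// Runs one query under REPEATABLE READ on a dedicated client, then guarantees
// the transaction is closed before the client returns to the pool.
const readWithRepeatableRead = async () => {
  const client = await dbPool.connect();
  try {
    await client.query('BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ');
    const result = await client.query('SELECT * FROM listings');
    await client.query('COMMIT');
    return result.rows;
  } catch (error) {
    await client.query('ROLLBACK'); // never leave the transaction open
    throw error;
  } finally {
    client.release(); // the client re-enters the pool with no snapshot attached
  }
};
```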
It looks like you are using the `Cache-Control` header to prevent caching on the client side. However, the issue might not be client-side caching but something in the database itself. You can try these solutions to solve the issue:
1. Check the Database Directly: connect to your PostgreSQL database using a GUI tool like pgAdmin. Run the same `SELECT * FROM listings` query directly on the database to ensure that the database is returning the correct, updated data.

2. Database Transactions: make sure that your update and delete operations are committed properly within a transaction. When performing updates and deletes, it's important to commit those changes so they're reflected in subsequent queries. Ensure that you're committing your transactions after performing these operations (a sketch follows this list).
3. Check for Errors in Update/Delete Operations: verify that the update and delete operations complete without errors, and that they actually affect the rows you expect.
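For the last two points, a minimal sketch assuming the asker's `dbPool`; the `title`/`created_at` columns and the `refreshListings` helper are purely illustrative:

```
// Hypothetical maintenance routine: deletes stale rows and inserts new ones
// in one transaction, then reports how many rows were actually affected.
const refreshListings = async (newRows, cutoffDate) => {
  const client = await dbPool.connect();
  try {
    await client.query('BEGIN');
    const deleted = await client.query(
      'DELETE FROM listings WHERE created_at < $1',
      [cutoffDate]
    );
    for (const row of newRows) {
      await client.query(
        'INSERT INTO listings (title, created_at) VALUES ($1, $2)',
        [row.title, row.createdAt]
      );
    }
    await client.query('COMMIT'); // without this, nothing above is visible to other sessions
    console.log(`Deleted ${deleted.rowCount} old rows`); // 0 deletions is a red flag (point 3)
  } catch (error) {
    await client.query('ROLLBACK'); // undo the whole batch on any failure
    throw error;
  } finally {
    client.release();
  }
};
```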