
I am using PostgreSQL with Express.js

I came across a problem: when I update my database with new entries and delete old ones, my endpoints keep returning only the old entries (deleted several days ago already), as if the results were cached.

I tried:

Setting headers – res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, private');

Restarting the database itself and running VACUUM ANALYZE in PostgreSQL;

This is my connection to the database:

const { Pool } = require('pg');
const { dbConfig, sshConfig } = require('./database_config');

const dbPool = new Pool(dbConfig);

const connectToDatabase = () => {
    // Check out one client to verify connectivity, then return it to the pool.
    dbPool.connect((error, client, release) => {
        if (error) throw error;
        release();
    });
};

module.exports = { dbPool, connectToDatabase };

This is one of my endpoints.

//I connect to the database in index.js
connectToDatabase();


app.get('/getEntries', async (req, res) => {
    try {
        const query = 'SELECT * FROM listings';

        const result = await dbPool.query(query);
        res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, private');

        res.json(result.rows);
    } catch (error) {
        console.error('Error fetching data:', error);
        res.status(500).send('Error fetching data');
    }
});

It keeps returning old data that was deleted from the database a while ago. How can I address this issue?

2 Answers


    1. In a plain database context, this sort of thing happens if either your bulk DML (the insert/update/delete/truncate/merge you meant by "when I update my database") was executed but never committed, or the clients in your pool have autocommit off or are unintentionally operating inside open transactions. My bet is on the former: your separate, external process that applied the bulk change wasn’t followed by a COMMIT. If you did it manually from a db client and "verified" the changes by running some SELECTs right after the bulk DML, those SELECTs would indeed have shown the changes, but only because the effects of previous statements are immediately visible within the same transaction. The first sketch after this list shows a quick way to spot sessions left idle in an open transaction.

    2. It could be confused identifiers: if different parts of your system address the same listings table using different capitalisation and quoting rules, you can end up with multiple distinct tables. Listings, LISTINGS, LiStInGs and listings are all folded to the lowercase listings, unless you (or something processing your query) wrap the identifier in double quotes – then it’s taken verbatim and is accessible only through the double-quoted name (from the docs):

      Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case.

      You can check information_schema.tables to see whether that’s the case:

      select table_schema,table_name 
      from information_schema.tables 
      where table_name ilike '%listings%';
      
    3. You could also have created alternate versions of your listings table in different namespaces/schemas. It’s especially likely if there are schemas named after users (by default those precede other schemas on the search_path) and your clients operate under a different user than your big DML did. The query from point 2 should detect that as well.

    4. If you’re re-using usernames, database names and passwords between environments, and you have a lot running on the same network/machine, it’s possible your clients are connecting somewhere other than where you end up "via terminal". Check thoroughly that the connection parameters match in full, and see what’s running where and on which port; the first sketch after this list prints what the pool is actually connected to. You can also create a uniquely named table through one channel and see if it’s visible via the other. If not, they are connecting to different places and/or there’s transaction isolation in play and you’re forgetting to run COMMIT.

    5. It’s also possible to quietly alter visibility using row-level security or the rule system: the former acts as if additional, invisible conditions were added to all your queries, while the latter can redirect your queries entirely. The first sketch after this list includes queries against pg_policies and pg_rules to check for both.

    6. Your app talks to the database through node-postgres, which always runs in autocommit mode: there’s no way around it other than explicitly issuing a BEGIN statement. It’s very unlikely, but technically possible, to poison your pool if you attempted to set a repeatable read or serializable isolation level through pool.query(): that would randomly and quietly leave clients released back into the pool with an open transaction, holding on to an old snapshot of the db. Each such client would then be doomed to live in its own parallel universe, where all other activity has ceased to exist and only its own actions change anything, until it happens to COMMIT or ROLLBACK. That’s a big if, assuming clients don’t automatically attempt a commit upon release back to the pool. The second sketch after this list shows how to run such a transaction safely on a dedicated client.
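    A minimal diagnostic sketch for points 1, 4 and 5, assuming the dbPool exported by the question’s connection module (the require path is a placeholder); everything else is PostgreSQL’s own functions and system views. It prints where the pool is actually connected, any sessions left idle in an open transaction, and any row-level-security policies or rewrite rules attached to listings:

      const { dbPool } = require('./database'); // placeholder path to the question's connection module

      const runDiagnostics = async () => {
          // Point 4: which database/user/host/port is this pool really talking to?
          // (inet_server_addr() is NULL for Unix-socket connections.)
          const where = await dbPool.query(
              `SELECT current_database() AS db, current_user AS usr,
                      inet_server_addr() AS host, inet_server_port() AS port`
          );
          console.log('connected to:', where.rows[0]);

          // Point 1: sessions holding an open, uncommitted transaction
          const idle = await dbPool.query(
              `SELECT pid, usename, xact_start, query
               FROM pg_stat_activity
               WHERE state = 'idle in transaction'`
          );
          console.log('idle in transaction:', idle.rows);

          // Point 5: row-level-security policies and rewrite rules on listings
          const policies = await dbPool.query(
              `SELECT schemaname, tablename, policyname, cmd, qual
               FROM pg_policies WHERE tablename = 'listings'`
          );
          const rules = await dbPool.query(
              `SELECT schemaname, tablename, rulename, definition
               FROM pg_rules WHERE tablename = 'listings'`
          );
          console.log('policies:', policies.rows, 'rules:', rules.rows);
      };

    If the idle-in-transaction list is non-empty, the uncommitted bulk DML from point 1 is the likely culprit.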

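    For point 6, a sketch of the usual node-postgres pattern when you really need a non-default isolation level, assuming the same dbPool: pin a single client for the whole transaction and release it in a finally block, so no client ever returns to the pool with an open snapshot (the table and isolation level here are just illustrative):

      const readListingsSnapshot = async () => {
          // pool.query('BEGIN ...') is the anti-pattern: each pool.query() may run on a
          // different client, so the BEGIN can strand some client mid-transaction.
          const client = await dbPool.connect();
          try {
              await client.query('BEGIN ISOLATION LEVEL REPEATABLE READ');
              const result = await client.query('SELECT * FROM listings');
              await client.query('COMMIT');
              return result.rows;
          } catch (error) {
              await client.query('ROLLBACK');
              throw error;
          } finally {
              client.release(); // never hand a client back with an open transaction
          }
      };
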
  It looks like you are using the Cache-Control header to prevent caching on the client side. However, the issue might not be related to the client side but to the database itself.

    You can try these solutions to solve the issue:

    1. Check the Database Directly:

      Connect to your PostgreSQL database using a GUI tool such as pgAdmin and run the same SELECT * FROM listings query directly to confirm that the database itself returns the correct, updated data.

    2. Database Transactions:

      Make sure that your update and delete operations are committed properly. If you perform them inside an explicit transaction, the changes only become visible to other sessions once you commit, so ensure you are committing after these operations.

    3. Check for Errors in Update/Delete Operations:

      Verify that the update and delete statements actually complete without errors and report a non-zero affected row count; the sketch after this list shows one way to check that from node-postgres.
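    As a sketch of points 2 and 3, assuming the same app, dbPool and listings table from the question (the route, id column and status codes are illustrative): single statements sent through pool.query() are autocommitted by PostgreSQL, and result.rowCount tells you whether a DELETE or UPDATE actually touched any rows.

      app.delete('/deleteEntry/:id', async (req, res) => {
          try {
              // A single statement via pool.query() runs in its own autocommitted transaction.
              const result = await dbPool.query(
                  'DELETE FROM listings WHERE id = $1',
                  [req.params.id]
              );

              // rowCount === 0 means nothing matched: the delete silently did nothing.
              if (result.rowCount === 0) {
                  return res.status(404).send('No matching listing found');
              }
              res.send(`Deleted ${result.rowCount} listing(s)`);
          } catch (error) {
              console.error('Error deleting data:', error);
              res.status(500).send('Error deleting data');
          }
      });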
