
I’m running a fairly large SQL script (approx. 300k insert statements) against a Postgres database and I’m getting a ‘no space left on device’ error.

I know this error means there isn’t enough disk space for Postgres to complete the operation. The odd thing is that if I split the file into 10 pieces and run each piece individually, everything completes without error, and once all the rows are inserted there is still plenty of disk space left over.
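For reference, splitting a large script into equal-sized pieces can be done with the standard `split` utility. This is a sketch, not the exact commands I used; the filenames are placeholders, and it assumes each insert statement sits on its own line (it uses a 300-line file split into 30-line chunks as a stand-in for the real script):

```shell
# stand-in for the real script: 300 one-line statements
seq 300 > inserts.sql

# split into 30-line pieces named part_aa, part_ab, ... (10 pieces here)
split -l 30 inserts.sql part_

ls part_*
```

Concatenating the pieces back together (`cat part_* > rejoined.sql`) reproduces the original file, since the generated suffixes sort in order.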

The SQL script creates a few tables, then has thousands of individual insert statements, and finally creates some indexes at the end. I’ve left the default auto-commit behaviour in place; I know this isn’t optimal from a performance point of view, but that isn’t an issue for me in this case.

Does Postgres create temporary files while processing an SQL script that might be causing the issue?

Not sure it’s relevant, but I’m running the script via Ansible, and the Postgres database is in a Docker container.

2 Answers


  1. Chosen as BEST ANSWER

    OK, it turns out this was an issue with Docker configuration rather than Postgres. Digging through the logs, I found a more detailed error message: “could not resize shared memory segment: No space left on device”. Increasing the shared memory for my Postgres Docker container from the default of 64MB up to 256MB, as detailed here, fixed the issue.
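    For anyone hitting the same thing: the shared-memory size can be raised either with `docker run --shm-size=256m …` or, in Compose, with the `shm_size` key. A minimal sketch (the service name and image tag are just examples):

    ```yaml
    services:
      db:
        image: postgres:16
        shm_size: 256mb
    ```

    Postgres uses `/dev/shm` for parallel query workers among other things, so the 64MB Docker default can be exhausted even when the disk itself has plenty of free space, which matches the symptoms in the question.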


  2. Yes, Postgres creates temporary files, see https://adriennedomingus.com/tech/understanding-temp-files-in-postgres/

    If you run out of temp file space you could see a PGError that reads something like could not write to temporary file: No space left on device. This will cause the queries in question to fail to execute entirely. And the problem can be compounding — too many temp files generated by one query can cause another query to not have the space it needs to execute.

    So, when you get this error, it does not necessarily mean that your disk is full. It could mean that your script is running up against limits set in the Postgres configuration. You can increase temp_file_limit, but it’s usually better to split your script into multiple batches, so that if things fail partway through and you roll back, you don’t have to start all over again.
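    Checking and raising the limit is a matter of a couple of statements; this is a sketch, and note that raising temp_file_limit generally requires superuser (or equivalent SET) privileges, and the value here is only an example:

    ```sql
    -- current per-process cap on temp file space; -1 means no limit (the default)
    SHOW temp_file_limit;

    -- raise the cap for the current session only
    SET temp_file_limit = '5GB';
    ```

    The setting is per process, not global, so several concurrent queries can each consume up to the limit at once.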
