I want to verify the data in some Postgres dumps.
I’ve created a script that imports the dump into a fresh Postgres DB, and verifies the data.
This runs as a cronjob in Kubernetes.
The CronJob's pod contains two containers: the default container, which runs the script, and a postgresql container that serves as the database.
jobTemplate:
  spec:
    template:
      spec:
        containers:
          - name: verifier
            image: "verifier:v1.0.2"
            env:
              # ...
          - name: postgres
            image: postgres:16.1
            env:
              - name: POSTGRES_PASSWORD
                value: test
This executes fine, but when the verifier script is done its container terminates, while the postgresql container keeps running. The pod then remains in the NotReady state:
verifier-28577575-zq24s 1/2 NotReady 0 57m
I’ve tried adding activeDeadlineSeconds: 720, but it does not seem to trigger.
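For reference, a Job-level activeDeadlineSeconds sits under jobTemplate.spec; a minimal placement sketch (shown for context, not necessarily where it was set in the attempt above):

jobTemplate:
  spec:
    activeDeadlineSeconds: 720   # terminates the Job's pods once the Job has been active this long
    template:
      spec:
        containers:
          # ...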
Is there a way to take down the remaining containers when the default container has completed execution?
2 Answers
This was resolved using a liveness probe: the containers share a volume, and the postgresql container uses cat file (checking a file on that volume) as its liveness probe. The verifier script (verifier.sh) creates the file at startup and removes it when it is done executing. The postgresql container then dies on the next probe check.
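A minimal sketch of that setup, assuming the shared emptyDir is mounted at /shared and the flag file is /shared/alive (those names, and the probe timings, are illustrative rather than taken from the original post):

jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
        volumes:
          - name: shared
            emptyDir: {}
        containers:
          - name: verifier
            image: "verifier:v1.0.2"
            volumeMounts:
              - name: shared
                mountPath: /shared
            # verifier.sh touches /shared/alive on start and deletes it when finished
          - name: postgres
            image: postgres:16.1
            env:
              - name: POSTGRES_PASSWORD
                value: test
            volumeMounts:
              - name: shared
                mountPath: /shared
            livenessProbe:
              exec:
                command: ["cat", "/shared/alive"]
              periodSeconds: 10
              failureThreshold: 1

Once the file is gone, the probe fails and the kubelet stops the postgres container; with no container left running, the pod can finish.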
To my knowledge, there is no such way. We are talking about a single pod: as long as one container is running, the pod is considered active. What I would do to solve this problem is base my verifier image on postgres:16.1, run my verification script there, and shut down Postgres after my checks. Actually, you do not even have to do that, since you can simply mount your *.sql[.gz] dump into /docker-entrypoint-initdb.d – if something goes wrong during the import, the container will fail. Using Prometheus and the Job’s status, you can do the alerting.
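A rough sketch of the first suggestion (run the checks inside a single postgres-based container, then stop the server so the Job can complete); the dump path, the verification query, and the volume mount (omitted here) are placeholders, not from the answer:

jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: verify
            image: postgres:16.1   # or a verifier image built FROM postgres:16.1
            env:
              - name: POSTGRES_PASSWORD
                value: test
            command: ["bash", "-ec"]
            args:
              - |
                # start the server in the background via the stock entrypoint
                docker-entrypoint.sh postgres &
                # wait until the local socket accepts connections (simplified)
                until pg_isready -U postgres; do sleep 1; done
                # import the dump and run the checks (placeholder paths and queries)
                psql -U postgres -f /dumps/dump.sql
                psql -U postgres -c 'SELECT count(*) FROM some_table;'
                # stop the server so the container, and with it the Job, completes
                gosu postgres pg_ctl stop -m fast

With this shape the Job's status stays meaningful: the pod succeeds only if the import and the checks succeed, which is what the alerting can key on.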