
I have noticed that recently, my containers will randomly stop and restart while using Docker. I don't know whether this is a memory or a storage issue; I seem to have enough of both. Here is an example from a PostgreSQL database:

airs_prod_postgres | 2021-10-05 10:18:24.313 UTC [1] LOG:  starting PostgreSQL 13.4 (Debian 13.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
airs_prod_postgres | 2021-10-05 10:18:24.313 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432                                                                                     
airs_prod_postgres | 2021-10-05 10:18:24.313 UTC [1] LOG:  listening on IPv6 address "::", port 5432                                                                                          
airs_prod_postgres | 2021-10-05 10:18:24.368 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"                                                                       
airs_prod_postgres | 2021-10-05 10:18:24.466 UTC [26] LOG:  database system was shut down at 2021-10-05 10:17:53 UTC                                                                          
airs_prod_postgres | 2021-10-05 10:18:24.593 UTC [1] LOG:  database system is ready to accept connections                                                                                     
airs_prod_postgres | 2021-10-05 10:21:42.549 UTC [114] ERROR:  duplicate key value violates unique constraint "cron_name_unique"                                                              
airs_prod_postgres | 2021-10-05 10:21:42.549 UTC [114] DETAIL:  Key (name)=(CORE__MQTT_CACHE_CLEAR) already exists.                                                                           
airs_prod_postgres | 2021-10-05 10:21:42.549 UTC [114] STATEMENT:  insert into "Cron" ("Plugin", "data", "day_of_month", "day_of_week", "hour", "id", "minute", "month", "name", "second", "type") values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
airs_prod_postgres | 2021-10-05 10:23:12.765 UTC [1] LOG:  received fast shutdown request                                                                                                     
airs_prod_postgres | 2021-10-05 10:23:12.776 UTC [1] LOG:  aborting any active transactions                                                                                                   
airs_prod_postgres | 2021-10-05 10:23:12.777 UTC [122] FATAL:  terminating connection due to administrator command                                                                            
airs_prod_postgres | 2021-10-05 10:23:12.777 UTC [157] FATAL:  terminating connection due to administrator command                                                                            
airs_prod_postgres | 2021-10-05 10:23:12.778 UTC [158] FATAL:  terminating autovacuum process due to administrator command                                                                    
airs_prod_postgres | 2021-10-05 10:23:12.778 UTC [121] FATAL:  terminating connection due to administrator command                                                                            
airs_prod_postgres | 2021-10-05 10:23:12.781 UTC [120] FATAL:  terminating connection due to administrator command                                                                            
airs_prod_postgres | 2021-10-05 10:23:12.783 UTC [118] FATAL:  terminating connection due to administrator command                                                                            
airs_prod_postgres | 2021-10-05 10:23:12.795 UTC [116] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.796 UTC [114] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.798 UTC [91] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.799 UTC [83] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.801 UTC [82] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.813 UTC [43] FATAL:  terminating connection due to administrator command
airs_prod_postgres | 2021-10-05 10:23:12.923 UTC [1] LOG:  background worker "logical replication launcher" (PID 32) exited with exit code 1
airs_prod_postgres | 2021-10-05 10:23:12.949 UTC [27] LOG:  shutting down                      
airs_prod_postgres | 2021-10-05 10:23:13.123 UTC [1] LOG:  database system is shut down

And this is happening to all of my containers. Any idea why?
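Since the "received fast shutdown request" line means something external asked PostgreSQL to stop, here is a rough sketch of how I'm checking whether Docker killed the container (e.g. an OOM kill) or whether the Docker daemon itself restarted; the container name is from my setup, adjust for yours:

# Did Docker record an OOM kill or a non-zero exit code for the container?
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}} {{.State.FinishedAt}}' airs_prod_postgres

# Watch container stop/start/die events live across all containers
docker events --filter 'type=container'

# Check whether the Docker daemon itself restarted around that time
sudo journalctl -u docker --since "2021-10-05 10:20" --until "2021-10-05 10:25"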

Another container's logs (nginx):

client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
client_1  | 2021/10/05 10:23:13 [notice] 31#31: gracefully shutting down
client_1  | 2021/10/05 10:23:13 [notice] 31#31: exiting
client_1  | 2021/10/05 10:23:13 [notice] 31#31: exit
client_1  | 2021/10/05 10:23:13 [notice] 33#33: gracefully shutting down
client_1  | 2021/10/05 10:23:13 [notice] 33#33: exiting
client_1  | 2021/10/05 10:23:13 [notice] 33#33: exit
client_1  | 2021/10/05 10:23:13 [notice] 34#34: gracefully shutting down
client_1  | 2021/10/05 10:23:13 [notice] 34#34: exiting
client_1  | 2021/10/05 10:23:13 [notice] 34#34: exit
client_1  | 2021/10/05 10:23:13 [notice] 32#32: gracefully shutting down
client_1  | 2021/10/05 10:23:13 [notice] 32#32: exiting
client_1  | 2021/10/05 10:23:13 [notice] 32#32: exit
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 17 (SIGCHLD) received from 31
client_1  | 2021/10/05 10:23:13 [notice] 1#1: worker process 31 exited with code 0
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 29 (SIGIO) received
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 17 (SIGCHLD) received from 32
client_1  | 2021/10/05 10:23:13 [notice] 1#1: worker process 32 exited with code 0
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 29 (SIGIO) received
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 17 (SIGCHLD) received from 34
client_1  | 2021/10/05 10:23:13 [notice] 1#1: worker process 34 exited with code 0
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 29 (SIGIO) received
client_1  | 2021/10/05 10:23:13 [notice] 1#1: signal 17 (SIGCHLD) received from 33
client_1  | 2021/10/05 10:23:13 [notice] 1#1: worker process 33 exited with code 0
client_1  | 2021/10/05 10:23:13 [notice] 1#1: exit
client_1  | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
client_1  | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
client_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
client_1  | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
client_1  | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
client_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
client_1  | 2021/10/05 10:24:10 [notice] 1#1: using the "epoll" event method
client_1  | 2021/10/05 10:24:10 [notice] 1#1: nginx/1.20.1
client_1  | 2021/10/05 10:24:10 [notice] 1#1: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1) 
client_1  | 2021/10/05 10:24:10 [notice] 1#1: OS: Linux 4.18.0-305.19.1.el8_4.x86_64
client_1  | 2021/10/05 10:24:10 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
client_1  | 2021/10/05 10:24:10 [notice] 1#1: start worker processes
client_1  | 2021/10/05 10:24:10 [notice] 1#1: start worker process 31
client_1  | 2021/10/05 10:24:10 [notice] 1#1: start worker process 32
client_1  | 2021/10/05 10:24:10 [notice] 1#1: start worker process 33
client_1  | 2021/10/05 10:24:10 [notice] 1#1: start worker process 34
client_1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
client_1  | /docker-entrypoint.sh: Configuration complete; ready for start up

New finding: it seems to happen every day at around 10:30-ish, but not consistently. It looks like some sort of cron job, so I'll need to look into it. Sadly, there's nothing in my cron jobs.
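For completeness, user crontabs are not the only place scheduled jobs can hide; a quick sketch of the other places worth checking (on systemd hosts, automatic-update jobs usually run from systemd timers rather than cron):

# Per-user and system-wide cron entries
crontab -l
sudo cat /etc/crontab
sudo ls /etc/cron.d /etc/cron.hourly /etc/cron.daily

# systemd timers (automatic updates, e.g. dnf-automatic or apt-daily, live here)
systemctl list-timers --all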

2 Answers


  1. Chosen as BEST ANSWER

    I am an idiot. I had automatic updates being applied daily at 6 in the morning. I'm gonna leave this posted even though it completely destroys my ego. You should definitely check whether your updates will force-reboot your system and save yourself many, many hours.
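    For reference, here is roughly how to check; the paths assume a RHEL/CentOS 8 host (which the el8 kernel string in the nginx log above suggests), with the Debian/Ubuntu equivalent for comparison:

    # RHEL/CentOS 8: is dnf-automatic scheduled, and is it applying updates?
    systemctl list-timers | grep -i dnf
    grep apply_updates /etc/dnf/automatic.conf

    # Debian/Ubuntu: is unattended-upgrades configured to reboot automatically?
    grep -r Automatic-Reboot /etc/apt/apt.conf.d/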


  2. I encountered this problem recently, and in my case the cause was that I had installed Docker with snap instead of the way instructed on the official website.
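    If you want to check whether this applies to you, a rough sketch (the reinstall step uses Docker's convenience script; adapt it to the official instructions for your distro):

    # Is Docker installed as a snap?
    snap list docker
    which docker   # a snap install typically resolves under /snap/bin

    # Remove the snap and reinstall from the official source
    sudo snap remove docker
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh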
