
I've been having the strangest problem for days now. I took over a company's WordPress website that was originally developed by someone else. The codebase is a mess, but I was able to go through it and make sure it at least works.

The database is huge (70 MB) and the site has a lot of plugin dependencies.

However, the site now generally works without issues, and I'm hosting it on an EC2 instance with a Bitnami WordPress stack.

The weird thing, though, is that every day (this morning, for instance) I check the site and it's down …

Service Unavailable

The server is temporarily unable to service your request due to
maintenance downtime or capacity problems. Please try again later.

Additionally, a 503 Service Unavailable error was encountered while
trying to use an ErrorDocument to handle the request.

When I log in to the server over SSH and try to restart Apache, I get this:

Failed to unmonitor apache: write /var/lib/gonit/state: no space left on device
Syntax OK
/opt/bitnami/apache2/scripts/ctl.sh : apache not running
Syntax OK
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
/opt/bitnami/apache2/scripts/ctl.sh : httpd could not be started
Failed to monitor apache: write /var/lib/gonit/state: no space left on device

This has now happened three times in three days, even though I restored the server from a snapshot onto a 200 GB volume (for testing purposes), and all the site files, including uploads, only take up about 5 GB.

The site is now running on an EC2 instance (t2.medium) with a 200 GB volume, and this morning I can't restart Apache. Yesterday evening, after restoring from the snapshot, the site worked well and normally; it was actually even fast.

I don't know where to start investigating here. What could cause the server to run out of disk space in one night?

Thanks,
Matt


Also, one of the weirdest things: I reset everything yesterday evening from an EC2 snapshot onto a 200 GB volume and attached it to the instance. Everything was working fine. I made some changes to the files, deleted some plugins, and updated some settings.

And it seems all of that is gone now. I'm using an Elastic IP, so I couldn't have connected to the wrong instance or anything like that.

3 Answers


  1. What you need to do is increase the size of the partition on the disk and the size of the file system on that partition. Even though you increased the volume size, those figures stay unchanged, and creating another volume from the snapshot would not help either.
    Check how to do it here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
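
    For reference, on most Linux instances the sequence looks roughly like this (a minimal sketch rather than the full AWS procedure; the device and partition names /dev/xvda and /dev/xvda1 are assumptions, so check yours with lsblk first):

    # the disk should report 200G while the partition may still report 20G
    lsblk
    # grow partition 1 of /dev/xvda to fill the enlarged volume (growpart comes from cloud-utils/cloud-guest-utils)
    sudo growpart /dev/xvda 1
    # grow the ext4 file system to match the new partition size (use xfs_growfs instead for XFS)
    sudo resize2fs /dev/xvda1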

  2. Your df result shows

    Filesystem     1K-blocks       Used  Available Use% Mounted on
    udev             2014560          0    2014560   0% /dev
    tmpfs             404496       5872     398624   2% /run
    /dev/xvda1      20263528   20119048     128096 100% /
    tmpfs            2022464          0    2022464   0% /dev/shm
    tmpfs               5120          0       5120   0% /run/lock
    tmpfs            2022464          0    2022464   0% /sys/fs/cgroup
    /dev/loop0         18432      18432          0 100% /snap/amazon-ssm-agent/1480
    /dev/loop1         91264      91264          0 100% /snap/core/7713
    /dev/loop2         12928      12928          0 100% /snap/amazon-ssm-agent/295
    /dev/loop3         91264      91264          0 100% /snap/core/7917
    tmpfs             404496          0     404496   0% /run/user/1000
    

    where the root volume /dev/xvda1 has only 20 GB and is shown as 100% used, not 200 GB as you mentioned.

    When you increase the volume size while the instance is running, the change is not applied automatically. On your EC2 instance, you have to apply the change to the file system yourself, as follows:

    sudo resize2fs /dev/xvda1
    

    Then check the size of the file system with df -h; you should see it is now 200 GB.
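
    If df -h still shows around 20 GB after that, the partition itself may not have been grown yet; a quick check (assuming the device names from the df output above):

    # the disk (xvda) should show 200G; if the partition (xvda1) still shows 20G,
    # grow the partition first (see answer 1) and then rerun resize2fs
    lsblk /dev/xvda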

  3. Bitnami Engineer here. You will probably need to resize the disk of your instance, but you can investigate the underlying issue later; these commands will show the directories with the largest number of files and how much space each one uses:

    cd /opt/bitnami
    sudo find . -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
    du -h -d 1
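
    If nothing under /opt/bitnami stands out, the same kind of check from the root of the file system can point to the culprit (a minimal sketch; -x keeps du on the root file system and sudo avoids permission errors):

    # disk usage of each top-level directory, largest last
    sudo du -xh -d 1 / | sort -h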
    

    If MySQL is the service that is taking up the most space, you can try adding this line under the [mysqld] block of the /opt/bitnami/mysql/my.cnf configuration file:

    expire_logs_days = 7
    

    That will force MySQL to purge the server's binary logs after 7 days. You will need to restart MySQL after that:

    sudo /opt/bitnami/ctlscript.sh restart mysql
    

    More information here:

    https://community.bitnami.com/t/something-taking-up-space-and-growing/64532/7
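
    If the binary logs have already filled the disk, they can also be purged immediately instead of waiting for the 7-day expiry (a sketch; the root user and its password are whatever your Bitnami stack uses):

    # remove binary logs older than 7 days right away
    mysql -u root -p -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);"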
