
I have an Ubuntu EC2 instance running Nginx and Jenkins. There is no space left to install updates, and every command I try to free up space fails. In addition, when I try to reach Jenkins I get a 502 Bad Gateway error.

When I run sudo apt-get update I get a long list of errors, but the main one that stood out was E: Write error - write (28: No space left on device)

I have no idea why there is no more space or what caused it, but df -h gives the following output:

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           394M  732K  393M   1% /run
/dev/xvda1       15G   15G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop1       56M   56M     0 100% /snap/core18/1988
/dev/loop3       34M   34M     0 100% /snap/amazon-ssm-agent/3552
/dev/loop0      100M  100M     0 100% /snap/core/10958
/dev/loop2       56M   56M     0 100% /snap/core18/1997
/dev/loop4      100M  100M     0 100% /snap/core/10908
/dev/loop5       33M   33M     0 100% /snap/amazon-ssm-agent/2996
tmpfs           394M     0  394M   0% /run/user/1000

I tried to free up the space by running sudo apt-get autoremove and it gave me E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

I ran sudo dpkg --configure -a and got dpkg: error: failed to write status database record about 'libexpat1-dev:amd64' to '/var/lib/dpkg/status': No space left on device

Lastly, I ran sudo apt-get clean; sudo apt-get autoclean and it gave me the following errors:

Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.

Any help to free up space and get the server running again will be greatly appreciated.

2 Answers


  1. For a server running Jenkins & Nginx (rather than just a test box), you need to manage disk space more carefully. The following are a few possible ways to fix your issue.

    1. Expand the existing EC2 root EBS volume from 15 GB to a larger size in the AWS EBS console (see the sketches after this list).

      OR

    2. Find the files that are consuming the most disk space and remove them if they are not required. Log files are the most likely culprit. You can run the following commands to find which locations are using the most space:

    cd /
    # show the size of each top-level directory; grep G keeps only the GB-sized entries
    sudo du -sch * 2>/dev/null | grep G
    

    OR

    3. Add an extra EBS volume to your instance and mount it at the Jenkins home directory, or at whichever location is using the most disk space (see the sketches after this list).
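
    As a rough sketch of option 1, assuming the root filesystem is ext4 on /dev/xvda1 (as in the df output above) and using a placeholder volume ID, the resize can also be done from the command line after growing the volume:

    # grow the EBS volume itself (vol-0123456789abcdef0 is a placeholder, use your own volume ID)
    aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30

    # once the modification reaches the "optimizing" or "completed" state,
    # grow partition 1 of the disk and then the ext4 filesystem on it
    sudo growpart /dev/xvda 1
    sudo resize2fs /dev/xvda1

    df -h /   # verify the extra space is visible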
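
    And a minimal sketch of option 3, assuming the new volume shows up as /dev/xvdf and Jenkins uses the default home /var/lib/jenkins (check both with lsblk and your Jenkins config), after attaching the volume from the EC2 console:

    lsblk                                    # confirm the new device name
    sudo mkfs -t ext4 /dev/xvdf              # format the new volume (wipes anything on it)
    sudo systemctl stop jenkins
    sudo mkdir -p /mnt/jenkins-new
    sudo mount /dev/xvdf /mnt/jenkins-new
    sudo rsync -a /var/lib/jenkins/ /mnt/jenkins-new/    # copy the existing Jenkins home
    sudo umount /mnt/jenkins-new
    sudo mount /dev/xvdf /var/lib/jenkins    # remount the volume over the old home
    sudo systemctl start jenkins
    # add a matching /etc/fstab entry so the mount survives a reboot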
  2. In my case, I have an app with Nginx, PostgreSQL and Gunicorn, all containerized. I followed these steps to solve my issue:

    1. I tried to figure out which files were consuming the most storage, using the command below:
    # list every file larger than 10 MB, with its size and owner
    sudo find / -type f -size +10M -exec ls -lh {} \;
    
    2. As you can see from the screenshot, it turns out that unused, Docker-related containers and images were the source of the problem.

    (screenshot of the command output omitted)

    3. I then purged all unused, stopped or dangling images with docker system prune -a (see the sketch below).

    I was able to reclaim about 4.4 GB at the end!
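
    For reference, a short sketch of that clean-up, checking what Docker thinks it can reclaim before pruning (the --volumes flag is optional and also deletes unused volumes, so use it with care):

    docker system df                     # show space used by images, containers and volumes
    docker system prune -a               # remove stopped containers, unused networks and unused images
    # docker system prune -a --volumes   # also remove unused volumes (this deletes data)
    df -h /                              # confirm the space was actually freed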
